query_id (string, length 32) | query (string, length 0-35.7k) | positive_passages (list, 1-7 items) | negative_passages (list, 22-29 items) | subset (string, 2 classes)
---|---|---|---|---
9ed7b32594457fb2694f1f96731a15bd | Switched flux permanent magnet machines — Innovation continues | [
{
"docid": "a2b60ffe1ed8f8bd79363f4c5cff364b",
"text": "The flux-switching permanent-magnet (FSPM) machine is a relatively novel brushless machine having magnets and concentrated windings in the stator instead of rotor, which exhibits inherently sinusoidal back-EMF waveform and high torque capability. However, due to the high airgap flux density produced by magnets and the salient poles in both stator and rotor, the resultant torque ripple is relatively large, which is unfavorable for high performance drive system. In this paper, instead of conventional optimization on the machine itself, a new torque ripple suppression approach is proposed in which a series of specific harmonic currents are added into q-axis reference current, resulting in additional torque components to counteract the fundamental and second-order harmonic components of cogging torque. Both the simulation and experimental results confirm that the proposed approach can effectively suppress the torque ripple. It should be emphasized that this method is applicable to all PM machines having relatively large cogging torque.",
"title": ""
}
] | [
{
"docid": "cb5d0498db49c8421fef279aea69c367",
"text": "The growing commoditization of the underground economy has given rise to malware delivery networks, which charge fees for quickly delivering malware or unwanted software to a large number of hosts. A key method to provide this service is through the orchestration of silent delivery campaigns. These campaigns involve a group of downloaders that receive remote commands and then deliver their payloads without any user interaction. These campaigns can evade detection by relying on inconspicuous downloaders on the client side and on disposable domain names on the server side. We describe Beewolf, a system for detecting silent delivery campaigns from Internet-wide records of download events. The key observation behind our system is that the downloaders involved in these campaigns frequently retrieve payloads in lockstep. Beewolf identifies such locksteps in an unsupervised and deterministic manner, and can operate on streaming data. We utilize Beewolf to study silent delivery campaigns at scale, on a data set of 33.3 million download events. This investigation yields novel findings, e.g. malware distributed through compromised software update channels, a substantial overlap between the delivery ecosystems for malware and unwanted software, and several types of business relationships within these ecosystems. Beewolf achieves over 92% true positives and fewer than 5% false positives. Moreover, Beewolf can detect suspicious downloaders a median of 165 days ahead of existing anti-virus products and payload-hosting domains a median of 196 days ahead of existing blacklists.",
"title": ""
},
{
"docid": "d12a47e1b72532a3c2c028620eba44d6",
"text": "Mel-filter banks are commonly used in speech recognition, as they are motivated from theory related to speech production and perception. While features derived from mel-filter banks are quite popular, we argue that this filter bank is not really an appropriate choice as it is not learned for the objective at hand, i.e. speech recognition. In this paper, we explore replacing the filter bank with a filter bank layer that is learned jointly with the rest of a deep neural network. Thus, the filter bank is learned to minimize cross-entropy, which is more closely tied to the speech recognition objective. On a 50-hour English Broadcast News task, we show that we can achieve a 5% relative improvement in word error rate (WER) using the filter bank learning approach, compared to having a fixed set of filters.",
"title": ""
},
{
"docid": "cd82eb636078b633060a857a4eb2b47b",
"text": "The importance of mobile application specific testing techniques and methods has been attracting much attention of software engineers over the past few years. This is due to the fact that mobile applications are different than traditional web and desktop applications, and more and more they are moving to being used in critical domains. Mobile applications require a different approach to application quality and dependability and require an effective testing approach to build high quality and more reliable software. We performed a systematic mapping study to categorize and to structure the research evidence that has been published in the area of mobile application testing techniques and challenges that they have reported. Seventy nine (79) empirical studies are mapped to a classification schema. Several research gaps are identified and specific key testing issues for practitioners are identified: there is a need for eliciting testing requirements early during development process; the need to conduct research in real-world development environments; specific testing techniques targeting application life-cycle conformance and mobile services testing; and comparative studies for security and usability testing.",
"title": ""
},
{
"docid": "c059d43c51ec35ec7949b0a10d718b6f",
"text": "The problem of signal recovery from its Fourier transform magnitude is of paramount importance in various fields of engineering and has been around for more than 100 years. Due to the absence of phase information, some form of additional information is required in order to be able to uniquely identify the signal of interest. In this paper, we focus our attention on discrete-time sparse signals (of length <inline-formula><tex-math notation=\"LaTeX\">$n$ </tex-math></inline-formula>). We first show that if the discrete Fourier transform dimension is greater than or equal to <inline-formula><tex-math notation=\"LaTeX\">$2n$</tex-math></inline-formula>, then almost all signals with <italic> aperiodic</italic> support can be uniquely identified by their Fourier transform magnitude (up to time shift, conjugate flip, and global phase). Then, we develop an efficient two-stage sparse-phase retrieval algorithm (TSPR), which involves: identifying the support, i.e., the locations of the nonzero components, of the signal using a combinatorial algorithm; and identifying the signal values in the support using a convex algorithm. We show that TSPR can <italic> provably</italic> recover most <inline-formula><tex-math notation=\"LaTeX\">$O(n^{1/2-{\\epsilon }})$</tex-math> </inline-formula>-sparse signals (up to a time shift, conjugate flip, and global phase). We also show that, for most <inline-formula><tex-math notation=\"LaTeX\">$O(n^{1/4-{\\epsilon }})$</tex-math></inline-formula>-sparse signals, the recovery is <italic>robust</italic> in the presence of measurement noise. These recovery guarantees are asymptotic in nature. Numerical experiments complement our theoretical analysis and verify the effectiveness of TSPR.",
"title": ""
},
{
"docid": "ac740402c3e733af4d690e34e567fabe",
"text": "We address the problem of semantic segmentation: classifying each pixel in an image according to the semantic class it belongs to (e.g. dog, road, car). Most existing methods train from fully supervised images, where each pixel is annotated by a class label. To reduce the annotation effort, recently a few weakly supervised approaches emerged. These require only image labels indicating which classes are present. Although their performance reaches a satisfactory level, there is still a substantial gap between the accuracy of fully and weakly supervised methods. We address this gap with a novel active learning method specifically suited for this setting. We model the problem as a pairwise CRF and cast active learning as finding its most informative nodes. These nodes induce the largest expected change in the overall CRF state, after revealing their true label. Our criterion is equivalent to maximizing an upper-bound on accuracy gain. Experiments on two data-sets show that our method achieves 97% percent of the accuracy of the corresponding fully supervised model, while querying less than 17% of the (super-)pixel labels.",
"title": ""
},
{
"docid": "65d84bb6907a34f8bc8c4b3d46706e53",
"text": "This study analyzes the correlation between video game usage and academic performance. Scholastic Aptitude Test (SAT) and grade-point average (GPA) scores were used to gauge academic performance. The amount of time a student spends playing video games has a negative correlation with students' GPA and SAT scores. As video game usage increases, GPA and SAT scores decrease. A chi-squared analysis found a p value for video game usage and GPA was greater than a 95% confidence level (0.005 < p < 0.01). This finding suggests that dependence exists. SAT score and video game usage also returned a p value that was significant (0.01 < p < 0.05). Chi-squared results were not significant when comparing time spent studying and an individual's SAT score. This research suggests that video games may have a detrimental effect on an individual's GPA and possibly on SAT scores. Although these results show statistical dependence, proving cause and effect remains difficult, since SAT scores represent a single test on a given day. The effects of video games maybe be cumulative; however, drawing a conclusion is difficult because SAT scores represent a measure of general knowledge. GPA versus video games is more reliable because both involve a continuous measurement of engaged activity and performance. The connection remains difficult because of the complex nature of student life and academic performance. Also, video game usage may simply be a function of specific personality types and characteristics.",
"title": ""
},
{
"docid": "8020c67dd790bcff7aea0e103ea672f1",
"text": "Recent efforts in satellite communication research considered the exploitation of higher frequency bands as a valuable alternative to conventional spectrum portions. An example of this can be provided by the W-band (70-110 GHz). Recently, a scientific experiment carried on by the Italian Space Agency (ASI), namely the DAVID-DCE experiment, was aimed at exploring the technical feasibility of the exploitation of the W-band for broadband networking applications. Some preliminary results of DAVID research activities pointed out that phase noise and high Doppler-shift can severely compromise the efficiency of the modulation system, particularly for what concerns the aspects related to the carrier recovery. This problem becomes very critical when the use of spectrally efficient M-ary modulations is considered in order to profitably exploit the large amount of bandwidth available in the W-band. In this work, a novel carrier recovery algorithm has been proposed for a 16-QAM modulation and tested, considering the presence of phase noise and other kinds of non-ideal behaviors of the communication devices typical of W-band satellite transmission. Simulation results demonstrated the effectiveness the proposed solution for carrier recovery and pointed out the achievable spectral efficiency of the transmission system, considering some constraints about transmitted power, data BER and receiver bandwidth",
"title": ""
},
{
"docid": "0ca476ed89607680399604b39d76185b",
"text": "Honeybee swarms and complex brains show many parallels in how they make decisions. In both, separate populations of units (bees or neurons) integrate noisy evidence for alternatives, and, when one population exceeds a threshold, the alternative it represents is chosen. We show that a key feature of a brain--cross inhibition between the evidence-accumulating populations--also exists in a swarm as it chooses its nesting site. Nest-site scouts send inhibitory stop signals to other scouts producing waggle dances, causing them to cease dancing, and each scout targets scouts' reporting sites other than her own. An analytic model shows that cross inhibition between populations of scout bees increases the reliability of swarm decision-making by solving the problem of deadlock over equal sites.",
"title": ""
},
{
"docid": "bd820eea00766190675cd3e8b89477f2",
"text": "Mobile Edge Computing (MEC), a new concept that emerged about a year ago, integrating the IT and the Telecom worlds will have a great impact on the openness of the Telecom market. Furthermore, the virtualization revolution that has enabled the Cloud computing success will benefit the Telecom domain, which in turn will be able to support the IaaS (Infrastructure as a Service). The main objective of MEC solution is the export of some Cloud capabilities to the user's proximity decreasing the latency, augmenting the available bandwidth and decreasing the load on the core network. On the other hand, the Internet of Things (IoT), the Internet of the future, has benefited from the proliferation in the mobile phones' usage. Many mobile applications have been developed to connect a world of things (wearables, home automation systems, sensors, RFID tags etc.) to the Internet. Even if it is not a complete solution for a scalable IoT architecture but the time sensitive IoT applications (e-healthcare, real time monitoring, etc.) will profit from the MEC architecture. Furthermore, IoT can extend this paradigm to other areas (e.g. Vehicular Ad-hoc NETworks) with the use of Software Defined Network (SDN) orchestration to cope with the challenges hindering the IoT real deployment, as we will illustrate in this paper.",
"title": ""
},
{
"docid": "e4dd72a52d4961f8d4d8ee9b5b40d821",
"text": "Social media users spend several hours a day to read, post and search for news on microblogging platforms. Social media is becoming a key means for discovering news. However, verifying the trustworthiness of this information is becoming even more challenging. In this study, we attempt to address the problem of rumor detection and belief investigation on Twitter. Our definition of rumor is an unverifiable statement, which spreads misinformation or disinformation. We adopt a supervised rumors classification task using the standard dataset. By employing the Tweet Latent Vector (TLV) feature, which creates a 100-d vector representative of each tweet, we increased the rumor retrieval task precision up to 0.972. We also introduce the belief score and study the belief change among the rumor posters between 2010 and 2016.",
"title": ""
},
{
"docid": "b9720d1350bf89c8a94bb30276329ce2",
"text": "Generative concept representations have three major advantages over discriminative ones: they can represent uncertainty, they support integration of learning and reasoning, and they are good for unsupervised and semi-supervised learning. We discuss probabilistic and generative deep learning, which generative concept representations are based on, and the use of variational autoencoders and generative adversarial networks for learning generative concept representations, particularly for concepts whose data are sequences, structured data or graphs.",
"title": ""
},
{
"docid": "1eecc45f35f693cddc2b4fe972493396",
"text": "In this paper, we reformulate the conventional 2-D Frangi vesselness measure into a pre-weighted neural network (“Frangi-Net”), and illustrate that the Frangi-Net is equivalent to the original Frangi filter. Furthermore, we show that, as a neural network, Frangi-Net is trainable. We evaluate the proposed method on a set of 45 high resolution fundus images. After fine-tuning, we observe both qualitative and quantitative improvements in the segmentation quality compared to the original Frangi measure, with an increase up to 17% in F1 score.",
"title": ""
},
{
"docid": "e471e41553bf7c229a38f3d226ff8a28",
"text": "Large AC machines are sometimes fed by multiple inverters. This paper presents the complete steady-state analysis of the PM synchronous machine with multiplex windings, suitable for driving by multiple independent inverters. Machines with 4, 6 and 9 phases are covered in detail. Particular attention is given to the magnetic interactions not only between individual phases, but between channels or groups of phases. This is of interest not only for determining performance and designing control systems, but also for analysing fault tolerance. It is shown how to calculate the necessary self- and mutual inductances and how to reduce them to a compact dq-axis model without loss of detail.",
"title": ""
},
{
"docid": "d48430f65d844c92661d3eb389cdb2f2",
"text": "In organizations that use DevOps practices, software changes can be deployed as fast as 500 times or more per day. Without adequate involvement of the security team, rapidly deployed software changes are more likely to contain vulnerabilities due to lack of adequate reviews. The goal of this paper is to aid software practitioners in integrating security and DevOps by summarizing experiences in utilizing security practices in a DevOps environment. We analyzed a selected set of Internet artifacts and surveyed representatives of nine organizations that are using DevOps to systematically explore experiences in utilizing security practices. We observe that the majority of the software practitioners have expressed the potential of common DevOps activities, such as automated monitoring, to improve the security of a system. Furthermore, organizations that integrate DevOps and security utilize additional security activities, such as security requirements analysis and performing security configurations. Additionally, these teams also have established collaboration between the security team and the development and operations teams.",
"title": ""
},
{
"docid": "186c2180e7b681a350126225cd15ece0",
"text": "Two lactose-fermenting Salmonella typhi strains were isolated from bile and blood specimens of a typhoid fever patient who underwent a cholecystectomy due to cholelithiasis. One lactose-fermenting S. typhi strain was also isolated from a pus specimen which was obtained at the tip of the T-shaped tube withdrawn from the operative wound of the common bile duct of the patient. These three lactose-fermenting isolates: GIFU 11924 from bile, GIFU 11926 from pus, and GIFU 11927 from blood, were phenotypically identical to the type strain (GIFU 11801 = ATCC 19430 = NCTC 8385) of S. typhi, except that the three strains fermented lactose and failed to blacken the butt of Kligler iron agar or triple sugar iron agar medium. All three lactose-fermenting strains were resistant to chloramphenicol, ampicillin, sulfomethoxazole, trimethoprim, gentamicin, cephaloridine, and four other antimicrobial agents. The type strain was uniformly susceptible to these 10 drugs. The strain GIFU 11925, a lactose-negative dissociant from strain GIFU 11926, was also susceptible to these drugs, with the sole exception of chloramphenicol (minimal inhibitory concentration, 100 micrograms/ml).",
"title": ""
},
{
"docid": "5cfc2b3a740d0434cf0b3c2812bd6e7a",
"text": "Well, someone can decide by themselves what they want to do and need to do but sometimes, that kind of person will need some a logical approach to discrete math references. People with open minded will always try to seek for the new things and information from many sources. On the contrary, people with closed mind will always think that they can do it by their principals. So, what kind of person are you?",
"title": ""
},
{
"docid": "5ff7a82ec704c8fb5c1aa975aec0507c",
"text": "With the increase of an ageing population and chronic diseases, society becomes more health conscious and patients become “health consumers” looking for better health management. People’s perception is shifting towards patient-centered, rather than the classical, hospital–centered health services which has been propelling the evolution of telemedicine research from the classic e-Health to m-Health and now is to ubiquitous healthcare (u-Health). It is expected that mobile & ubiquitous Telemedicine, integrated with Wireless Body Area Network (WBAN), have a great potential in fostering the provision of next-generation u-Health. Despite the recent efforts and achievements, current u-Health proposed solutions still suffer from shortcomings hampering their adoption today. This paper presents a comprehensive review of up-to-date requirements in hardware, communication, and computing for next-generation u-Health systems. It compares new technological and technical trends and discusses how they address expected u-Health requirements. A thorough survey on various worldwide recent system implementations is presented in an attempt to identify shortcomings in state-of-the art solutions. In particular, challenges in WBAN and ubiquitous computing were emphasized. The purpose of this survey is not only to help beginners with a holistic approach toward understanding u-Health systems but also present to researchers new technological trends and design challenges they have to cope with, while designing such systems.",
"title": ""
},
{
"docid": "cb561e56e60ba0e5eef2034158c544c2",
"text": "Android is a modern and popular software platform for smartphones. Among its predominant features is an advanced security model which is based on application-oriented mandatory access control and sandboxing. This allows developers and users to restrict the execution of an application to the privileges it has (mandatorily) assigned at installation time. The exploitation of vulnerabilities in program code is hence believed to be confined within the privilege boundaries of an application’s sandbox. However, in this paper we show that a privilege escalation attack is possible. We show that a genuine application exploited at runtime or a malicious application can escalate granted permissions. Our results immediately imply that Android’s security model cannot deal with a transitive permission usage attack and Android’s sandbox model fails as a last resort against malware and sophisticated runtime attacks.",
"title": ""
},
{
"docid": "3fd551696803695056dd759d8f172779",
"text": "The aim of this research essay is to examine the structural nature of theory in Information Systems. Despite the impor tance of theory, questions relating to its form and structure are neglected in comparison with questions relating to episte mology. The essay addresses issues of causality, explanation, prediction, and generalization that underlie an understanding of theory. A taxonomy is proposed that classifies information systems theories with respect to the manner in which four central goals are addressed: analysis, explanation, predic tion, and prescription. Five interrelated types of theory are distinguished: (I) theory for analyzing, (2) theory for ex plaining, (3) theory for predicting, (4) theory for explaining and predicting, and (5) theory for design and action. Examples illustrate the nature of each theory type. The appli cability of the taxonomy is demonstrated by classifying a sample of journal articles. The paper contributes by showing that multiple views of theory exist and by exposing the assumptions underlying different viewpoints. In addition, it is suggested that the type of theory under development can influence the choice of an epistemological approach. Support Allen Lee was the accepting senior editor for this paper. M. Lynne Markus, Michael D. Myers, and Robert W. Zmud served as reviewers. is given for the legitimacy and value of each theory type. The building of integrated bodies of theory that encompass all theory types is advocated.",
"title": ""
},
{
"docid": "9f34152d5dd13619d889b9f6e3dfd5c3",
"text": "Nichols, M. (2003). A theory for eLearning. Educational Technology & Society, 6(2), 1-10, Available at http://ifets.ieee.org/periodical/6-2/1.html ISSN 1436-4522. © International Forum of Educational Technology & Society (IFETS). The authors and the forum jointly retain the copyright of the articles. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear the full citation on the first page. Copyrights for components of this work owned by others than IFETS must be honoured. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from the editors at [email protected]. A theory for eLearning",
"title": ""
}
] | scidocsrr |
40c24a69387dd3269018b94f2ee88032 | University of Mannheim @ CLSciSumm-17: Citation-Based Summarization of Scientific Articles Using Semantic Textual Similarity | [
{
"docid": "16de36d6bf6db7c294287355a44d0f61",
"text": "The Computational Linguistics (CL) Summarization Pilot Task was created to encourage a community effort to address the research problem of summarizing research articles as “faceted summaries” in the domain of computational linguistics. In this pilot stage, a handannotated set of citing papers was provided for ten reference papers to help in automating the citation span and discourse facet identification problems. This paper details the corpus construction efforts by the organizers and the participating teams, who also participated in the task-based evaluation. The annotated development corpus used for this pilot task is publicly available at: https://github.com/WING-",
"title": ""
},
{
"docid": "ce2ef27f032d30ce2bc6aa5509a58e49",
"text": "Bibliometric measures are commonly used to estimate the popularity and the impact of published research. Existing bibliometric measures provide “quantitative” indicators of how good a published paper is. This does not necessarily reflect the “quality” of the work presented in the paper. For example, when hindex is computed for a researcher, all incoming citations are treated equally, ignoring the fact that some of these citations might be negative. In this paper, we propose using NLP to add a “qualitative” aspect to biblometrics. We analyze the text that accompanies citations in scientific articles (which we term citation context). We propose supervised methods for identifying citation text and analyzing it to determine the purpose (i.e. author intention) and the polarity (i.e. author sentiment) of citation.",
"title": ""
}
] | [
{
"docid": "4dcdb2520ec5f9fc9c32f2cbb343808c",
"text": "Shannon’s mathematical theory of communication defines fundamental limits on how much information can be transmitted between the different components of any man-made or biological system. This paper is an informal but rigorous introduction to the main ideas implicit in Shannon’s theory. An annotated reading list is provided for further reading.",
"title": ""
},
{
"docid": "6dbf49c714f6e176273317d4274b93de",
"text": "Categorical compositional distributional model of [9] sug gests a way to combine grammatical composition of the formal, type logi cal models with the corpus based, empirical word representations of distribut ional semantics. This paper contributes to the project by expanding the model to al so capture entailment relations. This is achieved by extending the representatio s of words from points in meaning space to density operators, which are probabilit y d stributions on the subspaces of the space. A symmetric measure of similarity an d an asymmetric measure of entailment is defined, where lexical entailment i s measured using von Neumann entropy, the quantum variant of Kullback-Leibl er divergence. Lexical entailment, combined with the composition map on wo rd representations, provides a method to obtain entailment relations on the leve l of sentences. Truth theoretic and corpus-based examples are provided.",
"title": ""
},
{
"docid": "d350335bab7278f5c8c0d9ceb0e6b50b",
"text": "New remote sensing sensors now acquire high spatial and spectral Satellite Image Time Series (SITS) of the world. These series of images are a key component of classification systems that aim at obtaining up-to-date and accurate land cover maps of the Earth’s surfaces. More specifically, the combination of the temporal, spectral and spatial resolutions of new SITS makes possible to monitor vegetation dynamics. Although traditional classification algorithms, such as Random Forest (RF), have been successfully applied for SITS classification, these algorithms do not make the most of the temporal domain. Conversely, some approaches that take into account the temporal dimension have recently been tested, especially Recurrent Neural Networks (RNNs). This paper proposes an exhaustive study of another deep learning approaches, namely Temporal Convolutional Neural Networks (TempCNNs) where convolutions are applied in the temporal dimension. The goal is to quantitatively and qualitatively evaluate the contribution of TempCNNs for SITS classification. This paper proposes a set of experiments performed on one million time series extracted from 46 Formosat-2 images. The experimental results show that TempCNNs are more accurate than RF and RNNs, that are the current state of the art for SITS classification. We also highlight some differences with results obtained in computer vision, e.g. about pooling layers. Moreover, we provide some general guidelines on the network architecture, common regularization mechanisms, and hyper-parameter values such as batch size. Finally, we assess the visual quality of the land cover maps produced by TempCNNs.",
"title": ""
},
{
"docid": "4db9cf56991edae0f5ca34546a8052c4",
"text": "This chapter presents a survey of interpolation and resampling techniques in the context of exact, separable interpolation of regularly sampled data. In this context, the traditional view of interpolation is to represent an arbitrary continuous function as a discrete sum of weighted and shifted synthesis functions—in other words, a mixed convolution equation. An important issue is the choice of adequate synthesis functions that satisfy interpolation properties. Examples of finite-support ones are the square pulse (nearest-neighbor interpolation), the hat function (linear interpolation), the cubic Keys' function, and various truncated or windowed versions of the sinc function. On the other hand, splines provide examples of infinite-support interpolation functions that can be realized exactly at a finite, surprisingly small computational cost. We discuss implementation issues and illustrate the performance of each synthesis function. We also highlight several artifacts that may arise when performing interpolation, such as ringing, aliasing, blocking and blurring. We explain why the approximation order inherent in the synthesis function is important to limit these interpolation artifacts, which motivates the use of splines as a tunable way to keep them in check without any significant cost penalty. I. I NTRODUCTION Interpolation is a technique that pervades many an application. Interpolation is almost never the goal in itself, yet it affects both the desired results and the ways to obtain them. Notwithstanding its nearly universal relevance, some authors give it less importance than it deserves, perhaps because considerations on interpolation are felt as being paltry when compared to the description of a more inspiring grand scheme of things of some algorithm or method. Due to this indifference, it appears as if the basic principles that underlie interpolation might be sometimes cast aside, or even misunderstood. The goal of this chapter is to refresh the notions encountered in classical interpolation, as well as to introduce the reader to more general approaches. 1.1. Definition What is interpolation? Several answers coexist. One of them defines interpolation as an informed estimate of the unknown [1]. We prefer the following—admittedly less concise—definition: modelbased recovery of continuous data from discrete data within a known range of abscissa. The reason for this preference is to allow for a clearer distinction between interpolation and extrapolation. The former postulates the existence of a known range where the model applies, and asserts that the deterministicallyrecovered continuous data is entirely described by the discrete data, while the latter authorizes the use of the model outside of the known range, with the implicit assumption that the model is \"good\" near data samples, and possibly less good elsewhere. Finally, the three most important hypothesis for interpolation are:",
"title": ""
},
{
"docid": "25bd9169c68ff39ee3a7edbdb65f1aa2",
"text": "Social networks such as Twitter and Facebook are important and widely used communication environments that exhibit scale, complexity, node interaction, and emergent behavior. In this paper, we analyze emergent behavior in Twitter and propose a definition of emergent behavior focused on the pervasiveness of a topic within a community. We extend an existing stochastic model for user behavior, focusing on advocate-follower relationships. The new user posting model includes retweets, replies, and mentions as user responses. To capture emergence, we propose a RPBS (Rising, Plateau, Burst and Stabilization) topic pervasiveness model with a new metric that captures how frequent and in what form the community is talking about a particular topic. Our initial validation compares our model with four Twitter datasets. Our extensive experimental analysis allows us to explore several “what-if” scenarios with respect to topic and knowledge sharing, showing how a pervasive topic evolves given various popularity scenarios.",
"title": ""
},
{
"docid": "e9f9d022007833ab7ae928619641e1b1",
"text": "BACKGROUND\nDissemination and implementation of health care interventions are currently hampered by the variable quality of reporting of implementation research. Reporting of other study types has been improved by the introduction of reporting standards (e.g. CONSORT). We are therefore developing guidelines for reporting implementation studies (StaRI).\n\n\nMETHODS\nUsing established methodology for developing health research reporting guidelines, we systematically reviewed the literature to generate items for a checklist of reporting standards. We then recruited an international, multidisciplinary panel for an e-Delphi consensus-building exercise which comprised an initial open round to revise/suggest a list of potential items for scoring in the subsequent two scoring rounds (scale 1 to 9). Consensus was defined a priori as 80% agreement with the priority scores of 7, 8, or 9.\n\n\nRESULTS\nWe identified eight papers from the literature review from which we derived 36 potential items. We recruited 23 experts to the e-Delphi panel. Open round comments resulted in revisions, and 47 items went forward to the scoring rounds. Thirty-five items achieved consensus: 19 achieved 100% agreement. Prioritised items addressed the need to: provide an evidence-based justification for implementation; describe the setting, professional/service requirements, eligible population and intervention in detail; measure process and clinical outcomes at population level (using routine data); report impact on health care resources; describe local adaptations to the implementation strategy and describe barriers/facilitators. Over-arching themes from the free-text comments included balancing the need for detailed descriptions of interventions with publishing constraints, addressing the dual aims of reporting on the process of implementation and effectiveness of the intervention and monitoring fidelity to an intervention whilst encouraging adaptation to suit diverse local contexts.\n\n\nCONCLUSIONS\nWe have identified priority items for reporting implementation studies and key issues for further discussion. An international, multidisciplinary workshop, where participants will debate the issues raised, clarify specific items and develop StaRI standards that fit within the suite of EQUATOR reporting guidelines, is planned.\n\n\nREGISTRATION\nThe protocol is registered with Equator: http://www.equator-network.org/library/reporting-guidelines-under-development/#17 .",
"title": ""
},
{
"docid": "e2cf52f0625af866c8842fb3d5c49d04",
"text": "Human immunodeficiency virus type 1 (HIV-1) can infect nondividing cells via passing through the nuclear pore complex. The nuclear membrane-imbedded protein SUN2 was recently reported to be involved in the nuclear import of HIV-1. Whether SUN1, which shares many functional similarities with SUN2, is involved in this process remained to be explored. Here we report that overexpression of SUN1 specifically inhibited infection by HIV-1 but not that by simian immunodeficiency virus (SIV) or murine leukemia virus (MLV). Overexpression of SUN1 did not affect reverse transcription but led to reduced accumulation of the 2-long-terminal-repeat (2-LTR) circular DNA and integrated viral DNA, suggesting a block in the process of nuclear import. HIV-1 CA was mapped as a determinant for viral sensitivity to SUN1. Treatment of SUN1-expressing cells with cyclosporine (CsA) significantly reduced the sensitivity of the virus to SUN1, and an HIV-1 mutant containing CA-G89A, which does not interact with cyclophilin A (CypA), was resistant to SUN1 overexpression. Downregulation of endogenous SUN1 inhibited the nuclear entry of the wild-type virus but not that of the G89A mutant. These results indicate that SUN1 participates in the HIV-1 nuclear entry process in a manner dependent on the interaction of CA with CypA.IMPORTANCE HIV-1 infects both dividing and nondividing cells. The viral preintegration complex (PIC) can enter the nucleus through the nuclear pore complex. It has been well known that the viral protein CA plays an important role in determining the pathways by which the PIC enters the nucleus. In addition, the interaction between CA and the cellular protein CypA has been reported to be important in the selection of nuclear entry pathways, though the underlying mechanisms are not very clear. Here we show that both SUN1 overexpression and downregulation inhibited HIV-1 nuclear entry. CA played an important role in determining the sensitivity of the virus to SUN1: the regulatory activity of SUN1 toward HIV-1 relied on the interaction between CA and CypA. These results help to explain how SUN1 is involved in the HIV-1 nuclear entry process.",
"title": ""
},
{
"docid": "345e46da9fc01a100f10165e82d9ca65",
"text": "We present a new theoretical framework for analyzing and learning artificial neural networks. Our approach simultaneously and adaptively learns both the structure of the network as well as its weights. The methodology is based upon and accompanied by strong data-dependent theoretical learning guarantees, so that the final network architecture provably adapts to the complexity of any given problem.",
"title": ""
},
{
"docid": "fceb43462f77cf858ef9747c1c5f0728",
"text": "MapReduce has become a dominant parallel computing paradigm for big data, i.e., colossal datasets at the scale of tera-bytes or higher. Ideally, a MapReduce system should achieve a high degree of load balancing among the participating machines, and minimize the space usage, CPU and I/O time, and network transfer at each machine. Although these principles have guided the development of MapReduce algorithms, limited emphasis has been placed on enforcing serious constraints on the aforementioned metrics simultaneously. This paper presents the notion of minimal algorithm, that is, an algorithm that guarantees the best parallelization in multiple aspects at the same time, up to a small constant factor. We show the existence of elegant minimal algorithms for a set of fundamental database problems, and demonstrate their excellent performance with extensive experiments.",
"title": ""
},
{
"docid": "3bf37b20679ca6abd022571e3356e95d",
"text": "OBJECTIVE\nOur goal is to create an ontology that will allow data integration and reasoning with subject data to classify subjects, and based on this classification, to infer new knowledge on Autism Spectrum Disorder (ASD) and related neurodevelopmental disorders (NDD). We take a first step toward this goal by extending an existing autism ontology to allow automatic inference of ASD phenotypes and Diagnostic & Statistical Manual of Mental Disorders (DSM) criteria based on subjects' Autism Diagnostic Interview-Revised (ADI-R) assessment data.\n\n\nMATERIALS AND METHODS\nKnowledge regarding diagnostic instruments, ASD phenotypes and risk factors was added to augment an existing autism ontology via Ontology Web Language class definitions and semantic web rules. We developed a custom Protégé plugin for enumerating combinatorial OWL axioms to support the many-to-many relations of ADI-R items to diagnostic categories in the DSM. We utilized a reasoner to infer whether 2642 subjects, whose data was obtained from the Simons Foundation Autism Research Initiative, meet DSM-IV-TR (DSM-IV) and DSM-5 diagnostic criteria based on their ADI-R data.\n\n\nRESULTS\nWe extended the ontology by adding 443 classes and 632 rules that represent phenotypes, along with their synonyms, environmental risk factors, and frequency of comorbidities. Applying the rules on the data set showed that the method produced accurate results: the true positive and true negative rates for inferring autistic disorder diagnosis according to DSM-IV criteria were 1 and 0.065, respectively; the true positive rate for inferring ASD based on DSM-5 criteria was 0.94.\n\n\nDISCUSSION\nThe ontology allows automatic inference of subjects' disease phenotypes and diagnosis with high accuracy.\n\n\nCONCLUSION\nThe ontology may benefit future studies by serving as a knowledge base for ASD. In addition, by adding knowledge of related NDDs, commonalities and differences in manifestations and risk factors could be automatically inferred, contributing to the understanding of ASD pathophysiology.",
"title": ""
},
{
"docid": "7e264804d56cab24454c59fe73b51884",
"text": "General Douglas MacArthur remarked that \"old soldiers never die; they just fade away.\" For decades, researchers have concluded that visual working memories, like old soldiers, fade away gradually, becoming progressively less precise as they are retained for longer periods of time. However, these conclusions were based on threshold-estimation procedures in which the complete termination of a memory could artifactually produce the appearance of lower precision. Here, we use a recall-based visual working memory paradigm that provides separate measures of the probability that a memory is available and the precision of the memory when it is available. Using this paradigm, we demonstrate that visual working memory representations may be retained for several seconds with little or no loss of precision, but that they may terminate suddenly and completely during this period.",
"title": ""
},
{
"docid": "d19503f965e637089d9fa200329f1349",
"text": "Almost a half century ago, regular endurance exercise was shown to improve the capacity of skeletal muscle to oxidize substrates to produce ATP for muscle work. Since then, adaptations in skeletal muscle mRNA level were shown to happen with a single bout of exercise. Protein changes occur within days if daily endurance exercise continues. Some of the mRNA and protein changes cause increases in mitochondrial concentrations. One mitochondrial adaptation that occurs is an increase in fatty acid oxidation at a given absolute, submaximal workload. Mechanisms have been described as to how endurance training increases mitochondria. Importantly, Pgc-1α is a master regulator of mitochondrial biogenesis by increasing many mitochondrial proteins. However, not all adaptations to endurance training are associated with increased mitochondrial concentrations. Recent evidence suggests that the energetic demands of muscle contraction are by themselves stronger controllers of body weight and glucose control than is muscle mitochondrial content. Endurance exercise has also been shown to regulate the processes of mitochondrial fusion and fission. Mitophagy removes damaged mitochondria, a process that maintains mitochondrial quality. Skeletal muscle fibers are composed of different phenotypes, which are based on concentrations of mitochondria and various myosin heavy chain protein isoforms. Endurance training at physiological levels increases type IIa fiber type with increased mitochondria and type IIa myosin heavy chain. Endurance training also improves capacity of skeletal muscle blood flow. Endurance athletes possess enlarged arteries, which may also exhibit decreased wall thickness. VEGF is required for endurance training-induced increases in capillary-muscle fiber ratio and capillary density.",
"title": ""
},
{
"docid": "58b957db2e72d76e5ee1fc5102df7dc1",
"text": "This paper presents a novel inverse kinematics solution for robotic arm based on artificial neural network (ANN) architecture. The motion of robotic arm is controlled by the kinematics of ANN. A new artificial neural network approach for inverse kinematics is proposed. The novelty of the proposed ANN is the inclusion of the feedback of current joint angles configuration of robotic arm as well as the desired position and orientation in the input pattern of neural network, while the traditional ANN has only the desired position and orientation of the end effector in the input pattern of neural network. In this paper, a six DOF Denso robotic arm with a gripper is controlled by ANN. The comprehensive experimental results proved the applicability and the efficiency of the proposed approach in robotic motion control. The inclusion of current configuration of joint angles in ANN significantly increased the accuracy of ANN estimation of the joint angles output. The new controller design has advantages over the existing techniques for minimizing the position error in unconventional tasks and increasing the accuracy of ANN in estimation of robot's joint angles.",
"title": ""
},
{
"docid": "ba966c2fc67b88d26a3030763d56ed1a",
"text": "Design of a long read-range, reconfigurable operating frequency radio frequency identification (RFID) metal tag is proposed in this paper. The antenna structure consists of two nonconnected load bars and two bowtie patches electrically connected through four pairs of vias to a conducting backplane to form a looped-bowtie RFID tag antenna that is suitable for mounting on metallic objects. The design offers more degrees of freedom to tune the input impedance of the proposed antenna. The load bars, which have a cutoff point on each bar, can be used to reconfigure the operating frequency of the tag by exciting any one of the three possible frequency modes; hence, this tag can be used worldwide for the UHF RFID frequency band. Experimental tests show that the maximum read range of the prototype, placed on a metallic object, are found to be 3.0, 3.2, and 3.3 m, respectively, for the three operating modes, which has been tested for an RFID reader with only 0.4 W error interrupt pending register (EIPR). The paper shows that the simulated and measured results are in good agreement with each other.",
"title": ""
},
{
"docid": "84963fdc37a3beb8eebc8d5626b53428",
"text": "A fundamental assumption in software security is that memory contents do not change unless there is a legitimate deliberate modification. Classical fault attacks show that this assumption does not hold if the attacker has physical access. Rowhammer attacks showed that local code execution is already sufficient to break this assumption. Rowhammer exploits parasitic effects in DRAM tomodify the content of a memory cell without accessing it. Instead, other memory locations are accessed at a high frequency. All Rowhammer attacks so far were local attacks, running either in a scripted language or native code. In this paper, we present Nethammer. Nethammer is the first truly remote Rowhammer attack, without a single attacker-controlled line of code on the targeted system. Systems that use uncached memory or flush instructions while handling network requests, e.g., for interaction with the network device, can be attacked using Nethammer. Other systems can still be attacked if they are protected with quality-of-service techniques like Intel CAT. We demonstrate that the frequency of the cache misses is in all three cases high enough to induce bit flips. We evaluated different bit flip scenarios. Depending on the location, the bit flip compromises either the security and integrity of the system and the data of its users, or it can leave persistent damage on the system, i.e., persistent denial of service. We investigated Nethammer on personal computers, servers, and mobile phones. Nethammer is a security landslide, making the formerly local attack a remote attack. With this work we invalidate all defenses and mitigation strategies against Rowhammer build upon the assumption of a local attacker. Consequently, this paradigm shift impacts the security of millions of devices where the attacker is not able to execute attacker-controlled code. Nethammer requires threat models to be re-evaluated for most network-connected systems. We discuss state-of-the-art countermeasures and show that most of them have no effect on our attack, including the targetrow-refresh (TRR) countermeasure of modern hardware. Disclaimer: This work on Rowhammer attacks over the network was conducted independently and unaware of other research groups working on truly remote Rowhammer attacks. Experiments and observations presented in this paper, predate the publication of the Throwhammer attack by Tatar et al. [81]. We will thoroughly study the differences between both papers and compare the advantages and disadvantages in a future version of this paper.",
"title": ""
},
{
"docid": "7d7c596d334153f11098d9562753a1ee",
"text": "The design of systems for intelligent control of urban traffic is important in providing a safe environment for pedestrians and motorists. Artificial neural networks (ANNs) (learning systems) and expert systems (knowledge-based systems) have been extensively explored as approaches for decision making. While the ANNs compute decisions by learning from successfully solved examples, the expert systems rely on a knowledge base developed by human reasoning for decision making. It is possible to integrate the learning abilities of an ANN and the knowledge-based decision-making ability of the expert system. This paper presents a real-time intelligent decision making system, IDUTC, for urban traffic control applications. The system integrates a backpropagation-based ANN that can learn and adapt to the dynamically changing environment and a fuzzy expert system for decision making. The performance of the proposed intelligent decision-making system is evaluated by mapping the the adaptable traffic light control problem. The application is implemented using the ANN approach, the FES approach, and the proposed integrated system approach. The results of extensive simulations using the three approaches indicate that the integrated system provides better performance and leads to a more efficient implementation than the other two approaches.",
"title": ""
},
{
"docid": "8914e1a38db6b47f4705f0c684350d38",
"text": "Style transfer is the task of rephrasing the text to contain specific stylistic properties without changing the intent or affect within the context. This paper introduces a new method for automatic style transfer. We first learn a latent representation of the input sentence which is grounded in a language translation model in order to better preserve the meaning of the sentence while reducing stylistic properties. Then adversarial generation techniques are used to make the output match the desired style. We evaluate this technique on three different style transformations: sentiment, gender and political slant. Compared to two state-of-the-art style transfer modeling techniques we show improvements both in automatic evaluation of style transfer and in manual evaluation of meaning preservation and fluency.",
"title": ""
},
{
"docid": "62d63357923c5a7b1ea21b8448e3cba3",
"text": "This paper presents a monocular and purely vision based pedestrian trajectory tracking and prediction framework with integrated map-based hazard inference. In Advanced Driver Assistance systems research, a lot of effort has been put into pedestrian detection over the last decade, and several pedestrian detection systems are indeed showing impressive results. Considerably less effort has been put into processing the detections further. We present a tracking system for pedestrians, which based on detection bounding boxes tracks pedestrians and is able to predict their positions in the near future. The tracking system is combined with a module which, based on the car's GPS position acquires a map and uses the road information in the map to know where the car can drive. Then the system warns the driver about pedestrians at risk, by combining the information about hazardous areas for pedestrians with a probabilistic position prediction for all observed pedestrians.",
"title": ""
},
{
"docid": "21822a9c37a315e6282200fe605debfe",
"text": "This paper provides a survey on speech recognition and discusses the techniques and system that enables computers to accept speech as input. This paper shows the major developments in the field of speech recognition. This paper highlights the speech recognition techniques and provides a brief description about the four stages in which the speech recognition techniques are classified. In addition, this paper gives a description of four feature extraction techniques: Linear Predictive Coding (LPC), Mel-frequency cepstrum (MFFCs), RASTA filtering and Probabilistic Linear Discriminate Analysis (PLDA). The objective of this paper is to summarize the feature extraction techniques used in speech recognition system.",
"title": ""
},
{
"docid": "732fd5463462d11451d78d97dc821d78",
"text": "Since sensors have limited range and coverage, mobile robots often have to make decisions on where to point their sensors. A good sensing strategy allows a robot to collect information that is useful for its tasks. Most existing solutions to this active sensing problem choose the direction that maximally reduces the uncertainty in a single state variable. In more complex problem domains, however, uncertainties exist in multiple state variables, and they affect the performance of the robot in different ways. The robot thus needs to have more sophisticated sensing strategies in order to decide which uncertainties to reduce, and to make the correct trade-offs. In this work, we apply a least squares reinforcement learning method to solve this problem. We implemented and tested the learning approach in the RoboCup domain, where the robot attempts to reach a ball and accurately kick it into the goal. We present experimental results that suggest our approach is able to learn highly effective sensing strategies.",
"title": ""
}
] | scidocsrr |
02a3b81a7117985ca5b91ab8868070a6 | Towards Neural Theorem Proving at Scale Anonymous | [
{
"docid": "4381ee2e578a640dda05e609ed7f6d53",
"text": "We introduce neural networks for end-to-end differentiable proving of queries to knowledge bases by operating on dense vector representations of symbols. These neural networks are constructed recursively by taking inspiration from the backward chaining algorithm as used in Prolog. Specifically, we replace symbolic unification with a differentiable computation on vector representations of symbols using a radial basis function kernel, thereby combining symbolic reasoning with learning subsymbolic vector representations. By using gradient descent, the resulting neural network can be trained to infer facts from a given incomplete knowledge base. It learns to (i) place representations of similar symbols in close proximity in a vector space, (ii) make use of such similarities to prove queries, (iii) induce logical rules, and (iv) use provided and induced logical rules for multi-hop reasoning. We demonstrate that this architecture outperforms ComplEx, a state-of-the-art neural link prediction model, on three out of four benchmark knowledge bases while at the same time inducing interpretable function-free first-order logic rules.",
"title": ""
},
{
"docid": "98cc792a4fdc23819c877634489d7298",
"text": "This paper introduces a product quantization-based approach for approximate nearest neighbor search. The idea is to decompose the space into a Cartesian product of low-dimensional subspaces and to quantize each subspace separately. A vector is represented by a short code composed of its subspace quantization indices. The euclidean distance between two vectors can be efficiently estimated from their codes. An asymmetric version increases precision, as it computes the approximate distance between a vector and a code. Experimental results show that our approach searches for nearest neighbors efficiently, in particular in combination with an inverted file system. Results for SIFT and GIST image descriptors show excellent search accuracy, outperforming three state-of-the-art approaches. The scalability of our approach is validated on a data set of two billion vectors.",
"title": ""
}
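The product quantization passage above describes decomposing the vector space into subspaces, quantizing each separately, and estimating distances from short codes. Below is a small numpy sketch of encoding and the asymmetric distance computation; the codebooks are assumed to be already trained (for example by k-means per subspace), and the sizes D, M and K are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
D, M, K = 8, 4, 16             # vector dim, number of subspaces, centroids per subspace
d_sub = D // M                 # dimensionality of each subspace

# Assumed pre-trained codebooks: one (K, d_sub) centroid table per subspace.
codebooks = [rng.normal(size=(K, d_sub)) for _ in range(M)]

def encode(x):
    # Represent x by M small indices: the nearest centroid in each subspace.
    code = []
    for m in range(M):
        sub = x[m * d_sub:(m + 1) * d_sub]
        dists = np.sum((codebooks[m] - sub) ** 2, axis=1)
        code.append(int(np.argmin(dists)))
    return code

def asymmetric_distance(query, code):
    # Asymmetric version: the query stays unquantized; per-subspace distances to
    # every centroid are computed once, then looked up per database code.
    total = 0.0
    for m in range(M):
        q_sub = query[m * d_sub:(m + 1) * d_sub]
        table = np.sum((codebooks[m] - q_sub) ** 2, axis=1)   # (K,) lookup table
        total += table[code[m]]
    return total

x = rng.normal(size=D)
q = rng.normal(size=D)
print(encode(x), asymmetric_distance(q, encode(x)))
```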
] | [
{
"docid": "9a63a5db2a40df78a436e7be87f42ff7",
"text": "A quantitative, coordinate-based meta-analysis combined data from 354 participants across 22 fMRI studies and one positron emission tomography (PET) study to identify the differences in neural correlates of figurative and literal language processing, and to investigate the role of the right hemisphere (RH) in figurative language processing. Studies that reported peak activations in standard space contrasting figurative vs. literal language processing at whole brain level in healthy adults were included. The left and right IFG, large parts of the left temporal lobe, the bilateral medial frontal gyri (medFG) and an area around the left amygdala emerged for figurative language processing across studies. Conditions requiring exclusively literal language processing did not activate any selective regions in most of the cases, but if so they activated the cuneus/precuneus, right MFG and the right IPL. No general RH advantage for metaphor processing could be found. On the contrary, significant clusters of activation for metaphor conditions were mostly lateralized to the left hemisphere (LH). Subgroup comparisons between experiments on metaphors, idioms, and irony/sarcasm revealed shared activations in left frontotemporal regions for idiom and metaphor processing. Irony/sarcasm processing was correlated with activations in midline structures such as the medFG, ACC and cuneus/precuneus. To test the graded salience hypothesis (GSH, Giora, 1997), novel metaphors were contrasted against conventional metaphors. In line with the GSH, RH involvement was found for novel metaphors only. Here we show that more analytic, semantic processes are involved in metaphor comprehension, whereas irony/sarcasm comprehension involves theory of mind processes.",
"title": ""
},
{
"docid": "57c705e710f99accab3d9242fddc5ac8",
"text": "Although much research has been conducted in the area of organizational commitment, few studies have explicitly examined how organizations facilitate commitment among members. Using a sample of 291 respondents from 45 firms, the results of this study show that rigorous recruitment and selection procedures and a strong, clear organizational value system are associated with higher levels of employee commitment based on internalization and identification. Strong organizational career and reward systems are related to higher levels of instrumental or compliance-based commitment.",
"title": ""
},
{
"docid": "f013f58d995693a79cd986a028faff38",
"text": "We present the design and implementation of a system for axiomatic programming, and its application to mathematical software construction. Key novelties include a direct support for user-defined axioms establishing local equalities between types, and overload resolution based on equational theories and user-defined local axioms. We illustrate uses of axioms, and their organization into concepts, in structured generic programming as practiced in computational mathematical systems.",
"title": ""
},
{
"docid": "f97d81a177ca629da5fe0d707aec4b8a",
"text": "This paper highlights the two machine learning approaches, viz. Rough Sets and Decision Trees (DT), for the prediction of Learning Disabilities (LD) in school-age children, with an emphasis on applications of data mining. Learning disability prediction is a very complicated task. By using these two approaches, we can easily and accurately predict LD in any child and also we can determine the best classification method. In this study, in rough sets the attribute reduction and classification are performed using Johnson’s reduction algorithm and Naive Bayes algorithm respectively for rule mining and in construction of decision trees, J48 algorithm is used. From this study, it is concluded that, the performance of decision trees are considerably poorer in several important aspects compared to rough sets. It is found that, for selection of attributes, rough sets is very useful especially in the case of inconsistent data and it also gives the information about the attribute correlation which is very important in the case of learning disability.",
"title": ""
},
{
"docid": "5d154a62b22415cbedd165002853315b",
"text": "Unaccompanied immigrant children are a highly vulnerable population, but research into their mental health and psychosocial context remains limited. This study elicited lawyers’ perceptions of the mental health needs of unaccompanied children in U.S. deportation proceedings and their mental health referral practices with this population. A convenience sample of 26 lawyers who work with unaccompanied children completed a semi-structured, online survey. Lawyers surveyed frequently had mental health concerns about their unaccompanied child clients, used clinical and lay terminology to describe symptoms, referred for both expert testimony and treatment purposes, frequently encountered barriers to accessing appropriate services, and expressed interest in mental health training. The results of this study suggest a complex intersection between the legal and mental health needs of unaccompanied children, and the need for further research and improved service provision in support of their wellbeing.",
"title": ""
},
{
"docid": "d6586a261e22e9044425cb27462c3435",
"text": "In this work, we develop a planner for high-speed navigation in unknown environments, for example reaching a goal in an unknown building in minimum time, or flying as fast as possible through a forest. This planning task is challenging because the distribution over possible maps, which is needed to estimate the feasibility and cost of trajectories, is unknown and extremely hard to model for real-world environments. At the same time, the worst-case assumptions that a receding-horizon planner might make about the unknown regions of the map may be overly conservative, and may limit performance. Therefore, robots must make accurate predictions about what will happen beyond the map frontiers to navigate as fast as possible. To reason about uncertainty in the map, we model this problem as a POMDP and discuss why it is so difficult given that we have no accurate probability distribution over real-world environments. We then present a novel method of predicting collision probabilities based on training data, which compensates for the missing environment distribution and provides an approximate solution to the POMDP. Extending our previous work, the principal result of this paper is that by using a Bayesian non-parametric learning algorithm that encodes formal safety constraints as a prior over collision probabilities, our planner seamlessly reverts to safe behavior when it encounters a novel environment for which it has no relevant training data. This strategy generalizes our method across all environment types, including those for which we have training data as well as those for which we do not. In familiar environment types with dense training data, we show an 80% speed improvement compared to a planner that is constrained to guarantee safety. In experiments, our planner has reached over 8 m/s in unknown cluttered indoor spaces. Video of our experimental demonstration is available at http://groups.csail.mit.edu/ rrg/bayesian_learning_high_speed_nav.",
"title": ""
},
{
"docid": "7b385edcbb0e3fa5bfffca2e1a9ecf13",
"text": "A goal of runtime software-fault monitoring is to observe software behavior to determine whether it complies with its intended behavior. Monitoring allows one to analyze and recover from detected faults, providing additional defense against catastrophic failure. Although runtime monitoring has been in use for over 30 years, there is renewed interest in its application to fault detection and recovery, largely because of the increasing complexity and ubiquitous nature of software systems. We present taxonomy that developers and researchers can use to analyze and differentiate recent developments in runtime software fault-monitoring approaches. The taxonomy categorizes the various runtime monitoring research by classifying the elements that are considered essential for building a monitoring system, i.e., the specification language used to define properties; the monitoring mechanism that oversees the program's execution; and the event handler that captures and communicates monitoring results. After describing the taxonomy, the paper presents the classification of the software-fault monitoring systems described in the literature.",
"title": ""
},
{
"docid": "5371c5b8e9db3334ed144be4354336cc",
"text": "E-learning is related to virtualised distance learning by means of electronic communication mechanisms, using its functionality as a support in the process of teaching-learning. When the learning process becomes computerised, educational data mining employs the information generated from the electronic sources to enrich the learning model for academic purposes. To provide support to e-learning systems, cloud computing is set as a natural platform, as it can be dynamically adapted by presenting a scalable system for the changing necessities of the computer resources over time. It also eases the implementation of data mining techniques to work in a distributed scenario, regarding the large databases generated from e-learning. We give an overview of the current state of the structure of cloud computing, and we provide details of the most common infrastructures that have been developed for such a system. We also present some examples of e-learning approaches for cloud computing, and finally, we discuss the suitability of this environment for educational data mining, suggesting the migration of this approach to this computational scenario.",
"title": ""
},
{
"docid": "768749e22e03aecb29385e39353dd445",
"text": "Query logs are of great interest for scientists and companies for research, statistical and commercial purposes. However, the availability of query logs for secondary uses raises privacy issues since they allow the identification and/or revelation of sensitive information about individual users. Hence, query anonymization is crucial to avoid identity disclosure. To enable the publication of privacy-preserved -but still usefulquery logs, in this paper, we present an anonymization method based on semantic microaggregation. Our proposal aims at minimizing the disclosure risk of anonymized query logs while retaining their semantics as much as possible. First, a method to map queries to their formal semantics extracted from the structured categories of the Open Directory Project is presented. Then, a microaggregation method is adapted to perform a semantically-grounded anonymization of query logs. To do so, appropriate semantic similarity and semantic aggregation functions are proposed. Experiments performed using real AOL query logs show that our proposal better retains the utility of anonymized query logs than other related works, while also minimizing the disclosure risk.",
"title": ""
},
{
"docid": "85605e6617a68dff216f242f31306eac",
"text": "Steered molecular dynamics (SMD) permits efficient investigations of molecular processes by focusing on selected degrees of freedom. We explain how one can, in the framework of SMD, employ Jarzynski's equality (also known as the nonequilibrium work relation) to calculate potentials of mean force (PMF). We outline the theory that serves this purpose and connects nonequilibrium processes (such as SMD simulations) with equilibrium properties (such as the PMF). We review the derivation of Jarzynski's equality, generalize it to isobaric--isothermal processes, and discuss its implications in relation to the second law of thermodynamics and computer simulations. In the relevant regime of steering by means of stiff springs, we demonstrate that the work on the system is Gaussian-distributed regardless of the speed of the process simulated. In this case, the cumulant expansion of Jarzynski's equality can be safely terminated at second order. We illustrate the PMF calculation method for an exemplary simulation and demonstrate the Gaussian nature of the resulting work distribution.",
"title": ""
},
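The steered-molecular-dynamics passage above notes that, when steering with stiff springs, the work distribution is Gaussian and the cumulant expansion of Jarzynski's equality can be truncated at second order. The snippet below contrasts the direct exponential average with that second-order estimate on synthetic work samples; the temperature, sample size, and Gaussian parameters are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
kT = 0.6                      # k_B * T in the same (arbitrary) energy units as W
beta = 1.0 / kT

# Hypothetical work values from repeated pulling simulations (Gaussian, as in the
# stiff-spring regime): true free-energy difference 2.0, work fluctuation width 0.8.
W = rng.normal(loc=2.0 + beta * 0.8 ** 2 / 2.0, scale=0.8, size=5000)

# Direct Jarzynski estimator: exp(-beta * dF) = < exp(-beta * W) >.
dF_jarzynski = -kT * np.log(np.mean(np.exp(-beta * W)))

# Second-order cumulant expansion: dF ~= <W> - beta * Var(W) / 2.
dF_cumulant = np.mean(W) - beta * np.var(W) / 2.0

print(dF_jarzynski, dF_cumulant)
```

For a truly Gaussian work distribution the second-order expression is exact, and it converges with far fewer samples than the direct exponential average, which is dominated by rare low-work trajectories.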
{
"docid": "d509cb384ecddafa0c4f866882af2c77",
"text": "On 9 January 1857, a large earthquake of magnitude 7.9 occurred on the San Andreas fault, with rupture initiating at Parkfield in central California and propagating in a southeasterly direction over a distance of more than 360 km. Such a unilateral rupture produces significant directivity toward the San Fernando and Los Angeles basins. Indeed, newspaper reports of sloshing observed in the Los Angeles river point to long-duration (1–2 min) and long-period (2–8 sec) shaking. If such an earthquake were to happen today, it could impose significant seismic demand on present-day tall buildings. Using state-of-the-art computational tools in seismology and structural engineering, validated using data from the 17 January 1994, magnitude 6.7 Northridge earthquake, we determine the damage to an existing and a new 18story steel moment-frame building in southern California due to ground motion from two hypothetical magnitude 7.9 earthquakes on the San Andreas fault. Our study indicates that serious damage occurs in these buildings at many locations in the region in one of the two scenarios. For a north-to-south rupture scenario, the peak velocity is of the order of 1 m • sec 1 in the Los Angeles basin, including downtown Los Angeles, and 2 m • sec 1 in the San Fernando valley, while the peak displacements are of the order of 1 m and 2 m in the Los Angeles basin and San Fernando valley, respectively. For a south-to-north rupture scenario the peak velocities and displacements are reduced by a factor of roughly 2.",
"title": ""
},
{
"docid": "d529b4f1992f438bb3ce4373090f8540",
"text": "One conventional tool for interpolating surfaces over scattered data, the thin-plate spline, has an elegant algebra expressing the dependence of the physical bending energy of a thin metal plate on point constraints. For interpolation of a surface over a fixed set of nodes in the plane, the bending energy is a quadratic form in the heights assigned to the surface. The spline is the superposition of eigenvectors of the bending energy matrix, of successively larger physical scales, over a tilted flat plane having no bending energy at all. When these splines are paired, one representing the x-coordinate of another form and the other the y-coordinate, they aid greatly in the modeling of biological shape change as deformation. In this context, the pair becomes an interpolation map from RZ to R' relating two sets of landmark points. The spline maps decompose, in the same way as the spline surfaces, into a linear part (an affine transformation) together with the superposition of principal warps, which are geometrically independent, affine-free deformations of progressively smaller geometrical scales. The warps decompose an empirical deformation into orthogonal features more or less as a conventional orthogonal functional analysis decomposes the single scene. This paper demonstrates the decomposition of deformations by principal warps, extends the method to deal with curving edges between landmarks, relates this formalism to other applications of splines current in computer vision, and indicates how they might aid in the extraction of features for analysis, comparison, and diagnosis of biological and medical images.",
"title": ""
},
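The thin-plate spline passage above describes a bending energy that is a quadratic form in the heights assigned to landmarks, plus an affine part with no bending energy. The sketch below sets up and solves the standard TPS interpolation system with the kernel U(r) = r^2 log r^2; the landmark coordinates and heights are invented for illustration, and this is a generic textbook formulation rather than the paper's own code.

```python
import numpy as np

def tps_kernel(r2):
    # U(r) = r^2 * log(r^2), with the convention U(0) = 0.
    return np.where(r2 == 0.0, 0.0, r2 * np.log(r2 + 1e-300))

def fit_tps(points, heights):
    # points: (n, 2) landmark locations in the plane; heights: (n,) values to interpolate.
    n = len(points)
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    K = tps_kernel(d2)                        # radial kernel between landmarks
    P = np.hstack([np.ones((n, 1)), points])  # affine (flat-plane) part: 1, x, y
    L = np.zeros((n + 3, n + 3))
    L[:n, :n] = K
    L[:n, n:] = P
    L[n:, :n] = P.T
    rhs = np.concatenate([heights, np.zeros(3)])
    coeffs = np.linalg.solve(L, rhs)
    return coeffs[:n], coeffs[n:]             # radial weights w, affine coefficients a

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
h = np.array([0.0, 0.1, 0.2, 0.0, 0.5])
w, a = fit_tps(pts, h)
print(w, a)
```

Pairing two such splines, one interpolating target x-coordinates and the other target y-coordinates, yields the R^2 to R^2 deformation map the passage refers to.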
{
"docid": "aeaee20b184e346cd469204dcf49d815",
"text": "Naresh Kumari , Nitin Malik , A. N. Jha , Gaddam Mallesham #*4 # Department of Electrical, Electronics and Communication Engineering, The NorthCap University, Gurgaon, India 1 [email protected] 2 [email protected] * Ex-Professor, Electrical Engineering, Indian Institute of Technology, New Delhi, India 3 [email protected] #* Department of Electrical Engineering, Osmania University, Hyderabad, India 4 [email protected]",
"title": ""
},
{
"docid": "6ebce4adb3693070cac01614078d68fc",
"text": "The recent COCO object detection dataset presents several new challenges for object detection. In particular, it contains objects at a broad range of scales, less prototypical images, and requires more precise localization. To address these challenges, we test three modifications to the standard Fast R-CNN object detector: (1) skip connections that give the detector access to features at multiple network layers, (2) a foveal structure to exploit object context at multiple object resolutions, and (3) an integral loss function and corresponding network adjustment that improve localization. The result of these modifications is that information can flow along multiple paths in our network, including through features from multiple network layers and from multiple object views. We refer to our modified classifier as a ‘MultiPath’ network. We couple our MultiPath network with DeepMask object proposals, which are well suited for localization and small objects, and adapt our pipeline to predict segmentation masks in addition to bounding boxes. The combined system improves results over the baseline Fast R-CNN detector with Selective Search by 66% overall and by 4× on small objects. It placed second in both the COCO 2015 detection and segmentation challenges.",
"title": ""
},
{
"docid": "28e8bc5b0d1fa9fa46b19c8c821a625c",
"text": "This work aims to develop a smart LED lighting system, which is remotely controlled by Android apps via handheld devices, e.g., smartphones, tablets, and so forth. The status of energy use is reflected by readings displayed on a handheld device, and it is treated as a criterion in the lighting mode design of a system. A multimeter, a wireless light dimmer, an IR learning remote module, etc. are connected to a server by means of RS 232/485 and a human computer interface on a touch screen. The wireless data communication is designed to operate in compliance with the ZigBee standard, and signal processing on sensed data is made through a self adaptive weighted data fusion algorithm. A low variation in data fusion together with a high stability is experimentally demonstrated in this work. The wireless light dimmer as well as the IR learning remote module can be instructed directly by command given on the human computer interface, and the reading on a multimeter can be displayed thereon via the server. This proposed smart LED lighting system can be remotely controlled and self learning mode can be enabled by a single handheld device via WiFi transmission. Hence, this proposal is validated as an approach to power monitoring for home appliances, and is demonstrated as a digital home network in consideration of energy efficiency.",
"title": ""
},
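The passage above mentions a self-adaptive weighted data fusion algorithm but does not spell it out. As a hedged illustration of the general idea, the snippet below shows plain inverse-variance weighting, the scheme such methods are commonly built on; the readings and per-sensor variances are hypothetical, and the paper's actual algorithm may differ.

```python
import numpy as np

def fuse(readings, variances):
    # Weight each sensor inversely to its estimated noise variance, normalised to sum to 1.
    # The fused estimate then has variance 1 / sum(1 / var_i), no larger than any single sensor's.
    w = 1.0 / np.asarray(variances, dtype=float)
    w = w / w.sum()
    fused = np.sum(w * np.asarray(readings, dtype=float))
    fused_var = 1.0 / np.sum(1.0 / np.asarray(variances, dtype=float))
    return fused, fused_var

readings = [229.8, 231.1, 230.4]      # e.g. repeated voltage samples from three sensors
variances = [0.40, 0.90, 0.25]        # estimated per-sensor noise variances
print(fuse(readings, variances))
```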
{
"docid": "645f320514b0fa5a8b122c4635bc3df6",
"text": "A critical decision problem for top management, and the focus of this study, is whether the CEO (chief executive officer) and CIO (chief information officer) should commit their time to formal planning with the expectation of producing an information technology (IT)-based competitive advantage. Using the perspective of the resource-based view, a model is presented that examines how strategic IT alignment can produce enhanced organizational strategies that yield competitive advantage. One hundred sixty-one CIOs provided data using a postal survey. Results supported seven of the eight hypotheses. They showed that information intensity is an important antecedent to strategic IT alignment, that strategic IT alignment is best explained by multiple constructs which operationalize both process and content measures, and that alignment between the IT plan and the business plan is significantly related to the use of IT for competitive advantage. Study results raise questions about the effect of CEO participation, which appears to be the weak link in the process, and also about the perception of the CIO on the importance of CEO involvement. The paper contributes to our understanding of how knowledge sharing in the alignment process contributes to the creation of superior organizational strategies, provides a framework of the alignment-performance relationship, and furnishes several new constructs. Subject Areas: Competitive Advantage, Information Systems Planning, Knowledge Sharing, Resource-Based View, Strategic Planning, and Structural Equation Modeling.",
"title": ""
},
{
"docid": "a85511bfaa47701350f4d97ec94453fd",
"text": "We propose a novel expression transfer method based on an analysis of the frequency of multi-expression facial images. We locate the facial features automatically and describe the shape deformations between a neutral expression and non-neutral expressions. The subtle expression changes are important visual clues to distinguish different expressions. These changes are more salient in the frequency domain than in the image domain. We extract the subtle local expression deformations for the source subject, coded in the wavelet decomposition. This information about expressions is transferred to a target subject. The resulting synthesized image preserves both the facial appearance of the target subject and the expression details of the source subject. This method is extended to dynamic expression transfer to allow a more precise interpretation of facial expressions. Experiments on Japanese Female Facial Expression (JAFFE), the extended Cohn-Kanade (CK+) and PIE facial expression databases show the superiority of our method over the state-of-the-art method.",
"title": ""
},
{
"docid": "bb0dce17b5810ebd7173ea35545c3bf6",
"text": "Five studies demonstrated that highly guilt-prone people may avoid forming interdependent partnerships with others whom they perceive to be more competent than themselves, as benefitting a partner less than the partner benefits one's self could trigger feelings of guilt. Highly guilt-prone people who lacked expertise in a domain were less willing than were those low in guilt proneness who lacked expertise in that domain to create outcome-interdependent relationships with people who possessed domain-specific expertise. These highly guilt-prone people were more likely than others both to opt to be paid on their performance alone (Studies 1, 3, 4, and 5) and to opt to be paid on the basis of the average of their performance and that of others whose competence was more similar to their own (Studies 2 and 5). Guilt proneness did not predict people's willingness to form outcome-interdependent relationships with potential partners who lacked domain-specific expertise (Studies 4 and 5). It also did not predict people's willingness to form relationships when poor individual performance would not negatively affect partner outcomes (Study 4). Guilt proneness therefore predicts whether, and with whom, people develop interdependent relationships. The findings also demonstrate that highly guilt-prone people sacrifice financial gain out of concern about how their actions would influence others' welfare. As such, the findings demonstrate a novel way in which guilt proneness limits free-riding and therefore reduces the incidence of potentially unethical behavior. Lastly, the findings demonstrate that people who lack competence may not always seek out competence in others when choosing partners.",
"title": ""
},
{
"docid": "a9a8baf6dfb2526d75b0d7e49bb9b138",
"text": "Many classification problems require decisions among a large number of competing classes. These tasks, however, are not handled well by general purpose learning methods and are usually addressed in an ad-hoc fashion. We suggest a general approach – a sequential learning model that utilizes classifiers to sequentially restrict the number of competing classes while maintaining, with high probability, the presence of the true outcome in the candidates set. Some theoretical and computational properties of the model are discussed and we argue that these are important in NLP-like domains. The advantages of the model are illustrated in an experiment in partof-speech tagging.",
"title": ""
},
{
"docid": "890236dc21eef6d0523ee1f5e91bf784",
"text": "Perhaps the most amazing property of these word embeddings is that somehow these vector encodings effectively capture the semantic meanings of the words. The question one might ask is how or why? The answer is that because the vectors adhere surprisingly well to our intuition. For instance, words that we know to be synonyms tend to have similar vectors in terms of cosine similarity and antonyms tend to have dissimilar vectors. Even more surprisingly, word vectors tend to obey the laws of analogy. For example, consider the analogy ”Woman is to queen as man is to king”. It turns out that",
"title": ""
}
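The passage above appeals to cosine similarity and to analogy arithmetic over word vectors. The toy example below makes both concrete; the three-dimensional hand-made embeddings are purely illustrative, since real word vectors (for example from word2vec or GloVe) have hundreds of dimensions and are learned from corpora.

```python
import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy 3-d "embeddings": dimension 0 ~ royalty, dimension 1 ~ gender, dimension 2 ~ filler.
emb = {
    "king":  np.array([0.9,  0.7, 0.1]),
    "queen": np.array([0.9, -0.7, 0.1]),
    "man":   np.array([0.1,  0.7, 0.3]),
    "woman": np.array([0.1, -0.7, 0.3]),
}

# "Woman is to queen as man is to king": king - man + woman should land near queen.
target = emb["king"] - emb["man"] + emb["woman"]
best = max(emb, key=lambda w: cosine(emb[w], target))
print(best, {w: round(cosine(emb[w], target), 3) for w in emb})
```

In practice the query words themselves are excluded from the candidate set, but even without that step the toy vectors above rank "queen" first.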
] | scidocsrr |
79746946cd66c344af505c1977c9d15d | A 12-bit 20 MS/s 56.3 mW Pipelined ADC With Interpolation-Based Nonlinear Calibration | [
{
"docid": "96d0cfd6349e02a90528b40c5e3decc6",
"text": "A 16-bit 125 MS/s pipeline analog-to-digital converter (ADC) implemented in a 0.18 ¿m CMOS process is presented in this paper. A SHA-less 4-bit front-end is used to achieve low power and minimize the size of the input sampling capacitance in order to ease drivability. The ADC includes foreground factory digital calibration to correct for capacitor mismatches and dithering that can be optionally enabled to improve small-signal linearity. This ADC achieves an SNR of 78.7 dB, an SNDR of 78.6 dB and an SFDR of 96 dB with a 30 MHz input signal, while maintaining an SNR > 76 dB and an SFDR > 85 dB up to 150 MHz input signals. Further, with dithering enabled the worst spur is <-98 dB for inputs below -4 dBFS at 100 MHz IF. The ADC consumes 385 mW from a 1.8 V supply.",
"title": ""
}
] | [
{
"docid": "4d396614420b24265d05b265b7ae6cd5",
"text": "The objective of this study was to characterise the antagonistic activity of cellular components of potential probiotic bacteria isolated from the gut of healthy rohu (Labeo rohita), a tropical freshwater fish, against the fish pathogen, Aeromonas hydrophila. Three potential probiotic strains (referred to as R1, R2, and R5) were screened using a well diffusion, and their antagonistic activity against A. hydrophila was determined. Biochemical tests and 16S rRNA gene analysis confirmed that R1, R2, and R5 were Lactobacillus plantarum VSG3, Pseudomonas aeruginosa VSG2, and Bacillus subtilis VSG1, respectively. Four different fractions of cellular components (i.e. the whole-cell product, heat-killed whole-cell product [HKWCP], intracellular product [ICP], and extracellular product) of these selected strains were effective in an in vitro sensitivity test against 6 A. hydrophila strains. Among the cellular components, the ICP of R1, HKWCP of R2, and ICP of R5 exhibited the strongest antagonistic activities, as evidenced by their inhibition zones. The antimicrobial compounds from these selected cellular components were partially purified by thin-layer and high-performance liquid chromatography, and their properties were analysed. The ranges of pH stability of the purified compounds were wide (3.0-10.0), and compounds were thermally stable up to 90 °C. Considering these results, isolated probiotic strains may find potential applications in the prevention and treatment of aquatic aeromonosis.",
"title": ""
},
{
"docid": "66c49b0dbdbdf29ace0f60839b867e43",
"text": "The job shop scheduling problem with the makespan criterion is a certain NP-hard case from OR theory having excellent practical applications. This problem, having been examined for years, is also regarded as an indicator of the quality of advanced scheduling algorithms. In this paper we provide a new approximate algorithm that is based on the big valley phenomenon, and uses some elements of so-called path relinking technique as well as new theoretical properties of neighbourhoods. The proposed algorithm owns, unprecedented up to now, accuracy, obtainable in a quick time on a PC, which has been confirmed after wide computer tests.",
"title": ""
},
{
"docid": "5fe43f0b23b0cfd82b414608e60db211",
"text": "The Distress Analysis Interview Corpus (DAIC) contains clinical interviews designed to support the diagnosis of psychological distress conditions such as anxiety, depression, and post traumatic stress disorder. The interviews are conducted by humans, human controlled agents and autonomous agents, and the participants include both distressed and non-distressed individuals. Data collected include audio and video recordings and extensive questionnaire responses; parts of the corpus have been transcribed and annotated for a variety of verbal and non-verbal features. The corpus has been used to support the creation of an automated interviewer agent, and for research on the automatic identification of psychological distress.",
"title": ""
},
{
"docid": "1ae3eb81ae75f6abfad4963ee0056be5",
"text": "Due to the shared responsibility model of clouds, tenants have to manage the security of their workloads and data. Developing security solutions using VMs or containers creates further problems as these resources also need to be secured. In this paper, we advocate for taking a serverless approach by proposing six serverless design patterns to build security services in the cloud. For each design pattern, we describe the key advantages and present applications and services utilizing the pattern. Using the proposed patterns as building blocks, we introduce a threat-intelligence platform that collects logs from various sources, alerts malicious activities, and takes actions against such behaviors. We also discuss the limitations of serverless design and how future implementations can overcome those limitations.",
"title": ""
},
{
"docid": "c69e002a71132641947d8e30bb2e74f7",
"text": "In this paper, we investigate a new stealthy attack simultaneously compromising actuators and sensors. This attack is referred to as coordinated attack. We show that the coordinated attack is capable of deriving the system states far away from the desired without being detected. Furthermore, designing such an attack practically does not require knowledge on target systems, which makes the attack much more dangerous compared to the other known attacks. Also, we present a method to detect the coordinated attack. To validate the effect of the proposed attack, we carry out experiments using a quadrotor.",
"title": ""
},
{
"docid": "023ad4427627e7bdb63ba5e15c3dff32",
"text": "Recent works have been shown effective in using neural networks for Chinese word segmentation. However, these models rely on large-scale data and are less effective for low-resource datasets because of insufficient training data. Thus, we propose a transfer learning method to improve low-resource word segmentation by leveraging high-resource corpora. First, we train a teacher model on high-resource corpora and then use the learned knowledge to initialize a student model. Second, a weighted data similarity method is proposed to train the student model on low-resource data with the help of highresource corpora. Finally, given that insufficient data puts forward higher requirements for feature extraction, we propose a novel neural network which improves feature learning. Experiment results show that our work significantly improves the performance on low-resource datasets: 2.3% and 1.5% F-score on PKU and CTB datasets. Furthermore, this paper achieves state-of-the-art results: 96.1%, and 96.2% F-score on PKU and CTB datasets1. Besides, we explore an asynchronous parallel method on neural word segmentation to speed up training. The parallel method accelerates training substantially and is almost five times faster than a serial mode.",
"title": ""
},
{
"docid": "e68fc0a0522f7cd22c7071896263a1f4",
"text": "OBJECTIVES\nThe aim of this study was to evaluate the costs of subsidized care for an adult population provided by private and public sector dentists.\n\n\nMETHODS\nA sample of 210 patients was drawn systematically from the waiting list for nonemergency dental treatment in the city of Turku. Questionnaire data covering sociodemographic background, dental care utilization and marginal time cost estimates were combined with data from patient registers on treatment given. Information was available on 104 patients (52 from each of the public and the private sectors).\n\n\nRESULTS\nThe overall time taken to provide treatment was 181 days in the public sector and 80 days in the private sector (P<0.002). On average, public sector patients had significantly (P < 0.01) more dental visits (5.33) than private sector patients (3.47), which caused higher visiting fees. In addition, patients in the public sector also had higher other out-of-pocket costs than in the private sector. Those who needed emergency dental treatment during the waiting time for comprehensive care had significantly more costly treatment and higher total costs than the other patients. Overall time required for dental visits significantly increased total costs. The total cost of dental care in the public sector was slightly higher (P<0.05) than in the private sector.\n\n\nCONCLUSIONS\nThere is no direct evidence of moral hazard on the provider side from this study. The observed cost differences between the two sectors may indicate that private practitioners could manage their publicly funded patients more quickly than their private paying patients. On the other hand, private dentists providing more treatment per visit could be explained by private dentists providing more than is needed by increasing the content per visit.",
"title": ""
},
{
"docid": "d956c805ee88d1b0ca33ce3f0f838441",
"text": "The task of relation classification in the biomedical domain is complex due to the presence of samples obtained from heterogeneous sources such as research articles, discharge summaries, or electronic health records. It is also a constraint for classifiers which employ manual feature engineering. In this paper, we propose a convolutional recurrent neural network (CRNN) architecture that combines RNNs and CNNs in sequence to solve this problem. The rationale behind our approach is that CNNs can effectively identify coarse-grained local features in a sentence, while RNNs are more suited for long-term dependencies. We compare our CRNN model with several baselines on two biomedical datasets, namely the i2b22010 clinical relation extraction challenge dataset, and the SemEval-2013 DDI extraction dataset. We also evaluate an attentive pooling technique and report its performance in comparison with the conventional max pooling method. Our results indicate that the proposed model achieves state-of-the-art performance on both datasets.1",
"title": ""
},
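The passage above motivates applying a CNN first to capture coarse-grained local features and an RNN afterwards for longer-range dependencies, followed by pooling. The PyTorch sketch below is one plausible reading of that pipeline with conventional max pooling; the layer sizes, kernel width, and the bidirectional LSTM choice are assumptions, not the authors' reported configuration.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    # Convolution first to pick up local n-gram features, then an RNN over the
    # convolved sequence to capture longer-range dependencies, then max pooling.
    def __init__(self, vocab_size, emb_dim=100, conv_channels=128, hidden=64, num_classes=5):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, conv_channels, kernel_size=3, padding=1)
        self.rnn = nn.LSTM(conv_channels, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_classes)

    def forward(self, token_ids):                      # (batch, seq_len)
        x = self.emb(token_ids)                        # (batch, seq_len, emb_dim)
        x = torch.relu(self.conv(x.transpose(1, 2)))   # (batch, channels, seq_len)
        x, _ = self.rnn(x.transpose(1, 2))             # (batch, seq_len, 2*hidden)
        x, _ = torch.max(x, dim=1)                     # max pooling over time
        return self.out(x)

model = CRNN(vocab_size=1000)
logits = model(torch.randint(0, 1000, (2, 40)))        # two sentences of 40 token ids
print(logits.shape)                                    # torch.Size([2, 5])
```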
{
"docid": "8b49149b3288b9565263b7c4d6978378",
"text": "This paper produces a baseline security analysis of the Cloud Computing Operational Environment in terms of threats, vulnerabilities and impacts. An analysis is conducted and the top three threats are identified with recommendations for practitioners. The conclusion of the analysis is that the most serious threats are non-technical and can be solved via management processes rather than technical countermeasures.",
"title": ""
},
{
"docid": "c27b61685ae43c7cd1b60ca33ab209df",
"text": "The establishment of damper settings that provide an optimal compromise between wobble- and weave-mode damping is discussed. The conventional steering damper is replaced with a network of interconnected mechanical components comprised of springs, dampers and inerters - that retain the virtue of the damper, while improving the weave-mode performance. The improved performance is due to the fact that the network introduces phase compensation between the relative angular velocity of the steering system and the resulting steering technique",
"title": ""
},
{
"docid": "7f848facaa535d53e7a6fe7aa2435473",
"text": "The data structure used to represent image information can be critical to the successful completion of an image processing task. One structure that has attracted considerable attention is the image pyramid This consists of a set of lowpass or bandpass copies of an image, each representing pattern information of a different scale. Here we describe a variety of pyramid methods that we have developed for image data compression, enhancement, analysis and graphics. ©1984 RCA Corporation Final manuscript received November 12, 1984 Reprint Re-29-6-5 that can perform most of the routine visual tasks that humans do effortlessly. It is becoming increasingly clear that the format used to represent image data can be as critical in image processing as the algorithms applied to the data. A digital image is initially encoded as an array of pixel intensities, but this raw format is not suited to most asks. Alternatively, an image may be represented by its Fourier transform, with operations applied to the transform coefficients rather than to the original pixel values. This is appropriate for some data compression and image enhancement tasks, but inappropriate for others. The transform representation is particularly unsuited for machine vision and computer graphics, where the spatial location of pattem elements is critical. Recently there has been a great deal of interest in representations that retain spatial localization as well as localization in the spatial—frequency domain. This is achieved by decomposing the image into a set of spatial frequency bandpass component images. Individual samples of a component image represent image pattern information that is appropriately localized, while the bandpassed image as a whole represents information about a particular fineness of detail or scale. There is evidence that the human visual system uses such a representation, 1 and multiresolution schemes are becoming increasingly popular in machine vision and in image processing in general. The importance of analyzing images at many scales arises from the nature of images themselves. Scenes in the world contain objects of many sizes, and these objects contain features of many sizes. Moreover, objects can be at various distances from the viewer. As a result, any analysis procedure that is applied only at a single scale may miss information at other scales. The solution is to carry out analyses at all scales simultaneously. Convolution is the basic operation of most image analysis systems, and convolution with large weighting functions is a notoriously expensive computation. In a multiresolution system one wishes to perform convolutions with kernels of many sizes, ranging from very small to very large. and the computational problems appear forbidding. Therefore one of the main problems in working with multiresolution representations is to develop fast and efficient techniques. Members of the Advanced Image Processing Research Group have been actively involved in the development of multiresolution techniques for some time. Most of the work revolves around a representation known as a \"pyramid,\" which is versatile, convenient, and efficient to use. We have applied pyramid-based methods to some fundamental problems in image analysis, data compression, and image manipulation.",
"title": ""
},
{
"docid": "dc418c7add2456b08bc3a6f15b31da9f",
"text": "In professional search environments, such as patent search or legal search, search tasks have unique characteristics: 1) users interactively issue several queries for a topic, and 2) users are willing to examine many retrieval results, i.e., there is typically an emphasis on recall. Recent surveys have also verified that professional searchers continue to have a strong preference for Boolean queries because they provide a record of what documents were searched. To support this type of professional search, we propose a novel Boolean query suggestion technique. Specifically, we generate Boolean queries by exploiting decision trees learned from pseudo-labeled documents and rank the suggested queries using query quality predictors. We evaluate our algorithm in simulated patent and medical search environments. Compared with a recent effective query generation system, we demonstrate that our technique is effective and general.",
"title": ""
},
{
"docid": "633d32667221f53def4558db23a8b8af",
"text": "In this paper we present, ARCTREES, a novel way of visualizing hierarchical and non-hierarchical relations within one interactive visualization. Such a visualization is challenging because it must display hierarchical information in a way that the user can keep his or her mental map of the data set and include relational information without causing misinterpretation. We propose a hierarchical view derived from traditional Treemaps and augment this view with an arc diagram to depict relations. In addition, we present interaction methods that allow the exploration of the data set using Focus+Context techniques for navigation. The development was motivated by a need for understanding relations in structured documents but it is also useful in many other application domains such as project management and calendars.",
"title": ""
},
{
"docid": "c2ac1c1f08e7e4ccba14ea203acba661",
"text": "This paper describes an approach to determine a layout for the order picking area in warehouses, such that the average travel distance for the order pickers is minimized. We give analytical formulas by which the average length of an order picking route can be calculated for two different routing policies. The optimal layout can be determined by using such formula as an objective function in a non-linear programming model. The optimal number of aisles in an order picking area appears to depend strongly on the required storage space and the pick list size.",
"title": ""
},
{
"docid": "c10a83c838f59adeb50608d5b96c0fbc",
"text": "Robots are typically equipped with multiple complementary sensors such as cameras and laser range finders. Camera generally provides dense 2D information while range sensors give sparse and accurate depth information in the form of a set of 3D points. In order to represent the different data sources in a common coordinate system, extrinsic calibration is needed. This paper presents a pipeline for extrinsic calibration a zed setero camera with Velodyne LiDAR puck using a novel self-made 3D marker whose edges can be robustly detected in the image and 3d point cloud. Our approach first estimate the large sensor displacement using just a single frame. then we optimize the coarse results by finding the best align of edges in order to obtain a more accurate calibration. Finally, the ratio of the 3D points correctly projected onto proper image segments is used to evaluate the accuracy of calibration.",
"title": ""
},
{
"docid": "eda3987f781263615ccf53dd9a7d1a27",
"text": "The study gives a synopsis over condition monitoring methods both as a diagnostic tool and as a technique for failure identification in high voltage induction motors in industry. New running experience data for 483 motor units with 6135 unit years are registered and processed statistically, to reveal the connection between motor data, protection and condition monitoring methods, maintenance philosophy and different types of failures. The different types of failures are further analyzed to failure-initiators, -contributors and -underlying causes. The results have been compared with those of a previous survey, IEEE Report of Large Motor Reliability Survey of Industrial and Commercial Installations, 1985. In the present survey the motors are in the range of 100 to 1300 kW, 47% of them between 100 and 500 kW.",
"title": ""
},
{
"docid": "f36348f2909a9642c18590fca6c9b046",
"text": "This study explores the use of data mining methods to detect fraud for on e-ledgers through financial statements. For this purpose, data set were produced by rule-based control application using 72 sample e-ledger and error percentages were calculated and labeled. The financial statements created from the labeled e-ledgers were trained by different data mining methods on 9 distinguishing features. In the training process, Linear Regression, Artificial Neural Networks, K-Nearest Neighbor algorithm, Support Vector Machine, Decision Stump, M5P Tree, J48 Tree, Random Forest and Decision Table were used. The results obtained are compared and interpreted.",
"title": ""
},
{
"docid": "7c11bd23338b6261f44319198fcdc082",
"text": "Zooplankton are quite significant to the ocean ecosystem for stabilizing balance of the ecosystem and keeping the earth running normally. Considering the significance of zooplantkon, research about zooplankton has caught more and more attentions. And zooplankton recognition has shown great potential for science studies and mearsuring applications. However, manual recognition on zooplankton is labour-intensive and time-consuming, and requires professional knowledge and experiences, which can not scale to large-scale studies. Deep learning approach has achieved remarkable performance in a number of object recognition benchmarks, often achieveing the current best performance on detection or classification tasks and the method demonstrates very promising and plausible results in many applications. In this paper, we explore a deep learning architecture: ZooplanktoNet to classify zoolankton automatically and effectively. The deep network is characterized by capturing more general and representative features than previous predefined feature extraction algorithms in challenging classification. Also, we incorporate some data augmentation to aim at reducing the overfitting for lacking of zooplankton images. And we decide the zooplankton class according to the highest score in the final predictions of ZooplanktoNet. Experimental results demonstrate that ZooplanktoNet can solve the problem effectively with accuracy of 93.7% in zooplankton classification.",
"title": ""
},
{
"docid": "c86aad62e950d7c10f93699d421492d5",
"text": "Carotid intima-media thickness (CIMT) is a good surrogate for atherosclerosis. Hyperhomocysteinemia is an independent risk factor for cardiovascular diseases. We aim to investigate the relationships between homocysteine (Hcy) related biochemical indexes and CIMT, the associations between Hcy related SNPs and CIMT, as well as the potential gene–gene interactions. The present study recruited full siblings (186 eligible families with 424 individuals) with no history of cardiovascular events from a rural area of Beijing. We examined CIMT, intima-media thickness for common carotid artery (CCA-IMT) and carotid bifurcation, tested plasma levels for Hcy, vitamin B6 (VB6), vitamin B12 (VB12) and folic acid (FA), and genotyped 9 SNPs on MTHFR, MTR, MTRR, BHMT, SHMT1, CBS genes. Associations between SNPs and biochemical indexes and CIMT indexes were analyzed using family-based association test analysis. We used multi-level mixed-effects regression model to verify SNP-CIMT associations and to explore the potential gene–gene interactions. VB6, VB12 and FA were negatively correlated with CIMT indexes (p < 0.05). rs2851391 T allele was associated with decreased plasma VB12 levels (p = 0.036). In FABT, CBS rs2851391 was significantly associated with CCA-IMT (p = 0.021) and CIMT (p = 0.019). In multi-level mixed-effects regression model, CBS rs2851391 was positively significantly associated with CCA-IMT (Coef = 0.032, se = 0.009, raw p < 0.001) after Bonferoni correction (corrected α = 0.0056). Gene–gene interactions were found between CBS rs2851391 and BHMT rs10037045 for CCA-IMT (p = 0.011), as well as between CBS rs2851391 and MTR rs1805087 for CCA-IMT (p = 0.007) and CIMT (p = 0.022). Significant associations are found between Hcy metabolism related genetic polymorphisms, biochemical indexes and CIMT indexes. There are complex interactions between genetic polymorphisms for CCA-IMT and CIMT.",
"title": ""
},
{
"docid": "2eebc7477084b471f9e9872ba8751359",
"text": "Despite significant progress in the development of human action detection datasets and algorithms, no current dataset is representative of real-world aerial view scenarios. We present Okutama-Action, a new video dataset for aerial view concurrent human action detection. It consists of 43 minute-long fully-annotated sequences with 12 action classes. Okutama-Action features many challenges missing in current datasets, including dynamic transition of actions, significant changes in scale and aspect ratio, abrupt camera movement, as well as multi-labeled actors. As a result, our dataset is more challenging than existing ones, and will help push the field forward to enable real-world applications.",
"title": ""
}
] | scidocsrr |
47049efc46eda3078c30357036fa2ddf | Multiple object identification with passive RFID tags | [
{
"docid": "1c7251c55cf0daea9891c8a522bbd3ec",
"text": "The role of computers in the modern office has divided ouractivities between virtual interactions in the realm of thecomputer and physical interactions with real objects within thetraditional office infrastructure. This paper extends previous workthat has attempted to bridge this gap, to connect physical objectswith virtual representations or computational functionality, viavarious types of tags. We discuss a variety of scenarios we haveimplemented using a novel combination of inexpensive, unobtrusiveand easy to use RFID tags, tag readers, portable computers andwireless networking. This novel combination demonstrates theutility of invisibly, seamlessly and portably linking physicalobjects to networked electronic services and actions that arenaturally associated with their form.",
"title": ""
},
{
"docid": "9c751a7f274827e3d8687ea520c6e9a9",
"text": "Radio frequency identification systems with passive tags are powerful tools for object identification. However, if multiple tags are to be identified simultaneously, messages from the tags can collide and cancel each other out. Therefore, multiple read cycles have to be performed in order to achieve a high recognition rate. For a typical stochastic anti-collision scheme, we show how to determine the optimal number of read cycles to perform under a given assurance level determining the acceptable rate of missed tags. This yields an efficient procedure for object identification. We also present results on the performance of an implementation.",
"title": ""
}
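The passage above determines the optimal number of read cycles for a given assurance level on missed tags. A common simplified model assumes each tag is identified in a cycle independently with some probability p (which in a framed-ALOHA scheme depends on frame size and tag count); the sketch below computes the smallest cycle count under that independence assumption. Both p and the assurance level shown are illustrative, not values from the paper.

```python
import math

def required_cycles(num_tags, p_read_per_cycle, assurance=0.99):
    # P(one tag seen within n cycles) = 1 - (1 - p)^n, assuming independent cycles.
    # Requiring P(all tags seen) >= assurance with independent tags gives
    # n >= log(1 - assurance**(1/num_tags)) / log(1 - p).
    per_tag_target = assurance ** (1.0 / num_tags)
    n = math.log(1.0 - per_tag_target) / math.log(1.0 - p_read_per_cycle)
    return math.ceil(n)

# e.g. 20 tags, a 30% chance of a successful read per tag per cycle, 99% assurance level
print(required_cycles(20, 0.30, 0.99))
```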
] | [
{
"docid": "2944000757568f330b495ba2a446b0a0",
"text": "In this paper, we propose Deep Alignment Network (DAN), a robust face alignment method based on a deep neural network architecture. DAN consists of multiple stages, where each stage improves the locations of the facial landmarks estimated by the previous stage. Our method uses entire face images at all stages, contrary to the recently proposed face alignment methods that rely on local patches. This is possible thanks to the use of landmark heatmaps which provide visual information about landmark locations estimated at the previous stages of the algorithm. The use of entire face images rather than patches allows DAN to handle face images with large variation in head pose and difficult initializations. An extensive evaluation on two publicly available datasets shows that DAN reduces the state-of-the-art failure rate by up to 70%. Our method has also been submitted for evaluation as part of the Menpo challenge.",
"title": ""
},
{
"docid": "2891ce3327617e9e957488ea21e9a20c",
"text": "Recently, remote healthcare systems have received increasing attention in the last decade, explaining why intelligent systems with physiology signal monitoring for e-health care are an emerging area of development. Therefore, this study adopts a system which includes continuous collection and evaluation of multiple vital signs, long-term healthcare, and a cellular connection to a medical center in emergency case and it transfers all acquired raw data by the internet in normal case. The proposed system can continuously acquire four different physiological signs, for example, ECG, SpO2, temperature, and blood pressure and further relayed them to an intelligent data analysis scheme to diagnose abnormal pulses for exploring potential chronic diseases. The proposed system also has a friendly web-based interface for medical staff to observe immediate pulse signals for remote treatment. Once abnormal event happened or the request to real-time display vital signs is confirmed, all physiological signs will be immediately transmitted to remote medical server through both cellular networks and internet. Also data can be transmitted to a family member's mobile phone or doctor's phone through GPRS. A prototype of such system has been successfully developed and implemented, which will offer high standard of healthcare with a major reduction in cost for our society.",
"title": ""
},
{
"docid": "457f10c4c5d5b748a4f35abd89feb519",
"text": "Document image binarization is an important step in the document image analysis and recognition pipeline. H-DIBCO 2014 is the International Document Image Binarization Competition which is dedicated to handwritten document images organized in conjunction with ICFHR 2014 conference. The objective of the contest is to identify current advances in handwritten document image binarization using meaningful evaluation performance measures. This paper reports on the contest details including the evaluation measures used as well as the performance of the 7 submitted methods along with a short description of each method.",
"title": ""
},
{
"docid": "144bb8e869671843cb5d8053e2ee861d",
"text": "We investigate whether physicians' financial incentives influence health care supply, technology diffusion, and resulting patient outcomes. In 1997, Medicare consolidated the geographic regions across which it adjusts physician payments, generating area-specific price shocks. Areas with higher payment shocks experience significant increases in health care supply. On average, a 2 percent increase in payment rates leads to a 3 percent increase in care provision. Elective procedures such as cataract surgery respond much more strongly than less discretionary services. Non-radiologists expand their provision of MRIs, suggesting effects on technology adoption. We estimate economically small health impacts, albeit with limited precision.",
"title": ""
},
{
"docid": "39a59eac80c6f4621971399dde2fbb7f",
"text": "Social media sites such as Flickr, YouTube, and Facebook host substantial amounts of user-contributed materials (e.g., photographs, videos, and textual content) for a wide variety of real-world events. These range from widely known events, such as the presidential inauguration, to smaller, community-specific events, such as annual conventions and local gatherings. By identifying these events and their associated user-contributed social media documents, which is the focus of this paper, we can greatly improve local event browsing and search in state-of-the-art search engines. To address our problem of focus, we exploit the rich “context” associated with social media content, including user-provided annotations (e.g., title, tags) and automatically generated information (e.g., content creation time). We form a variety of representations of social media documents using different context dimensions, and combine these dimensions in a principled way into a single clustering solution—where each document cluster ideally corresponds to one event—using a weighted ensemble approach. We evaluate our approach on a large-scale, real-world dataset of event images, and report promising performance with respect to several baseline approaches. Our preliminary experiments suggest that our ensemble approach identifies events, and their associated images, more effectively than the state-of-the-art strategies on which we build.",
"title": ""
},
{
"docid": "47bae1df7bc512e8a458122892e145f8",
"text": "This paper presents an inertial-measurement-unit-based pen (IMUPEN) and its associated trajectory reconstruction algorithm for motion trajectory reconstruction and handwritten digit recognition applications. The IMUPEN is composed of a triaxial accelerometer, two gyroscopes, a microcontroller, and an RF wireless transmission module. Users can hold the IMUPEN to write numerals or draw simple symbols at normal speed. During writing or drawing movements, the inertial signals generated for the movements are transmitted to a computer via the wireless module. A trajectory reconstruction algorithm composed of the procedures of data collection, signal preprocessing, and trajectory reconstruction has been developed for reconstructing the trajectories of movements. In order to minimize the cumulative errors caused by the intrinsic noise/drift of sensors, we have developed an orientation error compensation method and a multiaxis dynamic switch. The advantages of the IMUPEN include the following: 1) It is portable and can be used anywhere without any external reference device or writing ambit limitations, and 2) its trajectory reconstruction algorithm can reduce orientation and integral errors effectively and thus can reconstruct the trajectories of movements accurately. Our experimental results on motion trajectory reconstruction and handwritten digit recognition have successfully validated the effectiveness of the IMUPEN and its trajectory reconstruction algorithm.",
"title": ""
},
{
"docid": "992d71459b616bfe72845493a6f8f910",
"text": "Finding patterns and trends in spatial and temporal datasets has been a long studied problem in statistics and different domains of science. This paper presents a visual analytics approach for the interactive exploration and analysis of spatiotemporal correlations among multivariate datasets. Our approach enables users to discover correlations and explore potentially causal or predictive links at different spatiotemporal aggregation levels among the datasets, and allows them to understand the underlying statistical foundations that precede the analysis. Our technique utilizes the Pearson's product-moment correlation coefficient and factors in the lead or lag between different datasets to detect trends and periodic patterns amongst them.",
"title": ""
},
{
"docid": "c2a2c29b03ee90558325df7461124092",
"text": "Effective thermal conductivity of mixtures of uids and nanometer-size particles is measured by a steady-state parallel-plate method. The tested uids contain two types of nanoparticles, Al2O3 and CuO, dispersed in water, vacuum pump uid, engine oil, and ethylene glycol. Experimental results show that the thermal conductivities of nanoparticle– uid mixtures are higher than those of the base uids. Using theoretical models of effective thermal conductivity of a mixture, we have demonstrated that the predicted thermal conductivities of nanoparticle– uid mixtures are much lower than our measured data, indicating the de ciency in the existing models when used for nanoparticle– uid mixtures. Possible mechanisms contributing to enhancement of the thermal conductivity of the mixtures are discussed. A more comprehensive theory is needed to fully explain the behavior of nanoparticle– uid mixtures.",
"title": ""
},
{
"docid": "1278d0b3ea3f06f52b2ec6b20205f8d0",
"text": "The future global Internet is going to have to cater to users that will be largely mobile. Mobility is one of the main factors affecting the design and performance of wireless networks. Mobility modeling has been an active field for the past decade, mostly focusing on matching a specific mobility or encounter metric with little focus on matching protocol performance. This study investigates the adequacy of existing mobility models in capturing various aspects of human mobility behavior (including communal behavior), as well as network protocol performance. This is achieved systematically through the introduction of a framework that includes a multi-dimensional mobility metric space. We then introduce COBRA, a new mobility model capable of spanning the mobility metric space to match realistic traces. A methodical analysis using a range of protocol (epidemic, spraywait, Prophet, and Bubble Rap) dependent and independent metrics (modularity) of various mobility models (SMOOTH and TVC) and traces (university campuses, and theme parks) is done. Our results indicate significant gaps in several metric dimensions between real traces and existing mobility models. Our findings show that COBRA matches communal aspect and realistic protocol performance, reducing the overhead gap (w.r.t existing models) from 80% to less than 12%, showing the efficacy of our framework.",
"title": ""
},
{
"docid": "e28c2662f3948d346a00298976d9b37c",
"text": "Analysts engaged in real-time monitoring of cybersecurity incidents must quickly and accurately respond to alerts generated by intrusion detection systems. We investigated two complementary approaches to improving analyst performance on this vigilance task: a graph-based visualization of correlated IDS output and defensible recommendations based on machine learning from historical analyst behavior. We tested our approach with 18 professional cybersecurity analysts using a prototype environment in which we compared the visualization with a conventional tabular display, and the defensible recommendations with limited or no recommendations. Quantitative results showed improved analyst accuracy with the visual display and the defensible recommendations. Additional qualitative data from a \"talk aloud\" protocol illustrated the role of displays and recommendations in analysts' decision-making process. Implications for the design of future online analysis environments are discussed.",
"title": ""
},
{
"docid": "50c762b9e01347df5be904c311e42548",
"text": "This paper introduces redundant spin-transfer-torque (STT) magnetic tunnel junction (MTJ) based nonvolatile flip-flops (NVFFs) for low write-error rate (WER) operations. STT-MTJ NVFFs are key components for ultra-low power VLSI systems thanks to zero standby current, but suffers from write errors due to probabilistic switching, causing a failure backup/restore operation. To reduce the WER, redundant STT-MTJ devices are exploited in the proposed NVFFs. As one-bit information is redundantly represented, it is correctly stored upon a few bit write errors, lowering WERs compared to a conventional NVFF at the same write time. Three different redundant structures are presented and discussed in terms of WER and write energy dissipation. For performance comparisons, the proposed redundant STT-MTJ NVFFs are designed using hybrid 90nm CMOS and MTJ technologies and evaluated using NSSPICE that handles both transistors and MTJs. The simulation results show that the proposed NVFF reduces the write time to 36.2% and the write energy to 70.7% at a WER of 10-12 compared to the conventional NVFF.",
"title": ""
},
{
"docid": "4a9ad387ad16727d9ac15ac667d2b1c3",
"text": "In recent years face recognition has received substantial attention from both research communities and the market, but still remained very challenging in real applications. A lot of face recognition algorithms, along with their modifications, have been developed during the past decades. A number of typical algorithms are presented, being categorized into appearancebased and model-based schemes. For appearance-based methods, three linear subspace analysis schemes are presented, and several non-linear manifold analysis approaches for face recognition are briefly described. The model-based approaches are introduced, including Elastic Bunch Graph matching, Active Appearance Model and 3D Morphable Model methods. A number of face databases available in the public domain and several published performance evaluation results are digested. Future research directions based on the current recognition results are pointed out.",
"title": ""
},
{
"docid": "31fb6df8d386f28b63140ee2ad8d11ea",
"text": "The problem and the solution.The majority of the literature on creativity has focused on the individual, yet the social environment can influence both the level and frequency of creative behavior. This article reviews the literature for factors related to organizational culture and climate that act as supports and impediments to organizational creativity and innovation. The work of Amabile, Kanter, Van de Ven, Angle, and others is reviewed and synthesized to provide an integrative understanding of the existing literature. Implications for human resource development research and practice are discussed.",
"title": ""
},
{
"docid": "3d911d6eeefefd16f898200da0e1a3ef",
"text": "We introduce Reality-based User Interface System (RUIS), a virtual reality (VR) toolkit aimed for students and hobbyists, which we have used in an annually organized VR course for the past four years. RUIS toolkit provides 3D user interface building blocks for creating immersive VR applications with spatial interaction and stereo 3D graphics, while supporting affordable VR peripherals like Kinect, PlayStation Move, Razer Hydra, and Oculus Rift. We describe a novel spatial interaction scheme that combines freeform, full-body interaction with traditional video game locomotion, which can be easily implemented with RUIS. We also discuss the specific challenges associated with developing VR applications, and how they relate to the design principles behind RUIS. Finally, we validate our toolkit by comparing development difficulties experienced by users of different software toolkits, and by presenting several VR applications created with RUIS, demonstrating a variety of spatial user interfaces that it can produce.",
"title": ""
},
{
"docid": "1c117c63455c2b674798af0e25e3947c",
"text": "We are studying the manufacturing performance of semiconductor wafer fabrication plants in the US, Asia, and Europe. There are great similarities in production equipment, manufacturing processes, and products produced at semiconductor fabs around the world. However, detailed comparisons over multi-year intervals show that important quantitative indicators of productivity, including defect density (yield), major equipment production rates, wafer throughput time, and effective new process introduction to manufacturing, vary by factors of 3 to as much as 5 across an international sample of 28 fabs. We conduct on-site observations, and interviews with manufacturing personnel at all levels from operator to general manager, to better understand reasons for the observed wide variations in performance. We have identified important factors in the areas of information systems, organizational practices, process and technology improvements, and production control that correlate strongly with high productivity. Optimum manufacturing strategy is different for commodity products, high-value proprietary products, and foundry business.",
"title": ""
},
{
"docid": "df2bc3dce076e3736a195384ae6c9902",
"text": "In this paper, we present bidirectional Long Short Term Memory (LSTM) networks, and a modified, full gradient version of the LSTM learning algorithm. We evaluate Bidirectional LSTM (BLSTM) and several other network architectures on the benchmark task of framewise phoneme classification, using the TIMIT database. Our main findings are that bidirectional networks outperform unidirectional ones, and Long Short Term Memory (LSTM) is much faster and also more accurate than both standard Recurrent Neural Nets (RNNs) and time-windowed Multilayer Perceptrons (MLPs). Our results support the view that contextual information is crucial to speech processing, and suggest that BLSTM is an effective architecture with which to exploit it.",
"title": ""
},
{
"docid": "83ee7b71813ead9656e2972e700ade24",
"text": "In many visual domains (like fashion, furniture, etc.) the search for products on online platforms requires matching textual queries to image content. For example, the user provides a search query in natural language (e.g.,pink floral top) and the results obtained are of a different modality (e.g., the set of images of pink floral tops). Recent work on multimodal representation learning enables such cross-modal matching by learning a common representation space for text and image. While such representations ensure that the n-dimensional representation of pink floral top is very close to representation of corresponding images, they do not ensure that the first k1 (< n) dimensions correspond to color, the next k2 (< n) correspond to style and so on. In other words, they learn entangled representations where each dimension does not correspond to a specific attribute. We propose two simple variants which can learn disentangled common representations for the fashion domain wherein each dimension would correspond to a specific attribute (color, style, silhoutte, etc.). Our proposed variants can be integrated with any existing multimodal representation learning method. We use a large fashion dataset of over 700K fashion items crawled from multiple fashion e-commerce portals to evaluate the learned representations on four different applications from the fashion domain, namely, cross-modal image retrieval, visual search, image tagging, and query expansion. Our experimental results show that the proposed variants lead to better performance for each of these applications while learning disentangled representations.",
"title": ""
},
{
"docid": "cea9c1bab28363fc6f225b7843b8df99",
"text": "Published in Agron. J. 104:1336–1347 (2012) Posted online 29 June 2012 doi:10.2134/agronj2012.0065 Copyright © 2012 by the American Society of Agronomy, 5585 Guilford Road, Madison, WI 53711. All rights reserved. No part of this periodical may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. T leaf area index (LAI), the ratio of leaf area to ground area, typically reported as square meters per square meter, is a commonly used biophysical characteristic of vegetation (Watson, 1947). The LAI can be subdivided into photosynthetically active and photosynthetically inactive components. The former, the gLAI, is a metric commonly used in climate (e.g., Buermann et al., 2001), ecological (e.g., Bulcock and Jewitt, 2010), and crop yield (e.g., Fang et al., 2011) models. Because of its wide use and applicability to modeling, there is a need for a nondestructive remote estimation of gLAI across large geographic areas. Various techniques based on remotely sensed data have been utilized for assessing gLAI (see reviews by Pinter et al., 2003; Hatfield et al., 2004, 2008; Doraiswamy et al., 2003; le Maire et al., 2008, and references therein). Vegetation indices, particularly the NDVI (Rouse et al., 1974) and SR (Jordan, 1969), are the most widely used. The NDVI, however, is prone to saturation at moderate to high gLAI values (Kanemasu, 1974; Curran and Steven, 1983; Asrar et al., 1984; Huete et al., 2002; Gitelson, 2004; Wu et al., 2007; González-Sanpedro et al., 2008) and requires reparameterization for different crops and species. The saturation of NDVI has been attributed to insensitivity of reflectance in the red region at moderate to high gLAI values due to the high absorption coefficient of chlorophyll. For gLAI below 3 m2/m2, total absorption by a canopy in the red range reaches 90 to 95%, and further increases in gLAI do not bring additional changes in absorption and reflectance (Hatfield et al., 2008; Gitelson, 2011). Another reason for the decrease in the sensitivity of NDVI to moderate to high gLAI values is the mathematical formulation of that index. At moderate to high gLAI, the NDVI is dominated by nearinfrared (NIR) reflectance. Because scattering by the cellular or leaf structure causes the NIR reflectance to be high and the absorption by chlorophyll causes the red reflectance to be low, NIR reflectance is considerably greater than red reflectance: e.g., for gLAI >3 m2/m2, NIR reflectance is >40% while red reflectance is <5%. Thus, NDVI becomes insensitive to changes in both red and NIR reflectance. Other commonly used VIs include the Enhanced Vegetation Index, EVI (Liu and Huete, 1995; Huete et al., 1997, 2002), its ABStrAct",
"title": ""
},
{
"docid": "fba7801d0b187a9a5fbb00c9d4690944",
"text": "Acute pulmonary embolism (PE) poses a significant burden on health and survival. Its severity ranges from asymptomatic, incidentally discovered subsegmental thrombi to massive, pressor-dependent PE complicated by cardiogenic shock and multisystem organ failure. Rapid and accurate risk stratification is therefore of paramount importance to ensure the highest quality of care. This article critically reviews currently available and emerging tools for risk-stratifying acute PE, and particularly for distinguishing between elevated (intermediate) and low risk among normotensive patients. We focus on the potential value of risk assessment strategies for optimizing severity-adjusted management. Apart from reviewing the current evidence on advanced early therapy of acute PE (thrombolysis, surgery, catheter interventions, vena cava filters), we discuss recent advances in oral anticoagulation with vitamin K antagonists, and with new direct inhibitors of factor Xa and thrombin, which may contribute to profound changes in the treatment and secondary prophylaxis of venous thrombo-embolism in the near future.",
"title": ""
},
{
"docid": "63063c0a2b08f068c11da6d80236fa87",
"text": "This paper addresses the problem of hallucinating the missing high-resolution (HR) details of a low-resolution (LR) video while maintaining the temporal coherence of the hallucinated HR details by using dynamic texture synthesis (DTS). Most existing multi-frame-based video super-resolution (SR) methods suffer from the problem of limited reconstructed visual quality due to inaccurate sub-pixel motion estimation between frames in a LR video. To achieve high-quality reconstruction of HR details for a LR video, we propose a texture-synthesis-based video super-resolution method, in which a novel DTS scheme is proposed to render the reconstructed HR details in a time coherent way, so as to effectively address the temporal incoherence problem caused by traditional texture synthesis based image SR methods. To further reduce the complexity of the proposed method, our method only performs the DTS-based SR on a selected set of key-frames, while the HR details of the remaining non-key-frames are simply predicted using the bi-directional overlapped block motion compensation. Experimental results demonstrate that the proposed method achieves significant subjective and objective quality improvement over state-of-the-art video SR methods.",
"title": ""
}
] | scidocsrr |
7bd3876d9badd720037ed7ffece74b62 | ARmatika: 3D game for arithmetic learning with Augmented Reality technology | [
{
"docid": "ae4c9e5df340af3bd35ae5490083c72a",
"text": "The massive technological advancements around the world have created significant challenging competition among companies where each of the companies tries to attract the customers using different techniques. One of the recent techniques is Augmented Reality (AR). The AR is a new technology which is capable of presenting possibilities that are difficult for other technologies to offer and meet. Nowadays, numerous augmented reality applications have been used in the industry of different kinds and disseminated all over the world. AR will really alter the way individuals view the world. The AR is yet in its initial phases of research and development at different colleges and high-tech institutes. Throughout the last years, AR apps became transportable and generally available on various devices. Besides, AR begins to occupy its place in our audio-visual media and to be used in various fields in our life in tangible and exciting ways such as news, sports and is used in many domains in our life such as electronic commerce, promotion, design, and business. In addition, AR is used to facilitate the learning whereas it enables students to access location-specific information provided through various sources. Such growth and spread of AR applications pushes organizations to compete one another, and every one of them exerts its best to gain the customers. This paper provides a comprehensive study of AR including its history, architecture, applications, current challenges and future trends.",
"title": ""
},
{
"docid": "273153d0cf32162acb48ed989fa6d713",
"text": "This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.",
"title": ""
},
{
"docid": "f1c00253a57236ead67b013e7ce94a5e",
"text": "A meta-analysis of 128 studies examined the effects of extrinsic rewards on intrinsic motivation. As predicted, engagement-contingent, completion-contingent, and performance-contingent rewards significantly undermined free-choice intrinsic motivation (d = -0.40, -0.36, and -0.28, respectively), as did all rewards, all tangible rewards, and all expected rewards. Engagement-contingent and completion-contingent rewards also significantly undermined self-reported interest (d = -0.15, and -0.17), as did all tangible rewards and all expected rewards. Positive feedback enhanced both free-choice behavior (d = 0.33) and self-reported interest (d = 0.31). Tangible rewards tended to be more detrimental for children than college students, and verbal rewards tended to be less enhancing for children than college students. The authors review 4 previous meta-analyses of this literature and detail how this study's methods, analyses, and results differed from the previous ones.",
"title": ""
}
] | [
{
"docid": "1eafc02a19766817536f3da89230b4cf",
"text": "Basically, Bayesian Belief Networks (BBNs) as probabilistic tools provide suitable facilities for modelling process under uncertainty. A BBN applies a Directed Acyclic Graph (DAG) for encoding relations between all variables in state of problem. Finding the beststructure (structure learning) ofthe DAG is a classic NP-Hard problem in BBNs. In recent years, several algorithms are proposed for this task such as Hill Climbing, Greedy Thick Thinning and K2 search. In this paper, we introduced Simulated Annealing algorithm with complete details as new method for BBNs structure learning. Finally, proposed algorithm compared with other structure learning algorithms based on classification accuracy and construction time on valuable databases. Experimental results of research show that the simulated annealing algorithmis the bestalgorithmfrom the point ofconstructiontime but needs to more attention for classification process.",
"title": ""
},
{
"docid": "82c8a692e3b39e58bd73997b2e922c2c",
"text": "The traditional approaches to building survivable systems assume a framework of absolute trust requiring a provably impenetrable and incorruptible Trusted Computing Base (TCB). Unfortunately, we don’t have TCB’s, and experience suggests that we never will. We must instead concentrate on software systems that can provide useful services even when computational resource are compromised. Such a system will 1) Estimate the degree to which a computational resources may be trusted using models of possible compromises. 2) Recognize that a resource is compromised by relying on a system for long term monitoring and analysis of the computational infrastructure. 3) Engage in self-monitoring, diagnosis and adaptation to best achieve its purposes within the available infrastructure. All this, in turn, depends on the ability of the application, monitoring, and control systems to engage in rational decision making about what resources they should use in order to achieve the best ratio of expected benefit to risk.",
"title": ""
},
{
"docid": "245204d71a7ba2f56897ccb67f26b595",
"text": "The objective of the study is to describe distinguishing characteristics of commercial sexual exploitation of children/child sex trafficking victims (CSEC) who present for health care in the pediatric setting. This is a retrospective study of patients aged 12-18 years who presented to any of three pediatric emergency departments or one child protection clinic, and who were identified as suspected victims of CSEC. The sample was compared with gender and age-matched patients with allegations of child sexual abuse/sexual assault (CSA) without evidence of CSEC on variables related to demographics, medical and reproductive history, high-risk behavior, injury history and exam findings. There were 84 study participants, 27 in the CSEC group and 57 in the CSA group. Average age was 15.7 years for CSEC patients and 15.2 years for CSA patients; 100% of the CSEC and 94.6% of the CSA patients were female. The two groups significantly differed in 11 evaluated areas with the CSEC patients more likely to have had experiences with violence, substance use, running away from home, and involvement with child protective services and/or law enforcement. CSEC patients also had a longer history of sexual activity. Adolescent CSEC victims differ from sexual abuse victims without evidence of CSEC in their reproductive history, high risk behavior, involvement with authorities, and history of violence.",
"title": ""
},
{
"docid": "38382c04e7dc46f5db7f2383dcae11fb",
"text": "Motor schemas serve as the basic unit of behavior specification for the navigation of a mobile robot. They are multiple concurrent processes that operate in conjunction with associated perceptual schemas and contribute independently to the overall concerted action of the vehicle. The motivation behind the use of schemas for this domain is drawn from neuroscientific, psychological, and robotic sources. A variant of the potential field method is used to produce the appropriate velocity and steering commands for the robot. Simulation results and actual mobile robot experiments demonstrate the feasibility of this approach.",
"title": ""
},
{
"docid": "fca196c6900f43cf6fd711f8748c6768",
"text": "The fatigue fracture of structural details subjected to cyclic loads mostly occurs at a critical cross section with stress concentration. The welded joint is particularly dangerous location because of sinergetic harmful effects of stress concentration, tensile residual stresses, deffects, microstructural heterogeneity. Because of these reasons many methods for improving the fatigue resistance of welded joints are developed. Significant increase in fatigue strength and fatigue life was proved and could be attributed to improving weld toe profile, the material microstructure, removing deffects at the weld toe and modifying the original residual stress field. One of the most useful methods to improve fatigue behaviour of welded joints is TIG dressing. The magnitude of the improvement in fatigue performance depends on base material strength, type of welded joint and type of loading. Improvements of the fatigue behaviour of the welded joints in low-carbon structural steel treated by TIG dressing is considered in this paper.",
"title": ""
},
{
"docid": "5b6f55af9994b2c2491344fca573502d",
"text": "From times immemorial, colorants, and flavorings have been used in foods. Color and flavor are the major attributes to the quality of a food product, affecting the appearance and acceptance of the product. As a consequence of the increased demand of natural flavoring and colorant from industries, there is a renewed interest in the research on the composition and recovery of natural food flavors and colors. Over the years, numerous procedures have been proposed for the isolation of aromatic compounds and colors from plant materials. Generally, the methods of extraction followed for aroma and pigment from plant materials are solvent extraction, hydro-distillation, steam distillation, and super critical carbon dioxide extraction. The application of enzymes in the extraction of oil from oil seeds like sunflower, corn, coconut, olives, avocado etc. are reported in literature. There is a great potential for this enzyme-based extraction technology with the selection of appropriate enzymes with optimized operating conditions. Various enzyme combinations are used to loosen the structural integrity of botanical material thereby enhancing the extraction of the desired flavor and color components. Recently enzymes have been used for the extraction of flavor and color from plant materials, as a pre-treatment of the raw material before subjecting the plant material to hydro distillation/solvent extraction. A deep knowledge of enzymes, their mode of action, conditions for optimum activity, and selection of the right type of enzymes are essential to use them effectively for extraction. Although the enzyme hydrolases such as lipases, proteases (chymotrypsin, subtilisin, thermolysin, and papain), esterases use water as a substrate for the reaction, they are also able to accept other nucleophiles such as alcohols, amines, thio-esters, and oximes. Advantages of enzyme-assisted extraction of flavor and color in some of the plant materials in comparison with conventional methods are dealt with in this reveiw.",
"title": ""
},
{
"docid": "46dc94fe4ba164ccf1cb37810112883f",
"text": "The purpose of the study was to test four predictions derived from evolutionary (sexual strategies) theory. The central hypothesis was that men and women possess different emotional mechanisms that motivate and evaluate sexual activities. Consequently, even when women express indifference to emotional involvement and commitment and voluntarily engage in casual sexual relations, their goals, their feelings about the experience, and the associations between their sexual behavior and prospects for long-term investment differ significantly from those of men. Women's sexual behavior is associated with their perception of investment potential: long-term, short-term, and partners' ability and willingness to invest. For men,these associations are weaker or inversed. Regression analyses of survey data from 333 male and 363 female college students revealed the following: Greater permissiveness of sexual attitudes was positively associated with number of sex partners; this association was not moderated by sex of subject (Prediction 1); even when women deliberately engaged in casual sexual relations, thoughts that expressed worry and vulnerability crossed their minds; for females, greater number of partners was associated with increased worry-vulnerability whereas for males the trend was the opposite (Prediction 2); with increasing numbers of sex partners, marital thoughts decreased; this finding was not moderated by sex of subject; this finding did not support Prediction 3; for both males and females, greater number of partners was related to larger numbers of one-night stands, partners foreseen in the next 5 years, and deliberately casual sexual relations. This trend was significantly stronger for males than for females (Prediction 4).",
"title": ""
},
{
"docid": "636f5002b3ced8a541df3e0568604f71",
"text": "We report density functional theory (M06L) calculations including Poisson-Boltzmann solvation to determine the reaction pathways and barriers for the hydrogen evolution reaction (HER) on MoS2, using both a periodic two-dimensional slab and a Mo10S21 cluster model. We find that the HER mechanism involves protonation of the electron rich molybdenum hydride site (Volmer-Heyrovsky mechanism), leading to a calculated free energy barrier of 17.9 kcal/mol, in good agreement with the barrier of 19.9 kcal/mol estimated from the experimental turnover frequency. Hydronium protonation of the hydride on the Mo site is 21.3 kcal/mol more favorable than protonation of the hydrogen on the S site because the electrons localized on the Mo-H bond are readily transferred to form dihydrogen with hydronium. We predict the Volmer-Tafel mechanism in which hydrogen atoms bound to molybdenum and sulfur sites recombine to form H2 has a barrier of 22.6 kcal/mol. Starting with hydrogen atoms on adjacent sulfur atoms, the Volmer-Tafel mechanism goes instead through the M-H + S-H pathway. In discussions of metal chalcogenide HER catalysis, the S-H bond energy has been proposed as the critical parameter. However, we find that the sulfur-hydrogen species is not an important intermediate since the free energy of this species does not play a direct role in determining the effective activation barrier. Rather we suggest that the kinetic barrier should be used as a descriptor for reactivity, rather than the equilibrium thermodynamics. This is supported by the agreement between the calculated barrier and the experimental turnover frequency. These results suggest that to design a more reactive catalyst from edge exposed MoS2, one should focus on lowering the reaction barrier between the metal hydride and a proton from the hydronium in solution.",
"title": ""
},
{
"docid": "eb3886f7e212f2921b3333a8e1b7b0ed",
"text": "With the resurgence of head-mounted displays for virtual reality, users need new input devices that can accurately track their hands and fingers in motion. We introduce Finexus, a multipoint tracking system using magnetic field sensing. By instrumenting the fingertips with electromagnets, the system can track fine fingertip movements in real time using only four magnetic sensors. To keep the system robust to noise, we operate each electromagnet at a different frequency and leverage bandpass filters to distinguish signals attributed to individual sensing points. We develop a novel algorithm to efficiently calculate the 3D positions of multiple electromagnets from corresponding field strengths. In our evaluation, we report an average accuracy of 1.33 mm, as compared to results from an optical tracker. Our real-time implementation shows Finexus is applicable to a wide variety of human input tasks, such as writing in the air.",
"title": ""
},
{
"docid": "19e070089a8495a437e81da50f3eb21c",
"text": "Mobile payment refers to the use of mobile devices to conduct payment transactions. Users can use mobile devices for remote and proximity payments; moreover, they can purchase digital contents and physical goods and services. It offers an alternative payment method for consumers. However, there are relative low adoption rates in this payment method. This research aims to identify and explore key factors that affect the decision of whether to use mobile payments. Two well-established theories, the Technology Acceptance Model (TAM) and the Innovation Diffusion Theory (IDT), are applied to investigate user acceptance of mobile payments. Survey data from mobile payments users will be used to test the proposed hypothesis and the model.",
"title": ""
},
{
"docid": "c71f3284872169d1f506927000df557b",
"text": "Natural rewards and drugs of abuse can alter dopamine signaling, and ventral tegmental area (VTA) dopaminergic neurons are known to fire action potentials tonically or phasically under different behavioral conditions. However, without technology to control specific neurons with appropriate temporal precision in freely behaving mammals, the causal role of these action potential patterns in driving behavioral changes has been unclear. We used optogenetic tools to selectively stimulate VTA dopaminergic neuron action potential firing in freely behaving mammals. We found that phasic activation of these neurons was sufficient to drive behavioral conditioning and elicited dopamine transients with magnitudes not achieved by longer, lower-frequency spiking. These results demonstrate that phasic dopaminergic activity is sufficient to mediate mammalian behavioral conditioning.",
"title": ""
},
{
"docid": "b1827b03bc37fde80f99b73b6547c454",
"text": "When constructing the model of a word by collecting interval-valued data from a group of individuals, both interpersonal and intrapersonal uncertainties coexist. Similar to the interval type-2 fuzzy set (IT2 FS) used in the enhanced interval approach (EIA), the Cloud model characterized by only three parameters can manage both uncertainties. Thus, based on the Cloud model, this paper proposes a new representation model for a word from interval-valued data. In our proposed method, firstly, the collected data intervals are preprocessed to remove the bad ones. Secondly, the fuzzy statistical method is used to compute the histogram of the surviving intervals. Then, the generated histogram is fitted by a Gaussian curve function. Finally, the fitted results are mapped into the parameters of a Cloud model to obtain the parametric model for a word. Compared with eight or nine parameters needed by an IT2 FS, only three parameters are needed to represent a Cloud model. Therefore, we develop a much more parsimonious parametric model for a word based on the Cloud model. Generally a simpler representation model with less parameters usually means less computations and memory requirements in applications. Moreover, the comparison experiments with the recent EIA show that, our proposed method can not only obtain much thinner footprints of uncertainty (FOUs) but also capture sufficient uncertainties of words. 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "4a837ccd9e392f8c7682446d9a3a3743",
"text": "This paper investigates the applicability of Genetic Programming type systems to dynamic game environments. Grammatical Evolution was used to evolve Behaviour Trees, in order to create controllers for the Mario AI Benchmark. The results obtained reinforce the applicability of evolutionary programming systems to the development of artificial intelligence in games, and in dynamic systems in general, illustrating their viability as an alternative to more standard AI techniques.",
"title": ""
},
{
"docid": "d365eceff514375d7ae19f70aec71c08",
"text": "Importance\nSeveral studies now provide evidence of ketamine hydrochloride's ability to produce rapid and robust antidepressant effects in patients with mood and anxiety disorders that were previously resistant to treatment. Despite the relatively small sample sizes, lack of longer-term data on efficacy, and limited data on safety provided by these studies, they have led to increased use of ketamine as an off-label treatment for mood and other psychiatric disorders.\n\n\nObservations\nThis review and consensus statement provides a general overview of the data on the use of ketamine for the treatment of mood disorders and highlights the limitations of the existing knowledge. While ketamine may be beneficial to some patients with mood disorders, it is important to consider the limitations of the available data and the potential risk associated with the drug when considering the treatment option.\n\n\nConclusions and Relevance\nThe suggestions provided are intended to facilitate clinical decision making and encourage an evidence-based approach to using ketamine in the treatment of psychiatric disorders considering the limited information that is currently available. This article provides information on potentially important issues related to the off-label treatment approach that should be considered to help ensure patient safety.",
"title": ""
},
{
"docid": "4620525bfbfd492f469e948b290d73a2",
"text": "This thesis contains the complete end-to-end simulation, development, implementation, and calibration of the wide bandwidth, low-Q, Kiwi-SAS synthetic aperture sonar (SAS). Through the use of a very stable towfish, a new novel wide bandwidth transducer design, and autofocus procedures, high-resolution diffraction limited imagery is produced. As a complete system calibration was performed, this diffraction limited imagery is not only geometrically calibrated, it is also calibrated for target cross-section or target strength estimation. Is is important to note that the diffraction limited images are formed without access to any form of inertial measurement information. Previous investigations applying the synthetic aperture technique to sonar have developed processors based on exact, but inefficient, spatial-temporal domain time-delay and sum beamforming algorithms, or they have performed equivalent operations in the frequency domain using fast-correlation techniques (via the fast Fourier transform (FFT)). In this thesis, the algorithms used in the generation of synthetic aperture radar (SAR) images are derived in their wide bandwidth forms and it is shown that these more efficient algorithms can be used to form diffraction limited SAS images. Several new algorithms are developed; accelerated chirp scaling algorithm represents an efficient method for processing synthetic aperture data, while modified phase gradient autofocus and a low-Q autofocus routine based on prominent point processing are used to focus both simulated and real target data that has been corrupted by known and unknown motion or medium propagation errors.",
"title": ""
},
{
"docid": "260fa16461d510094d810f04c333a220",
"text": "We propose a novel VAE-based deep autoencoder model that can learn disentangled latent representations in a fully unsupervised manner, endowed with the ability to identify all meaningful sources of variation and their cardinality. Our model, dubbed Relevance-Factor-VAE, leverages the total correlation (TC) in the latent space to achieve the disentanglement goal, but also addresses the key issue of existing approaches which cannot distinguish between meaningful and nuisance factors of latent variation, often the source of considerable degradation in disentanglement performance. We tackle this issue by introducing the so-called relevance indicator variables that can be automatically learned from data, together with the VAE parameters. Our model effectively focuses the TC loss onto the relevant factors only by tolerating large prior KL divergences, a desideratum justified by our semi-parametric theoretical analysis. Using a suite of disentanglement metrics, including a newly proposed one, as well as qualitative evidence, we demonstrate that our model outperforms existing methods across several challenging benchmark datasets.",
"title": ""
},
{
"docid": "4791e1e3ccde1260887d3a80ea4577b6",
"text": "The fabulous results of Deep Convolution Neural Networks in computer vision and image analysis have recently attracted considerable attention from researchers of other application domains as well. In this paper we present NgramCNN, a neural network architecture we designed for sentiment analysis of long text documents. It uses pretrained word embeddings for dense feature representation and a very simple single-layer classifier. The complexity is encapsulated in feature extraction and selection parts that benefit from the effectiveness of convolution and pooling layers. For evaluation we utilized different kinds of emotional text datasets and achieved an accuracy of 91.2 % accuracy on the popular IMDB movie reviews. NgramCNN is more accurate than similar shallow convolution networks or deeper recurrent networks that were used as baselines. In the future, we intent to generalize the architecture for state of the art results in sentiment analysis of variable-length texts.",
"title": ""
},
{
"docid": "d82897a2778b3ef6ddfe062f2c778451",
"text": "Inspired by the recent advances in deep learning, we propose a novel iterative belief propagation-convolutional neural network (BP-CNN) architecture to exploit noise correlation for channel decoding under correlated noise. The standard BP decoder is used to estimate the coded bits, followed by a CNN to remove the estimation errors of the BP decoder and obtain a more accurate estimation of the channel noise. Iterating between BP and CNN will gradually improve the decoding SNR and hence result in better decoding performance. To train a well-behaved CNN model, we define a new loss function which involves not only the accuracy of the noise estimation but also the normality test for the estimation errors, i.e., to measure how likely the estimation errors follow a Gaussian distribution. The introduction of the normality test to the CNN training shapes the residual noise distribution and further reduces the BER of the iterative decoding, compared to using the standard quadratic loss function. We carry out extensive experiments to analyze and verify the proposed framework.",
"title": ""
},
{
"docid": "73aa720bebc5f2fa1930930fb4185490",
"text": "A CMOS OTA-C notch filter for 50Hz interference was presented in this paper. The OTAs were working in weak inversion region in order to achieve ultra low transconductance and power consumptions. The circuits were designed using SMIC mixed-signal 0.18nm 1P6M process. The post-annotated simulation indicated that an attenuation of 47.2dB for power line interference and a 120pW consumption. The design achieved a dynamic range of 75.8dB and a THD of 0.1%, whilst the input signal was a 1 Hz 20mVpp sine wave.",
"title": ""
},
{
"docid": "d2d39b17b4047dd43e19ac4272b31c7e",
"text": "Lignocellulose is a term for plant materials that are composed of matrices of cellulose, hemicellulose, and lignin. Lignocellulose is a renewable feedstock for many industries. Lignocellulosic materials are used for the production of paper, fuels, and chemicals. Typically, industry focuses on transforming the polysaccharides present in lignocellulose into products resulting in the incomplete use of this resource. The materials that are not completely used make up the underutilized streams of materials that contain cellulose, hemicellulose, and lignin. These underutilized streams have potential for conversion into valuable products. Treatment of these lignocellulosic streams with bacteria, which specifically degrade lignocellulose through the action of enzymes, offers a low-energy and low-cost method for biodegradation and bioconversion. This review describes lignocellulosic streams and summarizes different aspects of biological treatments including the bacteria isolated from lignocellulose-containing environments and enzymes which may be used for bioconversion. The chemicals produced during bioconversion can be used for a variety of products including adhesives, plastics, resins, food additives, and petrochemical replacements.",
"title": ""
}
] | scidocsrr |
e690711cb18766db09e76ccc5c36c03c | VisReduce: Fast and responsive incremental information visualization of large datasets | [
{
"docid": "98e170b4beb59720e49916835572d1b0",
"text": "Scatterplot matrices (SPLOMs), parallel coordinates, and glyphs can all be used to visualize the multiple continuous variables (i.e., dependent variables or measures) in multidimensional multivariate data. However, these techniques are not well suited to visualizing many categorical variables (i.e., independent variables or dimensions). To visualize multiple categorical variables, 'hierarchical axes' that 'stack dimensions' have been used in systems like Polaris and Tableau. However, this approach does not scale well beyond a small number of categorical variables. Emerson et al. [8] extend the matrix paradigm of the SPLOM to simultaneously visualize several categorical and continuous variables, displaying many kinds of charts in the matrix depending on the kinds of variables involved. We propose a variant of their technique, called the Generalized Plot Matrix (GPLOM). The GPLOM restricts Emerson et al.'s technique to only three kinds of charts (scatterplots for pairs of continuous variables, heatmaps for pairs of categorical variables, and barcharts for pairings of categorical and continuous variable), in an effort to make it easier to understand. At the same time, the GPLOM extends Emerson et al.'s work by demonstrating interactive techniques suited to the matrix of charts. We discuss the visual design and interactive features of our GPLOM prototype, including a textual search feature allowing users to quickly locate values or variables by name. We also present a user study that compared performance with Tableau and our GPLOM prototype, that found that GPLOM is significantly faster in certain cases, and not significantly slower in other cases.",
"title": ""
}
] | [
{
"docid": "40b18b69a3a4011f163d06ef476d9954",
"text": "Potential benefits of using online social network data for clinical studies on depression are tremendous. In this paper, we present a preliminary result on building a research framework that utilizes real-time moods of users captured in the Twitter social network and explore the use of language in describing depressive moods. First, we analyzed a random sample of tweets posted by the general Twitter population during a two-month period to explore how depression is talked about in Twitter. A large number of tweets contained detailed information about depressed feelings, status, as well as treatment history. Going forward, we conducted a study on 69 participants to determine whether the use of sentiment words of depressed users differed from a typical user. We found that the use of words related to negative emotions and anger significantly increased among Twitter users with major depressive symptoms compared to those otherwise. However, no difference was found in the use of words related to positive emotions between the two groups. Our work provides several evidences that online social networks provide meaningful data for capturing depressive moods of users.",
"title": ""
},
{
"docid": "db6e0dff6ba7bd5a0041ef4affe50e9b",
"text": "The flipped voltage follower (FVF), a variant of the common-drain transistor amplifier, comprising local feedback, finds application in circuits such as voltage buffers, current mirrors, class AB amplifiers, frequency compensation circuits and low dropout voltage regulators (LDOs). One of the most important characteristics of the FVF, is its low output impedance. In this tutorial-flavored paper, we perform a theoretical analysis of the transfer function, poles and zeros of the output impedance of the FVF and correlate it with transistor-level simulation results. Utilization of the FVF and its variants has wide application in the analog, mixed-signal and power management circuit design space.",
"title": ""
},
{
"docid": "482ff6c78f7b203125781f5947990845",
"text": "TH1 and TH17 cells mediate neuroinflammation in experimental autoimmune encephalomyelitis (EAE), a mouse model of multiple sclerosis. Pathogenic TH cells in EAE must produce the pro-inflammatory cytokine granulocyte-macrophage colony stimulating factor (GM-CSF). TH cell pathogenicity in EAE is also regulated by cell-intrinsic production of the immunosuppressive cytokine interleukin 10 (IL-10). Here we demonstrate that mice deficient for the basic helix-loop-helix (bHLH) transcription factor Bhlhe40 (Bhlhe40(-/-)) are resistant to the induction of EAE. Bhlhe40 is required in vivo in a T cell-intrinsic manner, where it positively regulates the production of GM-CSF and negatively regulates the production of IL-10. In vitro, GM-CSF secretion is selectively abrogated in polarized Bhlhe40(-/-) TH1 and TH17 cells, and these cells show increased production of IL-10. Blockade of IL-10 receptor in Bhlhe40(-/-) mice renders them susceptible to EAE. These findings identify Bhlhe40 as a critical regulator of autoreactive T-cell pathogenicity.",
"title": ""
},
{
"docid": "2e89bc59f85b14cf40a868399a3ce351",
"text": "CONTEXT\nYouth worldwide play violent video games many hours per week. Previous research suggests that such exposure can increase physical aggression.\n\n\nOBJECTIVE\nWe tested whether high exposure to violent video games increases physical aggression over time in both high- (United States) and low- (Japan) violence cultures. We hypothesized that the amount of exposure to violent video games early in a school year would predict changes in physical aggressiveness assessed later in the school year, even after statistically controlling for gender and previous physical aggressiveness.\n\n\nDESIGN\nIn 3 independent samples, participants' video game habits and physically aggressive behavior tendencies were assessed at 2 points in time, separated by 3 to 6 months.\n\n\nPARTICIPANTS\nOne sample consisted of 181 Japanese junior high students ranging in age from 12 to 15 years. A second Japanese sample consisted of 1050 students ranging in age from 13 to 18 years. The third sample consisted of 364 United States 3rd-, 4th-, and 5th-graders ranging in age from 9 to 12 years. RESULTS. Habitual violent video game play early in the school year predicted later aggression, even after controlling for gender and previous aggressiveness in each sample. Those who played a lot of violent video games became relatively more physically aggressive. Multisample structure equation modeling revealed that this longitudinal effect was of a similar magnitude in the United States and Japan for similar-aged youth and was smaller (but still significant) in the sample that included older youth.\n\n\nCONCLUSIONS\nThese longitudinal results confirm earlier experimental and cross-sectional studies that had suggested that playing violent video games is a significant risk factor for later physically aggressive behavior and that this violent video game effect on youth generalizes across very different cultures. As a whole, the research strongly suggests reducing the exposure of youth to this risk factor.",
"title": ""
},
{
"docid": "58c357c0edd0dfe07ec699d4fba0514b",
"text": "There exist a multitude of execution models available today for a developer to target. The choices vary from general purpose processors to fixed-function hardware accelerators with a large number of variations in-between. There is a growing demand to assess the potential benefits of porting or rewriting an application to a target architecture in order to fully exploit the benefits of performance and/or energy efficiency offered by such targets. However, as a first step of this process, it is necessary to determine whether the application has characteristics suitable for acceleration.\n In this paper, we present Peruse, a tool to characterize the features of loops in an application and to help the programmer understand the amenability of loops for acceleration. We consider a diverse set of features ranging from loop characteristics (e.g., loop exit points) and operation mixes (e.g., control vs data operations) to wider code region characteristics (e.g., idempotency, vectorizability). Peruse is language, architecture, and input independent and uses the intermediate representation of compilers to do the characterization. Using static analyses makes Peruse scalable and enables analysis of large applications to identify and extract interesting loops suitable for acceleration. We show analysis results for unmodified applications from the SPEC CPU benchmark suite, Polybench, and HPC workloads.\n For an end-user it is more desirable to get an estimate of the potential speedup due to acceleration. We use the workload characterization results of Peruse as features and develop a machine-learning based model to predict the potential speedup of a loop when off-loaded to a fixed function hardware accelerator. We use the model to predict the speedup of loops selected by Peruse and achieve an accuracy of 79%.",
"title": ""
},
{
"docid": "acbdb3f3abf3e56807a4e7f60869a2ee",
"text": "In this paper we present a new approach to high quality 3D object reconstruction. Starting from a calibrated sequence of color images, the algorithm is able to reconstruct both the 3D geometry and the texture. The core of the method is based on a deformable model, which defines the framework where texture and silhouette information can be fused. This is achieved by defining two external forces based on the images: a texture driven force and a silhouette driven force. The texture force is computed in two steps: a multi-stereo correlation voting approach and a gradient vector flow diffusion. Due to the high resolution of the voting approach, a multi-grid version of the gradient vector flow has been developed. Concerning the silhouette force, a new formulation of the silhouette constraint is derived. It provides a robust way to integrate the silhouettes in the evolution algorithm. As a consequence, we are able to recover the apparent contours of the model at the end of the iteration process. Finally, a texture map is computed from the original images for the reconstructed 3D model.",
"title": ""
},
{
"docid": "1cb47f75cde728f7ba7c75b54516bc46",
"text": "This paper considers the electrical actuation of aircraft wing surfaces, with particular emphasis on flap systems. It discusses existing hydraulic and electrohydraulic systems and proposes an electrical alternative, examining the potential system benefits in terms of increased functionality, maintenance, and life-cycle costs. This paper then progresses to describe a full-scale actuation demonstrator of the flap system, including the high-speed electrical drive, step-down gearbox, and flaps. Detailed descriptions of the fault-tolerant motor, power electronics, control architecture, and position sensor systems are given, along with a range of test results, demonstrating the system in operation.",
"title": ""
},
{
"docid": "d931f6f9960e8688c2339a27148efe74",
"text": "Most knowledge on the Web is encoded as natural language text, which is convenient for human users but very difficult for software agents to understand. Even with increased use of XML-encoded information, software agents still need to process the tags and literal symbols using application dependent semantics. The Semantic Web offers an approach in which knowledge can be published by and shared among agents using symbols with a well defined, machine-interpretable semantics. The Semantic Web is a “web of data” in that (i) both ontologies and instance data are published in a distributed fashion; (ii) symbols are either ‘literals’ or universally addressable ‘resources’ (URI references) each of which comes with unique semantics; and (iii) information is semi-structured. The Friend-of-a-Friend (FOAF) project (http://www.foafproject.org/) is a good application of the Semantic Web in which users publish their personal profiles by instantiating the foaf:Personclass and adding various properties drawn from any number of ontologies. The Semantic Web’s distributed nature raises significant data access problems – how can an agent discover, index, search and navigate knowledge on the Semantic Web? Swoogle (Dinget al. 2004) was developed to facilitate webscale semantic web data access by providing these services to both human and software agents. It focuses on two levels of knowledge granularity: URI based semantic web vocabulary andsemantic web documents (SWDs), i.e., RDF and OWL documents encoded in XML, NTriples or N3. Figure 1 shows Swoogle’s architecture. The discovery component automatically discovers and revisits SWDs using a set of integrated web crawlers. The digest component computes metadata for SWDs and semantic web terms (SWTs) as well as identifies relations among them, e.g., “an SWD instantiates an SWT class”, and “an SWT class is the domain of an SWT property”. The analysiscomponent uses cached SWDs and their metadata to derive analytical reports, such as classifying ontologies among SWDs and ranking SWDs by their importance. The s rvicecomponent sup-",
"title": ""
},
{
"docid": "a20a03fcb848c310cb966f6e6bc37c86",
"text": "A broad class of problems at the core of computational imaging, sensing, and low-level computer vision reduces to the inverse problem of extracting latent images that follow a prior distribution, from measurements taken under a known physical image formation model. Traditionally, hand-crafted priors along with iterative optimization methods have been used to solve such problems. In this paper we present unrolled optimization with deep priors, a principled framework for infusing knowledge of the image formation into deep networks that solve inverse problems in imaging, inspired by classical iterative methods. We show that instances of the framework outperform the state-of-the-art by a substantial margin for a wide variety of imaging problems, such as denoising, deblurring, and compressed sensing magnetic resonance imaging (MRI). Moreover, we conduct experiments that explain how the framework is best used and why it outperforms previous methods.",
"title": ""
},
{
"docid": "45c3d3a765e565ad3b870b95f934592a",
"text": "This paper describes a fully automated framework to generate realistic head motion, eye gaze, and eyelid motion simultaneously based on live (or recorded) speech input. Its central idea is to learn separate yet interrelated statistical models for each component (head motion, gaze, or eyelid motion) from a prerecorded facial motion data set: 1) Gaussian Mixture Models and gradient descent optimization algorithm are employed to generate head motion from speech features; 2) Nonlinear Dynamic Canonical Correlation Analysis model is used to synthesize eye gaze from head motion and speech features, and 3) nonnegative linear regression is used to model voluntary eye lid motion and log-normal distribution is used to describe involuntary eye blinks. Several user studies are conducted to evaluate the effectiveness of the proposed speech-driven head and eye motion generator using the well-established paired comparison methodology. Our evaluation results clearly show that this approach can significantly outperform the state-of-the-art head and eye motion generation algorithms. In addition, a novel mocap+video hybrid data acquisition technique is introduced to record high-fidelity head movement, eye gaze, and eyelid motion simultaneously.",
"title": ""
},
{
"docid": "a7c9d58c49f1802b94395c6f12c2d6dd",
"text": "Signature-based network intrusion detection systems (NIDSs) have been widely deployed in current network security infrastructure. However, these detection systems suffer from some limitations such as network packet overload, expensive signature matching and massive false alarms in a large-scale network environment. In this paper, we aim to develop an enhanced filter mechanism (named EFM) to comprehensively mitigate these issues, which consists of three major components: a context-aware blacklist-based packet filter, an exclusive signature matching component and a KNN-based false alarm filter. The experiments, which were conducted with two data sets and in a network environment, demonstrate that our proposed EFM can overall enhance the performance of a signaturebased NIDS such as Snort in the aspects of packet filtration, signature matching improvement and false alarm reduction without affecting network security. a 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7c097c95fb50750c082877ab7e277cd9",
"text": "40BAbstract: Disease Intelligence (DI) is based on the acquisition and aggregation of fragmented knowledge of diseases at multiple sources all over the world to provide valuable information to doctors, researchers and information seeking community. Some diseases have their own characteristics changed rapidly at different places of the world and are reported on documents as unrelated and heterogeneous information which may be going unnoticed and may not be quickly available. This research presents an Ontology based theoretical framework in the context of medical intelligence and country/region. Ontology is designed for storing information about rapidly spreading and changing diseases with incorporating existing disease taxonomies to genetic information of both humans and infectious organisms. It further maps disease symptoms to diseases and drug effects to disease symptoms. The machine understandable disease ontology represented as a website thus allows the drug effects to be evaluated on disease symptoms and exposes genetic involvements in the human diseases. Infectious agents which have no known place in an existing classification but have data on genetics would still be identified as organisms through the intelligence of this system. It will further facilitate researchers on the subject to try out different solutions for curing diseases.",
"title": ""
},
{
"docid": "c5f0155b2f6ce35a9cbfa38773042833",
"text": "Leishmaniasis is caused by protozoa of the genus Leishmania, with the presentation restricted to the mucosa being infrequent. Although the nasal mucosa is the main site affected in this form of the disease, it is also possible the involvement of the lips, mouth, pharynx and larynx. The lesions are characteristically ulcerative-vegetative, with granulation tissue formation. Patients usually complain of pain, dysphagia and odynophagia. Differential diagnosis should include cancer, infectious diseases and granulomatous diseases. We present a case of a 64-year-old male patient, coming from an endemic area for American Tegumentary Leishmaniasis (ATL), with a chief complaint of persistent dysphagia and nasal obstruction for 6 months. The lesion was ulcerative with a purulent infiltration into the soft palate and uvula. After excluding other diseases, ATL was suggested as a hypothesis, having been requested serology and biopsy of the lesions. Was started the treatment with pentavalent antimony and the patient presented regression of the lesions in 30 days, with no other complications.",
"title": ""
},
{
"docid": "ce22073b8dbc3a910fa8811a2a8e5c87",
"text": "Ethernet is going to play a major role in automotive communications, thus representing a significant paradigm shift in automotive networking. Ethernet technology will allow for multiple in-vehicle systems (such as, multimedia/infotainment, camera-based advanced driver assistance and on-board diagnostics) to simultaneously access information over a single unshielded twisted pair cable. The leading technology for automotive applications is the IEEE Audio Video Bridging (AVB), which offers several advantages, such as open specification, multiple sources of electronic components, high bandwidth, the compliance with the challenging EMC/EMI automotive requirements, and significant savings on cabling costs, thickness and weight. This paper surveys the state of the art on Ethernet-based automotive communications and especially on the IEEE AVB, with a particular focus on the way to provide support to the so-called scheduled traffic, that is a class of time-sensitive traffic (e.g., control traffic) that is transmitted according to a time schedule.",
"title": ""
},
{
"docid": "d558f980b85bf970a7b57c00df361591",
"text": "URL shortener services today have come to play an important role in our social media landscape. They direct user attention and disseminate information in online social media such as Twitter or Facebook. Shortener services typically provide short URLs in exchange for long URLs. These short URLs can then be shared and diffused by users via online social media, e-mail or other forms of electronic communication. When another user clicks on the shortened URL, she will be redirected to the underlying long URL. Shortened URLs can serve many legitimate purposes, such as click tracking, but can also serve illicit behavior such as fraud, deceit and spam. Although usage of URL shortener services today is ubiquituous, our research community knows little about how exactly these services are used and what purposes they serve. In this paper, we study usage logs of a URL shortener service that has been operated by our group for more than a year. We expose the extent of spamming taking place in our logs, and provide first insights into the planetary-scale of this problem. Our results are relevant for researchers and engineers interested in understanding the emerging phenomenon and dangers of spamming via URL shortener services.",
"title": ""
},
{
"docid": "0d11c687fbf4a0834e753145fec7d7d2",
"text": "A single line feed stacked microstrip antenna for 4G system is presented. The proposed antenna with two properly square patches are stacked. The top patch can perform as a driven element is design on 2.44 GHz and lower patch is also design on 2.44 GHz. The performance of proposed antenna for 4G band frequency (2400-2500 MHz). Also gating the improvement of bandwidth (15%) and antenna efficiency (95%) are very high compared to conventional antenna. Key word — Microstrip patch antenna; stacked, 4G, Antenna efficiency.",
"title": ""
},
{
"docid": "3fe3d1f8b5e141b9044686491fffe12f",
"text": "Data stream is a potentially massive, continuous, rapid sequence of data information. It has aroused great concern and research upsurge in the field of data mining. Clustering is an effective tool of data mining, so data stream clustering will undoubtedly become the focus of the study in data stream mining. In view of the characteristic of the high dimension, dynamic, real-time, many effective data stream clustering algorithms have been proposed. In addition, data stream information are not deterministic and always exist outliers and contain noises, so developing effective data stream clustering algorithm is crucial. This paper reviews the development and trend of data stream clustering and analyzes typical data stream clustering algorithms proposed in recent years, such as Birch algorithm, Local Search algorithm, Stream algorithm and CluStream algorithm. We also summarize the latest research achievements in this field and introduce some new strategies to deal with outliers and noise data. At last, we put forward the focal points and difficulties of future research for data stream clustering.",
"title": ""
},
{
"docid": "133af3ba5310a05ac3bfdaf6178feb6f",
"text": "A new gate drive for high-voltage, high-power IGBT has been developed for the SLAC NLC (Next Linear Collider) Solid State Induction Modulator. This paper describes the design and implementation of a driver that allows an IGBT module rated at 800 A/3300 V to switch up to 3000 A at 2200 V in 3 /spl mu/s with a rate of current rise of more than 10000 A//spl mu/s, while still being short circuit protected. Issues regarding fast turn on, high de-saturation voltage detection, and low short circuit peak current are presented. A novel approach is also used to counter the effect of unequal current sharing between parallel chips inside most high-power IGBT modules. It effectively reduces the collector-emitter peak currents and thus protects the IGBT from being destroyed during soft short circuit conditions at high di/dt.",
"title": ""
},
{
"docid": "1830c839960f8ce9b26c906cc21e2a39",
"text": "This comparative review highlights the relationships between the disciplines of bloodstain pattern analysis (BPA) in forensics and that of fluid dynamics (FD) in the physical sciences. In both the BPA and FD communities, scientists study the motion and phase change of a liquid in contact with air, or with other liquids or solids. Five aspects of BPA related to FD are discussed: the physical forces driving the motion of blood as a fluid; the generation of the drops; their flight in the air; their impact on solid or liquid surfaces; and the production of stains. For each of these topics, the relevant literature from the BPA community and from the FD community is reviewed. Comments are provided on opportunities for joint BPA and FD research, and on the development of novel FD-based tools and methods for BPA. Also, the use of dimensionless numbers is proposed to inform BPA analyses.",
"title": ""
},
{
"docid": "a208f2a2720313479773c00a74b1cbc6",
"text": "I present a web service for querying an embedding of entities in the Wikidata knowledge graph. The embedding is trained on the Wikidata dump using Gensim’s Word2Vec implementation and a simple graph walk. A REST API is implemented. Together with the Wikidata API the web service exposes a multilingual resource for over 600’000 Wikidata items and properties.",
"title": ""
}
] | scidocsrr |
e3aea73581e42c468cb3c5f58d648ad1 | Reputation and social network analysis in multi-agent systems | [
{
"docid": "8e70aea51194dba675d4c3e88ee6b9ad",
"text": "Trust is central to all transactions and yet economists rarely discuss the notion. It is treated rather as background environment, present whenever called upon, a sort of ever-ready lubricant that permits voluntary participation in production and exchange. In the standard model of a market economy it is taken for granted that consumers meet their budget constraints: they are not allowed to spend more than their wealth. Moreover, they always deliver the goods and services they said they would. But the model is silent on the rectitude of such agents. We are not told if they are persons of honour, conditioned by their upbringing always to meet the obligations they have chosen to undertake, or if there is a background agency which enforces contracts, credibly threatening to mete out punishment if obligations are not fulfilled a punishment sufficiently stiff to deter consumers from ever failing to fulfil them. The same assumptions are made for producers. To be sure, the standard model can be extended to allow for bankruptcy in the face of an uncertain future. One must suppose that there is a special additional loss to becoming bankrupt a loss of honour when honour matters, social and economic ostracism, a term in a debtors’ prison, and so forth. Otherwise, a person may take silly risks or, to make a more subtle point, take insufficient care in managing his affairs, but claim that he ran into genuine bad luck, that it was Mother Nature’s fault and not his own lack of ability or zeal.",
"title": ""
}
] | [
{
"docid": "16c87d75564404d52fc2abac55297931",
"text": "SHADE is an adaptive DE which incorporates success-history based parameter adaptation and one of the state-of-the-art DE algorithms. This paper proposes L-SHADE, which further extends SHADE with Linear Population Size Reduction (LPSR), which continually decreases the population size according to a linear function. We evaluated the performance of L-SHADE on CEC2014 benchmarks and compared its search performance with state-of-the-art DE algorithms, as well as the state-of-the-art restart CMA-ES variants. The experimental results show that L-SHADE is quite competitive with state-of-the-art evolutionary algorithms.",
"title": ""
},
{
"docid": "a33c723760f9870744ab004b693e8904",
"text": "Portfolio analysis of the publication profile of a unit of interest, ranging from individuals, organizations, to a scientific field or interdisciplinary programs, aims to inform analysts and decision makers about the position of the unit, where it has been, and where it may go in a complex adaptive environment. A portfolio analysis may aim to identify the gap between the current position of an organization and a goal that it intends to achieve or identify competencies of multiple institutions. We introduce a new visual analytic method for analyzing, comparing, and contrasting characteristics of publication portfolios. The new method introduces a novel design of dual-map thematic overlays on global maps of science. Each publication portfolio can be added as one layer of dual-map overlays over two related but distinct global maps of science, one for citing journals and the other for cited journals. We demonstrate how the new design facilitates a portfolio analysis in terms of patterns emerging from the distributions of citation threads and the dynamics of trajectories as a function of space and time. We first demonstrate the analysis of portfolios defined on a single source article. Then we contrast publication portfolios of multiple comparable units of interest, namely, colleges in universities, corporate research organizations. We also include examples of overlays of scientific fields. We expect the new method will provide new insights to portfolio analysis.",
"title": ""
},
{
"docid": "d597d4a1c32256b95524876218d963da",
"text": "E-commerce in today's conditions has the highest dependence on network infrastructure of banking. However, when the possibility of communicating with the Banking network is not provided, business activities will suffer. This paper proposes a new approach of digital wallet based on mobile devices without the need to exchange physical money or communicate with banking network. A digital wallet is a software component that allows a user to make an electronic payment in cash (such as a credit card or a digital coin), and hides the low-level details of executing the payment protocol that is used to make the payment. The main features of proposed architecture are secure awareness, fault tolerance, and infrastructure-less protocol.",
"title": ""
},
{
"docid": "17b85b7a5019248c4e43b4f5edc68ffb",
"text": "We establish a new connection between value and policy based reinforcement learning (RL) based on a relationship between softmax temporal value consistency and policy optimality under entropy regularization. Specifically, we show that softmax consistent action values correspond to optimal entropy regularized policy probabilities along any action sequence, regardless of provenance. From this observation, we develop a new RL algorithm, Path Consistency Learning (PCL), that minimizes a notion of soft consistency error along multi-step action sequences extracted from both onand off-policy traces. We examine the behavior of PCL in different scenarios and show that PCL can be interpreted as generalizing both actor-critic and Q-learning algorithms. We subsequently deepen the relationship by showing how a single model can be used to represent both a policy and the corresponding softmax state values, eliminating the need for a separate critic. The experimental evaluation demonstrates that PCL significantly outperforms strong actor-critic and Q-learning baselines across several benchmarks.2",
"title": ""
},
{
"docid": "1a9026e0e8fdcd1fab24661beb9ac400",
"text": "Please check this box if you do not wish your email address to be published Acknowledgments: The authors would like to thank the anonymous reviewers for their valuable comments that have enabled the improvement of manuscript's quality. The authors would also like to acknowledge that the Before that, he served as a Researcher Grade D at the research center CERTH/ITI and at research center NCSR \" Demokritos \". He was also founder and manager of the eGovernment Unit at Archetypon SA, an international IT company. He holds a Diploma in Electrical Engineering from the National Technical University of Athens, Greece, and an MSc and PhD from Brunel University, UK. During the past years he has initiated and managed several research projects (e.g. Automation. He has about 200 research publications in the areas of software modeling and development for the domains of eGovernment, eBusiness, eLearning, eManufacturing etc. Structured Abstract: Purpose The purpose of this article is to consolidate existing knowledge and provide a deeper understanding of the use of Social Media (SM) data for predictions in various areas, such as disease outbreaks, product sales, stock market volatility, and elections outcome predictions. Design/methodology/approach The scientific literature was systematically reviewed to identify relevant empirical studies. These studies were analyzed and synthesized in the form of a proposed conceptual framework, which was thereafter applied to further analyze this literature, hence gaining new insights into the field. Findings The proposed framework reveals that all relevant studies can be decomposed into a small number of steps, and different approaches can be followed in each step. The application of the framework resulted in interesting findings. For example, most studies support SM predictive power, however more than one-third of these studies infer predictive power without employing predictive analytics. In addition, analysis suggests that there is a clear need for more advanced sentiment analysis methods as well as methods for identifying search terms for collection and filtering of raw SM data. Value The proposed framework enables researchers to classify and evaluate existing studies, to design scientifically rigorous new studies, and to identify the field's weaknesses, hence proposing future research directions. Purpose: The purpose of this article is to consolidate existing knowledge and provide a deeper understanding of the use of Social Media (SM) data for predictions in various areas, such as disease outbreaks, product sales, stock market volatility, and elections outcome predictions. Design/methodology/approach: The scientific literature was systematically reviewed …",
"title": ""
},
{
"docid": "f6669d0b53dd0ca789219874d35bf14e",
"text": "Saliva in the mouth is a biofluid produced mainly by three pairs of major salivary glands--the submandibular, parotid and sublingual glands--along with secretions from many minor submucosal salivary glands. Salivary gland secretion is a nerve-mediated reflex and the volume of saliva secreted is dependent on the intensity and type of taste and on chemosensory, masticatory or tactile stimulation. Long periods of low (resting or unstimulated) flow are broken by short periods of high flow, which is stimulated by taste and mastication. The nerve-mediated salivary reflex is modulated by nerve signals from other centers in the central nervous system, which is most obvious as hyposalivation at times of anxiety. An example of other neurohormonal influences on the salivary reflex is the circadian rhythm, which affects salivary flow and ionic composition. Cholinergic parasympathetic and adrenergic sympathetic autonomic nerves evoke salivary secretion, signaling through muscarinic M3 and adrenoceptors on salivary acinar cells and leading to secretion of fluid and salivary proteins. Saliva gland acinar cells are chloride and sodium secreting, and the isotonic fluid produced is rendered hypotonic by salivary gland duct cells as it flows to the mouth. The major proteins present in saliva are secreted by salivary glands, creating viscoelasticity and enabling the coating of oral surfaces with saliva. Salivary films are essential for maintaining oral health and regulating the oral microbiome. Saliva in the mouth contains a range of validated and potential disease biomarkers derived from epithelial cells, neutrophils, the microbiome, gingival crevicular fluid and serum. For example, cortisol levels are used in the assessment of stress, matrix metalloproteinases-8 and -9 appear to be promising markers of caries and periodontal disease, and a panel of mRNA and proteins has been proposed as a marker of oral squamous cell carcinoma. Understanding the mechanisms by which components enter saliva is an important aspect of validating their use as biomarkers of health and disease.",
"title": ""
},
{
"docid": "28f1b7635b777cf278cc8d53a5afafb9",
"text": "Visual Question Answering (VQA) is the task of taking as input an image and a free-form natural language question about the image, and producing an accurate answer. In this work we view VQA as a “feature extraction” module to extract image and caption representations. We employ these representations for the task of image-caption ranking. Each feature dimension captures (imagines) whether a fact (question-answer pair) could plausibly be true for the image and caption. This allows the model to interpret images and captions from a wide variety of perspectives. We propose score-level and representation-level fusion models to incorporate VQA knowledge in an existing state-of-the-art VQA-agnostic image-caption ranking model. We find that incorporating and reasoning about consistency between images and captions significantly improves performance. Concretely, our model improves state-of-the-art on caption retrieval by 7.1% and on image retrieval by 4.4% on the MSCOCO dataset.",
"title": ""
},
{
"docid": "cf6816d0a38296a3dc2c04894a102283",
"text": "This paper presents a high-efficiency positive buck- boost converter with mode-select circuits and feed-forward techniques. Four power transistors produce more conduction and more switching losses when the positive buck-boost converter operates in buck-boost mode. Utilizing the mode-select circuit, the proposed converter can decrease the loss of switches and let the positive buck-boost converter operate in buck, buck-boost, or boost mode. By adding feed-forward techniques, the proposed converter can improve transient response when the supply voltages are changed. The proposed converter has been fabricated with TSMC 0.35-μm CMOS 2P4M processes. The total chip area is 2.59 × 2.74 mm2 (with PADs), the output voltage is 3.3 V, and the regulated supply voltage range is from 2.5-5 V. Its switching frequency is 500 kHz and the maximum power efficiency is 91.6% as the load current equals 150 mA.",
"title": ""
},
{
"docid": "0f4ac688367d3ea43643472b7d75ffc9",
"text": "Many non-photorealistic rendering techniques exist to produce artistic ef fe ts from given images. Inspired by various artists, interesting effects can be produced b y using a minimal rendering, where the minimum refers to the number of tones as well as the nu mber and complexity of the primitives used for rendering. Our method is based on va rious computer vision techniques, and uses a combination of refined lines and blocks (po tentially simplified), as well as a small number of tones, to produce abstracted artistic re ndering with sufficient elements from the original image. We also considered a variety of methods to produce different artistic styles, such as colour and two-tone drawing s, and use semantic information to improve renderings for faces. By changing some intuitive par ameters a wide range of visually pleasing results can be produced. Our method is fully automatic. We demonstrate the effectiveness of our method with extensive experiments and a user study.",
"title": ""
},
{
"docid": "8961d0bd4ba45849bd8fa5c53c0cfb1d",
"text": "SUMMARY\nThe program MODELTEST uses log likelihood scores to establish the model of DNA evolution that best fits the data.\n\n\nAVAILABILITY\nThe MODELTEST package, including the source code and some documentation is available at http://bioag.byu. edu/zoology/crandall_lab/modeltest.html.",
"title": ""
},
{
"docid": "d00691959822087a1bddc3b411d27239",
"text": "We consider the lattice Boltzmann method for immiscible multiphase flow simulations. Classical lattice Boltzmann methods for this problem, e.g. the colour gradient method or the free energy approach, can only be applied when density and viscosity ratios are small. Moreover, they use additional fields defined on the whole domain to describe the different phases and model phase separation by special interactions at each node. In contrast, our approach simulates the flow using a single field and separates the fluid phases by a free moving interface. The scheme is based on the lattice Boltzmann method and uses the level set method to compute the evolution of the interface. To couple the fluid phases, we develop new boundary conditions which realise the macroscopic jump conditions at the interface and incorporate surface tension in the lattice Boltzmann framework. Various simulations are presented to validate the numerical scheme, e.g. two-phase channel flows, the Young-Laplace law for a bubble and viscous fingering in a Hele-Shaw cell. The results show that the method is feasible over a wide range of density and viscosity differences.",
"title": ""
},
{
"docid": "704df193801e9cd282c0ce2f8a72916b",
"text": "We present our preliminary work in developing augmented reali ty systems to improve methods for the construction, inspection, and renovatio n of architectural structures. Augmented reality systems add virtual computer-generated mate rial to the surrounding physical world. Our augmented reality systems use see-through headworn displays to overlay graphics and sounds on a person’s naturally occurring vision and hearing. As the person moves about, the position and orientation of his or her head is tracked, allowing the overlaid material to remai n tied to the physical world. We describe an experimental augmented reality system tha t shows the location of columns behind a finished wall, the location of re-bar s inside one of the columns, and a structural analysis of the column. We also discuss our pre liminary work in developing an augmented reality system for improving the constructio n of spaceframes. Potential uses of more advanced augmented reality systems are presented.",
"title": ""
},
{
"docid": "c8daa2571cd7808664d3dbe775cf60ab",
"text": "OBJECTIVE\nTo review the research addressing the relationship of childhood trauma to psychosis and schizophrenia, and to discuss the theoretical and clinical implications.\n\n\nMETHOD\nRelevant studies and previous review papers were identified via computer literature searches.\n\n\nRESULTS\nSymptoms considered indicative of psychosis and schizophrenia, particularly hallucinations, are at least as strongly related to childhood abuse and neglect as many other mental health problems. Recent large-scale general population studies indicate the relationship is a causal one, with a dose-effect.\n\n\nCONCLUSION\nSeveral psychological and biological mechanisms by which childhood trauma increases risk for psychosis merit attention. Integration of these different levels of analysis may stimulate a more genuinely integrated bio-psycho-social model of psychosis than currently prevails. Clinical implications include the need for staff training in asking about abuse and the need to offer appropriate psychosocial treatments to patients who have been abused or neglected as children. Prevention issues are also identified.",
"title": ""
},
{
"docid": "5752868bb14f434ce281733f2ecf84f8",
"text": "Tessellation in fundus is not only a visible feature for aged-related and myopic maculopathy but also confuse retinal vessel segmentation. The detection of tessellated images is an inevitable processing in retinal image analysis. In this work, we propose a model using convolutional neural network for detecting tessellated images. The input to the model is pre-processed fundus image, and the output indicate whether this photograph has tessellation or not. A database with 12,000 colour retinal images is collected to evaluate the classification performance. The best tessellation classifier achieves accuracy of 97.73% and AUC value of 0.9659 using pretrained GoogLeNet and transfer learning technique.",
"title": ""
},
{
"docid": "1f7bd85c5b28f97565d8b38781e875ab",
"text": "Parental socioeconomic status is among the widely cited factors that has strong association with academic performance of students. Explanatory research design was employed to assess the effects of parents’ socioeconomic status on the academic achievement of students in regional examination. To that end, regional examination result of 538 randomly selected students from thirteen junior secondary schools has been analysed using percentage, independent samples t-tests, Spearman’s rho correlation and one way ANOVA. The results of the analysis revealed that socioeconomic status of parents (particularly educational level and occupational status of parents) has strong association with the academic performance of students. Students from educated and better off families have scored higher result in their regional examination than their counterparts. Being a single parent student and whether parents are living together or not have also a significant impact on the academic performance of students. Parents’ age did not have a significant association with the performance of students.",
"title": ""
},
{
"docid": "6868e3b2432d9914a9b4a4fd2b50b3ee",
"text": "Nutritional deficiencies detection for coffee leaves is a task which is often undertaken manually by experts on the field known as agronomists. The process they follow to carry this task is based on observation of the different characteristics of the coffee leaves while relying on their own experience. Visual fatigue and human error in this empiric approach cause leaves to be incorrectly labeled and thus affecting the quality of the data obtained. In this context, different crowdsourcing approaches can be applied to enhance the quality of the data extracted. These approaches separately propose the use of voting systems, association rule filters and evolutive learning. In this paper, we extend the use of association rule filters and evolutive approach by combining them in a methodology to enhance the quality of the data while guiding the users during the main stages of data extraction tasks. Moreover, our methodology proposes a reward component to engage users and keep them motivated during the crowdsourcing tasks. The extracted dataset by applying our proposed methodology in a case study on Peruvian coffee leaves resulted in 93.33% accuracy with 30 instances collected by 8 experts and evaluated by 2 agronomic engineers with background on coffee leaves. The accuracy of the dataset was higher than independently implementing the evolutive feedback strategy and an empiric approach which resulted in 86.67% and 70% accuracy respectively under the same conditions.",
"title": ""
},
{
"docid": "20f43c14feaf2da1e8999403bf350855",
"text": "In this paper we propose a new approach to genetic optimization of modular neural networks with fuzzy response integration. The architecture of the modular neural network and the structure of the fuzzy system (for response integration) are designed using genetic algorithms. The proposed methodology is applied to the case of human recognition based on three biometric measures, namely iris, ear, and voice. Experimental results show that optimal modular neural networks can be designed with the use of genetic algorithms and as a consequence the recognition rates of such networks can be improved significantly. In the case of optimization of the fuzzy system for response integration, the genetic algorithm not only adjusts the number of membership functions and rules, but also allows the variation on the type of logic (type-1 or type-2) and the change in the inference model (switching to Mamdani model or Sugeno model). Another interesting finding of this work is that when human recognition is performed under noisy conditions, the response integrators of the modular networks constructed by the genetic algorithm are found to be optimal when using type-2 fuzzy logic. This could have been expected as there has been experimental evidence from previous works that type-2 fuzzy logic is better suited to model higher levels of uncertainty. 2012 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "a03a67b3442ef08fe378976377e76f76",
"text": "The method of conjugate gradients provides a very effective way to optimize large, deterministic systems by gradient descent. In its standard form, however, it is not amenable to stochastic approximation of the gradient. Here we explore ideas from conjugate gradient in the stochastic (online) setting, using fast Hessian-gradient products to set up low-dimensional Krylov subspaces within individual mini-batches. In our benchmark experiments the resulting online learning algorithms converge orders of magnitude faster than ordinary stochastic gradient descent.",
"title": ""
},
{
"docid": "4584a3a2b0e1cb30ba1976bd564d74b9",
"text": "Deep neural networks (DNNs) have achieved great success, but the applications to mobile devices are limited due to their huge model size and low inference speed. Much effort thus has been devoted to pruning DNNs. Layer-wise neuron pruning methods have shown their effectiveness, which minimize the reconstruction error of linear response with a limited number of neurons in each single layer pruning. In this paper, we propose a new layer-wise neuron pruning approach by minimizing the reconstruction error of nonlinear units, which might be more reasonable since the error before and after activation can change significantly. An iterative optimization procedure combining greedy selection with gradient decent is proposed for single layer pruning. Experimental results on benchmark DNN models show the superiority of the proposed approach. Particularly, for VGGNet, the proposed approach can compress its disk space by 13.6× and bring a speedup of 3.7×; for AlexNet, it can achieve a compression rate of 4.1× and a speedup of 2.2×, respectively.",
"title": ""
},
{
"docid": "f1a0ea0829f44b3ec235074521dc55c3",
"text": "CONTEXT\nWithout detailed evidence of their effectiveness, pedometers have recently become popular as a tool for motivating physical activity.\n\n\nOBJECTIVE\nTo evaluate the association of pedometer use with physical activity and health outcomes among outpatient adults.\n\n\nDATA SOURCES\nEnglish-language articles from MEDLINE, EMBASE, Sport Discus, PsychINFO, Cochrane Library, Thompson Scientific (formerly known as Thompson ISI), and ERIC (1966-2007); bibliographies of retrieved articles; and conference proceedings.\n\n\nSTUDY SELECTION\nStudies were eligible for inclusion if they reported an assessment of pedometer use among adult outpatients, reported a change in steps per day, and included more than 5 participants.\n\n\nDATA EXTRACTION AND DATA SYNTHESIS\nTwo investigators independently abstracted data about the intervention; participants; number of steps per day; and presence or absence of obesity, diabetes, hypertension, or hyperlipidemia. Data were pooled using random-effects calculations, and meta-regression was performed.\n\n\nRESULTS\nOur searches identified 2246 citations; 26 studies with a total of 2767 participants met inclusion criteria (8 randomized controlled trials [RCTs] and 18 observational studies). The participants' mean (SD) age was 49 (9) years and 85% were women. The mean intervention duration was 18 weeks. In the RCTs, pedometer users significantly increased their physical activity by 2491 steps per day more than control participants (95% confidence interval [CI], 1098-3885 steps per day, P < .001). Among the observational studies, pedometer users significantly increased their physical activity by 2183 steps per day over baseline (95% CI, 1571-2796 steps per day, P < .0001). Overall, pedometer users increased their physical activity by 26.9% over baseline. An important predictor of increased physical activity was having a step goal such as 10,000 steps per day (P = .001). When data from all studies were combined, pedometer users significantly decreased their body mass index by 0.38 (95% CI, 0.05-0.72; P = .03). This decrease was associated with older age (P = .001) and having a step goal (P = .04). Intervention participants significantly decreased their systolic blood pressure by 3.8 mm Hg (95% CI, 1.7-5.9 mm Hg, P < .001). This decrease was associated with greater baseline systolic blood pressure (P = .009) and change in steps per day (P = .08).\n\n\nCONCLUSIONS\nThe results suggest that the use of a pedometer is associated with significant increases in physical activity and significant decreases in body mass index and blood pressure. Whether these changes are durable over the long term is undetermined.",
"title": ""
}
] | scidocsrr |
ff87137881321554168d6922bafec025 | Benchmarking Database Systems A Systematic Approach | [
{
"docid": "978b1e9b3a5c4c92f265795a944e575d",
"text": "The currently operational (March 1976) version of the INGRES database management system is described. This multiuser system gives a relational view of data, supports two high level nonprocedural data sublanguages, and runs as a collection of user processes on top of the UNIX operating system for Digital Equipment Corporation PDP 11/40, 11/45, and 11/70 computers. Emphasis is on the design decisions and tradeoffs related to (1) structuring the system into processes, (2) embedding one command language in a general purpose programming language, (3) the algorithms implemented to process interactions, (4) the access methods implemented, (5) the concurrency and recovery control currently provided, and (6) the data structures used for system catalogs and the role of the database administrator.\nAlso discussed are (1) support for integrity constraints (which is only partly operational), (2) the not yet supported features concerning views and protection, and (3) future plans concerning the system.",
"title": ""
}
] | [
{
"docid": "e0b85ff6cd78f1640f25215ede3a39e6",
"text": "Grammatical error diagnosis is an important task in natural language processing. This paper introduces our Chinese Grammatical Error Diagnosis (CGED) system in the NLP-TEA-3 shared task for CGED. The CGED system can diagnose four types of grammatical errors which are redundant words (R), missing words (M), bad word selection (S) and disordered words (W). We treat the CGED task as a sequence labeling task and describe three models, including a CRFbased model, an LSTM-based model and an ensemble model using stacking. We also show in details how we build and train the models. Evaluation includes three levels, which are detection level, identification level and position level. On the CGED-HSK dataset of NLP-TEA-3 shared task, our system presents the best F1-scores in all the three levels and also the best recall in the last two levels.",
"title": ""
},
{
"docid": "6af138889b6eaeaa6ea8ee4edd7f8aaf",
"text": "University of Leipzig, Natural Language Processing Department, Johannisgasse 26, 04081 Leipzig, Germany [email protected], {quasthoff, heyer}@informatik.uni-leipzig.de Abstract SentimentWortschatz, or SentiWS for short, is a publicly available German-language resource for sentiment analysis, opinion mining etc. It lists positive and negative sentiment bearing words weighted within the interval of [−1; 1] plus their part of speech tag, and if applicable, their inflections. The current version of SentiWS (v1.8b) contains 1,650 negative and 1,818 positive words, which sum up to 16,406 positive and 16,328 negative word forms, respectively. It not only contains adjectives and adverbs explicitly expressing a sentiment, but also nouns and verbs implicitly containing one. The present work describes the resource’s structure, the three sources utilised to assemble it and the semi-supervised method incorporated to weight the strength of its entries. Furthermore the resource’s contents are extensively evaluated using a German-language evaluation set we constructed. The evaluation set is verified being reliable and its shown that SentiWS provides a beneficial lexical resource for German-language sentiment analysis related tasks to build on.",
"title": ""
},
{
"docid": "77c18ca76341a691b7c0093a88583c82",
"text": "Biometric characteristics can be utilized in order to enable reliable and robust-to-impostor-attacks person recognition. Speaker recognition technology is commonly utilized in various systems enabling natural human computer interaction. The majority of the speaker recognition systems rely only on acoustic information, ignoring the visual modality. However, visual information conveys correlated and complimentary information to the audio information and its integration into a recognition system can potentially increase the system's performance, especially in the presence of adverse acoustic conditions. Acoustic and visual biometric signals, such as the person's voice and face, can be obtained using unobtrusive and user-friendly procedures and low-cost sensors. Developing unobtrusive biometric systems makes biometric technology more socially acceptable and accelerates its integration into every day life. In this paper, we describe the main components of audio-visual biometric systems, review existing systems and their performance, and discuss future research and development directions in this area",
"title": ""
},
{
"docid": "a78913db9636369b2d7d8cb5e5a6a351",
"text": "We propose a simple but strong baseline for time series classification from scratch with deep neural networks. Our proposed baseline models are pure end-to-end without any heavy preprocessing on the raw data or feature crafting. The proposed Fully Convolutional Network (FCN) achieves premium performance to other state-of-the-art approaches and our exploration of the very deep neural networks with the ResNet structure is also competitive. The global average pooling in our convolutional model enables the exploitation of the Class Activation Map (CAM) to find out the contributing region in the raw data for the specific labels. Our models provides a simple choice for the real world application and a good starting point for the future research. An overall analysis is provided to discuss the generalization capability of our models, learned features, network structures and the classification semantics.",
"title": ""
},
{
"docid": "f264d5b90dfb774e9ec2ad055c4ebe62",
"text": "Automatic citation recommendation can be very useful for authoring a paper and is an AI-complete problem due to the challenge of bridging the semantic gap between citation context and the cited paper. It is not always easy for knowledgeable researchers to give an accurate citation context for a cited paper or to find the right paper to cite given context. To help with this problem, we propose a novel neural probabilistic model that jointly learns the semantic representations of citation contexts and cited papers. The probability of citing a paper given a citation context is estimated by training a multi-layer neural network. We implement and evaluate our model on the entire CiteSeer dataset, which at the time of this work consists of 10,760,318 citation contexts from 1,017,457 papers. We show that the proposed model significantly outperforms other stateof-the-art models in recall, MAP, MRR, and nDCG.",
"title": ""
},
{
"docid": "f35d164bd1b19f984b10468c41f149e3",
"text": "Recent technological advancements have led to a deluge of data from distinctive domains (e.g., health care and scientific sensors, user-generated data, Internet and financial companies, and supply chain systems) over the past two decades. The term big data was coined to capture the meaning of this emerging trend. In addition to its sheer volume, big data also exhibits other unique characteristics as compared with traditional data. For instance, big data is commonly unstructured and require more real-time analysis. This development calls for new system architectures for data acquisition, transmission, storage, and large-scale data processing mechanisms. In this paper, we present a literature survey and system tutorial for big data analytics platforms, aiming to provide an overall picture for nonexpert readers and instill a do-it-yourself spirit for advanced audiences to customize their own big-data solutions. First, we present the definition of big data and discuss big data challenges. Next, we present a systematic framework to decompose big data systems into four sequential modules, namely data generation, data acquisition, data storage, and data analytics. These four modules form a big data value chain. Following that, we present a detailed survey of numerous approaches and mechanisms from research and industry communities. In addition, we present the prevalent Hadoop framework for addressing big data challenges. Finally, we outline several evaluation benchmarks and potential research directions for big data systems.",
"title": ""
},
{
"docid": "2baf55123171c6e2110b19b1583c3d17",
"text": "A novel three-way power divider using tapered lines is presented. It has several strip resistors which are formed like a ladder between the tapered-line conductors to achieve a good output isolation. The equivalent circuits are derived with the EE/OE/OO-mode analysis based on the fundamental propagation modes in three-conductor coupled lines. The fabricated three-way power divider shows a broadband performance in input return loss which is greater than 20 dB over a 3:1 bandwidth in the C-Ku bands.",
"title": ""
},
{
"docid": "86497dcdfd05162804091a3368176ad5",
"text": "This paper reviews the current status and implementation of battery chargers, charging power levels and infrastructure for plug-in electric vehicles and hybrids. Battery performance depends both on types and design of the batteries, and on charger characteristics and charging infrastructure. Charger systems are categorized into off-board and on-board types with unidirectional or bidirectional power flow. Unidirectional charging limits hardware requirements and simplifies interconnection issues. Bidirectional charging supports battery energy injection back to the grid. Typical onboard chargers restrict the power because of weight, space and cost constraints. They can be integrated with the electric drive for avoiding these problems. The availability of a charging infrastructure reduces on-board energy storage requirements and costs. On-board charger systems can be conductive or inductive. While conductive chargers use direct contact, inductive chargers transfer power magnetically. An off-board charger can be designed for high charging rates and is less constrained by size and weight. Level 1 (convenience), Level 2 (primary), and Level 3 (fast) power levels are discussed. These system configurations vary from country to country depending on the source and plug capacity standards. Various power level chargers and infrastructure configurations are presented, compared, and evaluated based on amount of power, charging time and location, cost, equipment, effect on the grid, and other factors.",
"title": ""
},
{
"docid": "e43242ed17a0b2fa9fca421179135ce1",
"text": "Direct digital synthesis (DDS) is a useful tool for generating periodic waveforms. In this two-part article, the basic idea of this synthesis technique is presented and then focused on the quality of the sinewave a DDS can create, introducing the SFDR quality parameter. Next effective methods to increase the SFDR are presented through sinewave approximations, hardware schemes such as dithering and noise shaping, and an extensive list of reference. When the desired output is a digital signal, the signal's characteristics can be accurately predicted using the formulas given in this article. When the desired output is an analog signal, the reader should keep in mind that the performance of the DDS is eventually limited by the performance of the digital-to-analog converter and the follow-on analog filter. Hoping that this article would incite engineers to use DDS either in integrated circuits DDS or software-implemented DDS. From the author's experience, this technique has proven valuable when frequency resolution is the challenge, particularly when using low-cost microcontrollers.",
"title": ""
},
{
"docid": "b8d8785968023a38d742abc15c01ee28",
"text": "Cryptocurrencies (or digital tokens, digital currencies, e.g., BTC, ETH, XRP, NEO) have been rapidly gaining ground in use, value, and understanding among the public, bringing astonishing profits to investors. Unlike other money and banking systems, most digital tokens do not require central authorities. Being decentralized poses significant challenges for credit rating. Most ICOs are currently not subject to government regulations, which makes a reliable credit rating system for ICO projects necessary and urgent. In this paper, we introduce ICORATING, the first learning–based cryptocurrency rating system. We exploit natural-language processing techniques to analyze various aspects of 2,251 digital currencies to date, such as white paper content, founding teams, Github repositories, websites, etc. Supervised learning models are used to correlate the life span and the price change of cryptocurrencies with these features. For the best setting, the proposed system is able to identify scam ICO projects with 0.83 precision. We hope this work will help investors identify scam ICOs and attract more efforts in automatically evaluating and analyzing ICO projects. 1 2 Author contributions: J. Li designed research; Z. Sun, Z. Deng, F. Li and P. Shi prepared the data; S. Bian and A. Yuan contributed analytic tools; P. Shi and Z. Deng labeled the dataset; J. Li, W. Monroe and W. Wang designed the experiments; J. Li, W. Wu, Z. Deng and T. Zhang performed the experiments; J. Li and T. Zhang wrote the paper; W. Monroe and A. Yuan proofread the paper. Author Contacts: Figure 1: Market capitalization v.s. time. Figure 2: The number of new ICO projects v.s. time.",
"title": ""
},
{
"docid": "3a95be7cbc37f20a6c41b84f78013263",
"text": "We demonstrate a simple strategy to cope with missing data in sequential inputs, addressing the task of multilabel classification of diagnoses given clinical time series. Collected from the pediatric intensive care unit (PICU) at Children’s Hospital Los Angeles, our data consists of multivariate time series of observations. The measurements are irregularly spaced, leading to missingness patterns in temporally discretized sequences. While these artifacts are typically handled by imputation, we achieve superior predictive performance by treating the artifacts as features. Unlike linear models, recurrent neural networks can realize this improvement using only simple binary indicators of missingness. For linear models, we show an alternative strategy to capture this signal. Training models on missingness patterns only, we show that for some diseases, what tests are run can as predictive as the results themselves.",
"title": ""
},
{
"docid": "27f773226c458febb313fd48b59c7222",
"text": "This thesis presents extensions to the local binary pattern (LBP) texture analysis operator. The operator is defined as a gray-scale invariant texture measure, derived from a general definition of texture in a local neighborhood. It is made invariant against the rotation of the image domain, and supplemented with a rotation invariant measure of local contrast. The LBP is proposed as a unifying texture model that describes the formation of a texture with micro-textons and their statistical placement rules. The basic LBP is extended to facilitate the analysis of textures with multiple scales by combining neighborhoods with different sizes. The possible instability in sparse sampling is addressed with Gaussian low-pass filtering, which seems to be somewhat helpful. Cellular automata are used as texture features, presumably for the first time ever. With a straightforward inversion algorithm, arbitrarily large binary neighborhoods are encoded with an eight-bit cellular automaton rule, resulting in a very compact multi-scale texture descriptor. The performance of the new operator is shown in an experiment involving textures with multiple spatial scales. An opponent-color version of the LBP is introduced and applied to color textures. Good results are obtained in static illumination conditions. An empirical study with different color and texture measures however shows that color and texture should be treated separately. A number of different applications of the LBP operator are presented, emphasizing real-time issues. A very fast software implementation of the operator is introduced, and different ways of speeding up classification are evaluated. The operator is successfully applied to industrial visual inspection applications and to image retrieval.",
"title": ""
},
{
"docid": "aa13ec272d10ba36ef0d7e530e5dbb39",
"text": "Markov chain Monte Carlo (MCMC) methods are often deemed far too computationally intensive to be of any practical use for large datasets. This paper describes a methodology that aims to scale up the Metropolis-Hastings (MH) algorithm in this context. We propose an approximate implementation of the accept/reject step of MH that only requires evaluating the likelihood of a random subset of the data, yet is guaranteed to coincide with the accept/reject step based on the full dataset with a probability superior to a user-specified tolerance level. This adaptive subsampling technique is an alternative to the recent approach developed in (Korattikara et al., 2014), and it allows us to establish rigorously that the resulting approximate MH algorithm samples from a perturbed version of the target distribution of interest, whose total variation distance to this very target is controlled explicitly. We explore the benefits and limitations of this scheme on several examples.",
"title": ""
},
{
"docid": "d2086d9c52ca9d4779a2e5070f9f3009",
"text": "Though action recognition based on complete videos has achieved great success recently, action prediction remains a challenging task as the information provided by partial videos is not discriminative enough for classifying actions. In this paper, we propose a Deep Residual Feature Learning (DeepRFL) framework to explore more discriminative information from partial videos, achieving similar representations as those of complete videos. The proposed method is based on residual learning, which captures the salient differences between partial videos and their corresponding full videos. The partial videos can attain the missing information by learning from features of complete videos and thus improve the discriminative power. Moreover, our model can be trained efficiently in an end-to-end fashion. Extensive evaluations on the challenging UCF101 and HMDB51 datasets demonstrate that the proposed method outperforms state-of-the-art results.",
"title": ""
},
{
"docid": "512bd1e06d0ce9c920382e1f0843ea33",
"text": "— Diagnosis of the Parkinson disease through machine learning approache provides better understanding from PD dataset in the present decade. Orange v2.0b and weka v3.4.10 has been used in the present experimentation for the statistical analysis, classification, Evaluation and unsupervised learning methods. Voice dataset for Parkinson disease has been retrieved from UCI Machine learning repository from Center for Machine Learning and Intelligent Systems. The dataset contains name, attributes. The parallel coordinates shows higher variation in Parkinson disease dataset. SVM has shown good accuracy (88.9%) compared to Majority and k-NN algorithms. Classification algorithm like Random Forest has shown good accuracy (90.26) and Naïve Bayes has shown least accuracy (69.23. Higher number of clusters in healthy dataset in Fo and less number in diseased data has been predicted by Hierarchal clustering and SOM.",
"title": ""
},
{
"docid": "e7a6bb8f63e35f3fb0c60bdc26817e03",
"text": "A simple mechanism is presented, based on ant-like agents, for routing and load balancing in telecommunications networks, following the initial works of Appleby and Stewart (1994) and Schoonderwoerd et al. (1997). In the present work, agents are very similar to those proposed by Schoonderwoerd et al. (1997), but are supplemented with a simplified dynamic programming capability, initially experimented by Guérin (1997) with more complex agents, which is shown to significantly improve the network's relaxation and its response to perturbations. Topic area: Intelligent agents and network management",
"title": ""
},
{
"docid": "491f49dd73578b751f8f3e9afe64341e",
"text": "Multitask learning often improves system performance for morphosyntactic and semantic tagging tasks. However, the question of when and why this is the case has yet to be answered satisfactorily. Although previous work has hypothesised that this is linked to the label distributions of the auxiliary task, we argue that this is not sufficient. We show that information-theoretic measures which consider the joint label distributions of the main and auxiliary tasks offer far more explanatory value. Our findings are empirically supported by experiments for morphosyntactic tasks on 39 languages, and are in line with findings in the literature for several semantic tasks.",
"title": ""
},
{
"docid": "1b7f31c73dd99b6957d8b5c85240b060",
"text": "We propose a novel approach to address the Simultaneous Detection and Segmentation problem introduced in [8]. Using the hierarchical structures first presented in [1] we use an efficient and accurate procedure that exploits the hierarchy feature information using Locality Sensitive Hashing. We build on recent work that utilizes convolutional neural networks to detect bounding boxes in an image (Faster R-CNN [11]) and then use the top similar hierarchical region that best fits each bounding box after hashing, we call this approach HashBox. We then refine our final segmentation results by automatic hierarchy pruning. HashBox introduces a train-free alternative to Hypercolumns [7]. We conduct extensive experiments on Pascal VOC 2012 segmentation dataset, showing that HashBox gives competitive state-of-the-art object segmentations.",
"title": ""
},
{
"docid": "b31676e958e8345132780499e5dd968d",
"text": "Following triggered corporate bankruptcies, an increasing number of prediction models have emerged since 1960s. This study provides a critical analysis of methodologies and empirical findings of applications of these models across 10 different countries. The study’s empirical exercise finds that predictive accuracies of different corporate bankruptcy prediction models are, generally, comparable. Artificially Intelligent Expert System (AIES) models perform marginally better than statistical and theoretical models. Overall, use of Multiple Discriminant Analysis (MDA) dominates the research followed by logit models. Study deduces useful observations and recommendations for future research in this field. JEL classification: G33; C49; C88",
"title": ""
},
{
"docid": "d882657765647d9e84b8ad729a079833",
"text": "Multiple treebanks annotated under heterogeneous standards give rise to the research question of best utilizing multiple resources for improving statistical models. Prior research has focused on discrete models, leveraging stacking and multi-view learning to address the problem. In this paper, we empirically investigate heterogeneous annotations using neural network models, building a neural network counterpart to discrete stacking and multiview learning, respectively, finding that neural models have their unique advantages thanks to the freedom from manual feature engineering. Neural model achieves not only better accuracy improvements, but also an order of magnitude faster speed compared to its discrete baseline, adding little time cost compared to a neural model trained on a single treebank.",
"title": ""
}
] | scidocsrr |
5e06328e2a74b35fe5b70d5bffb0c06c | Clone Detection Using Abstract Syntax Suffix Trees | [
{
"docid": "a17052726cbf3239c3f516b51af66c75",
"text": "Source code duplication occurs frequently within large software systems. Pieces of source code, functions, and data types are often duplicated in part, or in whole, for a variety of reasons. Programmers may simply be reusing a piece of code via copy and paste or they may be “reinventing the wheel”. Previous research on the detection of clones is mainly focused on identifying pieces of code with similar (or nearly similar) structure. Our approach is to examine the source code text (comments and identifiers) and identify implementations of similar high-level concepts (e.g., abstract data types). The approach uses an information retrieval technique (i.e., latent semantic indexing) to statically analyze the software system and determine semantic similarities between source code documents (i.e., functions, files, or code segments). These similarity measures are used to drive the clone detection process. The intention of our approach is to enhance and augment existing clone detection methods that are based on structural analysis. This synergistic use of methods will improve the quality of clone detection. A set of experiments is presented that demonstrate the usage of semantic similarity measure to identify clones within a version of NCSA Mosaic.",
"title": ""
},
{
"docid": "b09eedfc1b27d5666846c18423d1ad54",
"text": "Recent years have seen many significant advances in program comprehension and software maintenance automation technology. In spite of the enormous potential savings in software maintenance costs, for the most part adoption of these ideas in industry remains at the experimental prototype stage. In this paper I explore some of the practical reasons for industrial resistance to adoption of software maintenance automation. Based on the experience of six years of software maintenance automation services to the financial industry involving more than 4.5 Gloc of code at Legasys Corporation, I discuss some of the social, technical and business realities that lie at the root of this resistance, outline various Legasys attempts overcome these barriers, and suggest some approaches to software maintenance automation that may lead to higher levels of industrial acceptance in the future.",
"title": ""
}
] | [
{
"docid": "dd634fe7f5bfb5d08d0230c3e64220a4",
"text": "Living in an oxygenated environment has required the evolution of effective cellular strategies to detect and detoxify metabolites of molecular oxygen known as reactive oxygen species. Here we review evidence that the appropriate and inappropriate production of oxidants, together with the ability of organisms to respond to oxidative stress, is intricately connected to ageing and life span.",
"title": ""
},
{
"docid": "df96263c86a36ed30e8a074354b09239",
"text": "We propose three iterative superimposed-pilot based channel estimators for Orthogonal Frequency Division Multiplexing (OFDM) systems. Two are approximate maximum-likelihood, derived by using a Taylor expansion of the conditional probability density function of the received signal or by approximating the OFDM time signal as Gaussian, and one is minimum-mean square error. The complexity per iteration of these estimators is given by approximately O(NL2), O(N3) and O(NL), where N is the number of OFDM subcarriers and L is the channel length (time). Two direct (non-iterative) data detectors are also derived by averaging the log likelihood function over the channel statistics. These detectors require minimising the cost metric in an integer space, and we suggest the use of the sphere decoder for them. The Cramér--Rao bound for superimposed pilot based channel estimation is derived, and this bound is achieved by the proposed estimators. The optimal pilot placement is shown to be the equally spaced distribution of pilots. The bit error rate of the proposed estimators is simulated for N = 32 OFDM system. Our estimators perform fairly close to a separated training scheme, but without any loss of spectral efficiency. Copyright © 2011 John Wiley & Sons, Ltd. *Correspondence Chintha Tellambura, Department of Electrical and Computer Engineering, University Alberta, Edmonton, Alberta, Canada T6G 2C5. E-mail: [email protected] Received 20 July 2009; Revised 23 July 2010; Accepted 13 October 2010",
"title": ""
},
{
"docid": "d4ac0d6890cc89e2525b9537376cce39",
"text": "Unsupervised over-segmentation of an image into regions of perceptually similar pixels, known as super pixels, is a widely used preprocessing step in segmentation algorithms. Super pixel methods reduce the number of regions that must be considered later by more computationally expensive algorithms, with a minimal loss of information. Nevertheless, as some information is inevitably lost, it is vital that super pixels not cross object boundaries, as such errors will propagate through later steps. Existing methods make use of projected color or depth information, but do not consider three dimensional geometric relationships between observed data points which can be used to prevent super pixels from crossing regions of empty space. We propose a novel over-segmentation algorithm which uses voxel relationships to produce over-segmentations which are fully consistent with the spatial geometry of the scene in three dimensional, rather than projective, space. Enforcing the constraint that segmented regions must have spatial connectivity prevents label flow across semantic object boundaries which might otherwise be violated. Additionally, as the algorithm works directly in 3D space, observations from several calibrated RGB+D cameras can be segmented jointly. Experiments on a large data set of human annotated RGB+D images demonstrate a significant reduction in occurrence of clusters crossing object boundaries, while maintaining speeds comparable to state-of-the-art 2D methods.",
"title": ""
},
{
"docid": "95efc564448b3ec74842d047f94cb779",
"text": "Over the past 25 years or so there has been much interest in the use of digital pre-distortion (DPD) techniques for the linearization of RF and microwave power amplifiers. In this paper, we describe the important system and hardware requirements for the four main subsystems found in the DPD linearized transmitter: RF/analog, data converters, digital signal processing, and the DPD architecture and algorithms, and illustrate how the overall DPD system architecture is influenced by the design choices that may be made in each of these subsystems. We shall also consider the challenges presented to future applications of DPD systems for wireless communications, such as higher operating frequencies, wider signal bandwidths, greater spectral efficiency signals, resulting in higher peak-to-average power ratios, multiband and multimode operation, lower power consumption requirements, faster adaption, and how these affect the system design choices.",
"title": ""
},
{
"docid": "ed0342748fff5c1ced69700cfd922884",
"text": "Many applications of histograms for the purposes of image processing are well known. However, applying this process to the transform domain by way of a transform coefficient histogram has not yet been fully explored. This paper proposes three methods of image enhancement: a) logarithmic transform histogram matching, b) logarithmic transform histogram shifting, and c) logarithmic transform histogram shaping using Gaussian distributions. They are based on the properties of the logarithmic transform domain histogram and histogram equalization. The presented algorithms use the fact that the relationship between stimulus and perception is logarithmic and afford a marriage between enhancement qualities and computational efficiency. A human visual system-based quantitative measurement of image contrast improvement is also defined. This helps choose the best parameters and transform for each enhancement. A number of experimental results are presented to illustrate the performance of the proposed algorithms",
"title": ""
},
{
"docid": "6c5c6e201e2ae886908aff554866b9ed",
"text": "HDBSCAN: Hierarchical Density-Based Spatial Clustering of Applications with Noise (Campello, Moulavi, and Sander 2013), (Campello et al. 2015). Performs DBSCAN over varying epsilon values and integrates the result to find a clustering that gives the best stability over epsilon. This allows HDBSCAN to find clusters of varying densities (unlike DBSCAN), and be more robust to parameter selection. The library also includes support for Robust Single Linkage clustering (Chaudhuri et al. 2014), (Chaudhuri and Dasgupta 2010), GLOSH outlier detection (Campello et al. 2015), and tools for visualizing and exploring cluster structures. Finally support for prediction and soft clustering is also available.",
"title": ""
},
{
"docid": "827c9d65c2c3a2a39d07c9df7a21cfe2",
"text": "A worldwide movement in advanced manufacturing countries is seeking to reinvigorate (and revolutionize) the industrial and manufacturing core competencies with the use of the latest advances in information and communications technology. Visual computing plays an important role as the \"glue factor\" in complete solutions. This article positions visual computing in its intrinsic crucial role for Industrie 4.0 and provides a general, broad overview and points out specific directions and scenarios for future research.",
"title": ""
},
{
"docid": "1f3e600ce5be2a55234c11e19e11cb67",
"text": "In this paper, we propose a noise robust speech recognition system built using generalized distillation framework. It is assumed that during training, in addition to the training data, some kind of ”privileged” information is available and can be used to guide the training process. This allows to obtain a system which at test time outperforms those built on regular training data alone. In the case of noisy speech recognition task, the privileged information is obtained from a model, called ”teacher”, trained on clean speech only. The regular model, called ”student”, is trained on noisy utterances and uses teacher’s output for the corresponding clean utterances. Thus, for this framework a parallel clean/noisy speech data are required. We experimented on the Aurora2 database which provides such kind of data. Our system uses hybrid DNN-HMM acoustic model where neural networks provide HMM state probabilities during decoding. The teacher DNN is trained on the clean data, while the student DNN is trained using multi-condition (various SNRs) data. The student DNN loss function combines the targets obtained from forced alignment of the training data and the outputs of the teacher DNN when fed with the corresponding clean features. Experimental results clearly show that distillation framework is effective and allows to achieve significant reduction in the word error rate.",
"title": ""
},
{
"docid": "4c5d12c3b1254c83819eac53dd57ce40",
"text": "traditional topic detection method can not be applied to the microblog topic detection directly, because the microblog text is a kind of the short, fractional and grass-roots text. In order to detect the hot topic in the microblog text effectively, we propose a microblog topic detection method based on the combination of the latent semantic analysis and the structural property. According to the dialogic property of the microblog, our proposed method firstly creates semantic space based on the replies to the thread, with the aim to solve the data sparseness problem; secondly, create the microblog model based on the latent semantic analysis; finally, propose a semantic computation method combined with the time information. We then adopt the agglomerative hierarchical clustering method as the microblog topic detection method. Experimental results show that our proposed methods improve the performances of the microblog topic detection greatly.",
"title": ""
},
{
"docid": "a31358ffda425f8e3f7fd15646d04417",
"text": "We elaborate the design and simulation of a planar antenna that is suitable for CubeSat picosatellites. The antenna operates at 436 MHz and its main features are miniature size and the built-in capability to produce circular polarization. The miniaturization procedure is given in detail, and the electrical performance of this small antenna is documented. Two main miniaturization techniques have been applied, i.e. dielectric loading and distortion of the current path. We have added an extra degree of freedom to the latter. The radiator is integrated with the chassis of the picosatellite and, at the same time, operates at the lower end of the UHF spectrum. In terms of electrical size, the structure presented herein is one of the smallest antennas that have been proposed for small satellites. Despite its small electrical size, the antenna maintains acceptable efficiency and gain performance in the band of interest.",
"title": ""
},
{
"docid": "1c66d84dfc8656a23e2a4df60c88ab51",
"text": "Our method aims at reasoning over natural language questions and visual images. Given a natural language question about an image, our model updates the question representation iteratively by selecting image regions relevant to the query and learns to give the correct answer. Our model contains several reasoning layers, exploiting complex visual relations in the visual question answering (VQA) task. The proposed network is end-to-end trainable through back-propagation, where its weights are initialized using pre-trained convolutional neural network (CNN) and gated recurrent unit (GRU). Our method is evaluated on challenging datasets of COCO-QA [19] and VQA [2] and yields state-of-the-art performance.",
"title": ""
},
{
"docid": "ea05a43abee762d4b484b5027e02a03a",
"text": "One essential task in information extraction from the medical corpus is drug name recognition. Compared with text sources come from other domains, the medical text mining poses more challenges, for example, more unstructured text, the fast growing of new terms addition, a wide range of name variation for the same drug, the lack of labeled dataset sources and external knowledge, and the multiple token representations for a single drug name. Although many approaches have been proposed to overwhelm the task, some problems remained with poor F-score performance (less than 0.75). This paper presents a new treatment in data representation techniques to overcome some of those challenges. We propose three data representation techniques based on the characteristics of word distribution and word similarities as a result of word embedding training. The first technique is evaluated with the standard NN model, that is, MLP. The second technique involves two deep network classifiers, that is, DBN and SAE. The third technique represents the sentence as a sequence that is evaluated with a recurrent NN model, that is, LSTM. In extracting the drug name entities, the third technique gives the best F-score performance compared to the state of the art, with its average F-score being 0.8645.",
"title": ""
},
{
"docid": "551e890f5b62ed3fbcaef10101787120",
"text": "Plagiarism detection is a sensitive field of research which has gained lot of interest in the past few years. Although plagiarism detection systems are developed to check text in a variety of languages, they perform better when they are dedicated to check a specific language as they take into account the specificity of the language which leads to better quality results. Query optimization and document reduction constitute two major processing modules which play a major role in optimizing the response time and the results quality of these systems and hence determine their efficiency and effectiveness. This paper proposes an analysis of approaches, an architecture, and a system for detecting plagiarism in Arabic documents. This analysis is particularly focused on the methods and techniques used to detect plagiarism. The proposed web-based architecture exhibits the major processing modules of a plagiarism detection system which are articulated into four layers inside a processing component. The architecture has been used to develop a plagiarism detection system for the Arabic language proposing a set of functions to the user for checking a text and analyzing the results through a well-designed graphical user interface. Subject Categories and Descriptors [H.3.1 Content Analysis and Indexing]: Linguistic processing; [I.2 Artificial Intelligencd]; Natural language interfaces: [I.2.7 Natural Language Processing]; Text Analysis; [I.2.3 Clustering]; Similarity Measures General Terms: Text Analysis, Arabic Language Processing, Similarity Detection",
"title": ""
},
{
"docid": "cdc3b46933db0c88f482ded1dcdff9e6",
"text": "Overvoltages in low voltage (LV) feeders with high penetration of photovoltaics (PV) are usually prevented by limiting the feeder's PV capacity to very conservative values, even if the critical periods rarely occur. This paper discusses the use of droop-based active power curtailment techniques for overvoltage prevention in radial LV feeders as a means for increasing the installed PV capacity and energy yield. Two schemes are proposed and tested in a typical 240-V/75-kVA Canadian suburban distribution feeder with 12 houses with roof-top PV systems. In the first scheme, all PV inverters have the same droop coefficients. In the second, the droop coefficients are different so as to share the total active power curtailed among all PV inverters/houses. Simulation results demonstrate the effectiveness of the proposed schemes and that the option of sharing the power curtailment among all customers comes at the cost of an overall higher amount of power curtailed.",
"title": ""
},
{
"docid": "e0ee22a0df1c13511909cb5f7d2b4d82",
"text": "Growing use of the Internet as a major means of communication has led to the formation of cyber-communities, which have become increasingly appealing to terrorist groups due to the unregulated nature of Internet communication. Online communities enable violent extremists to increase recruitment by allowing them to build personal relationships with a worldwide audience capable of accessing uncensored content. This article presents methods for identifying the recruitment activities of violent groups within extremist social media websites. Specifically, these methods apply known techniques within supervised learning and natural language processing to the untested task of automatically identifying forum posts intended to recruit new violent extremist members. We used data from the western jihadist website Ansar AlJihad Network, which was compiled by the University of Arizona’s Dark Web Project. Multiple judges manually annotated a sample of these data, marking 192 randomly sampled posts as recruiting (Yes) or non-recruiting (No). We observed significant agreement between the judges’ labels; Cohen’s κ=(0.5,0.9) at p=0.01. We tested the feasibility of using naive Bayes models, logistic regression, classification trees, boosting, and support vector machines (SVM) to classify the forum posts. Evaluation with receiver operating characteristic (ROC) curves shows that our SVM classifier achieves an 89% area under the curve (AUC), a significant improvement over the 63% AUC performance achieved by our simplest naive Bayes model (Tukey’s test at p=0.05). To our knowledge, this is the first result reported on this task, and our analysis indicates that automatic detection of online terrorist recruitment is a feasible task. We also identify a number of important areas of future work including classifying non-English posts and measuring how recruitment posts and current events change membership numbers over time.",
"title": ""
},
{
"docid": "9b32c1ea81eb8d8eb3675c577cc0e2fc",
"text": "Users' addiction to online social networks is discovered to be highly correlated with their social connections in the networks. Dense social connections can effectively help online social networks retain their active users and improve the social network services. Therefore, it is of great importance to make a good prediction of the social links among users. Meanwhile, to enjoy more social network services, users nowadays are usually involved in multiple online social networks simultaneously. Formally, the social networks which share a number of common users are defined as the \"aligned networks\".With the information transferred from multiple aligned social networks, we can gain a more comprehensive knowledge about the social preferences of users in the pre-specified target network, which will benefit the social link prediction task greatly. However, when transferring the knowledge from other aligned source networks to the target network, there usually exists a shift in information distribution between different networks, namely domain difference. In this paper, we study the social link prediction problem of the target network, which is aligned with multiple social networks concurrently. To accommodate the domain difference issue, we project the features extracted for links from different aligned networks into a shared lower-dimensional feature space. Moreover, users in social networks usually tend to form communities and would only connect to a small number of users. Thus, the target network structure has both the low-rank and sparse properties. We propose a novel optimization framework, SLAMPRED, to combine both these two properties aforementioned of the target network and the information of multiple aligned networks with nice domain adaptations. Since the objective function is a linear combination of convex and concave functions involving nondifferentiable regularizers, we propose a novel optimization method to iteratively solve it. Extensive experiments have been done on real-world aligned social networks, and the experimental results demonstrate the effectiveness of the proposed model.",
"title": ""
},
{
"docid": "91f718a69532c4193d5e06bf1ea19fd3",
"text": "Factorization approaches provide high accuracy in several important prediction problems, for example, recommender systems. However, applying factorization approaches to a new prediction problem is a nontrivial task and requires a lot of expert knowledge. Typically, a new model is developed, a learning algorithm is derived, and the approach has to be implemented.\n Factorization machines (FM) are a generic approach since they can mimic most factorization models just by feature engineering. This way, factorization machines combine the generality of feature engineering with the superiority of factorization models in estimating interactions between categorical variables of large domain. libFM is a software implementation for factorization machines that features stochastic gradient descent (SGD) and alternating least-squares (ALS) optimization, as well as Bayesian inference using Markov Chain Monto Carlo (MCMC). This article summarizes the recent research on factorization machines both in terms of modeling and learning, provides extensions for the ALS and MCMC algorithms, and describes the software tool libFM.",
"title": ""
},
{
"docid": "48966a0436405a6656feea3ce17e87c3",
"text": "Complex regional pain syndrome (CRPS) is a chronic, intensified localized pain condition that can affect children and adolescents as well as adults, but is more common among adolescent girls. Symptoms include limb pain; allodynia; hyperalgesia; swelling and/or changes in skin color of the affected limb; dry, mottled skin; hyperhidrosis and trophic changes of the nails and hair. The exact mechanism of CRPS is unknown, although several different mechanisms have been suggested. The diagnosis is clinical, with the aid of the adult criteria for CRPS. Standard care consists of a multidisciplinary approach with the implementation of intensive physical therapy in conjunction with psychological counseling. Pharmacological treatments may aid in reducing pain in order to allow the patient to participate fully in intensive physiotherapy. The prognosis in pediatric CRPS is favorable.",
"title": ""
},
{
"docid": "b00311730b7b9b4f79cdd7bde5aa84f6",
"text": "While neural networks demonstrate stronger capabilities in pattern recognition nowadays, they are also becoming larger and deeper. As a result, the effort needed to train a network also increases dramatically. In many cases, it is more practical to use a neural network intellectual property (IP) that an IP vendor has already trained. As we do not know about the training process, there can be security threats in the neural IP: the IP vendor (attacker) may embed hidden malicious functionality, i.e neural Trojans, into the neural IP. We show that this is an effective attack and provide three mitigation techniques: input anomaly detection, re-training, and input preprocessing. All the techniques are proven effective. The input anomaly detection approach is able to detect 99.8% of Trojan triggers although with 12.2% false positive. The re-training approach is able to prevent 94.1% of Trojan triggers from triggering the Trojan although it requires that the neural IP be reconfigurable. In the input preprocessing approach, 90.2% of Trojan triggers are rendered ineffective and no assumption about the neural IP is needed.",
"title": ""
},
{
"docid": "9b17dd1fc2c7082fa8daecd850fab91c",
"text": "This paper presents all the stages of development of a solar tracker for a photovoltaic panel. The system was made with a microcontroller which was design as an embedded control. It has a data base of the angles of orientation horizontal axle, therefore it has no sensor inlet signal and it function as an open loop control system. Combined of above mention characteristics in one the tracker system is a new technique of the active type. It is also a rotational robot of 1 degree of freedom.",
"title": ""
}
] | scidocsrr |
2bcbe92be31315c9fbab39a0684eb566 | Exploiting Temporal and Social Factors for B2B Marketing Campaign Recommendations | [
{
"docid": "13b887760a87bc1db53b16eb4fba2a01",
"text": "Customer preferences for products are drifting over time. Product perception and popularity are constantly changing as new selection emerges. Similarly, customer inclinations are evolving, leading them to ever redefine their taste. Thus, modeling temporal dynamics should be a key when designing recommender systems or general customer preference models. However, this raises unique challenges. Within the eco-system intersecting multiple products and customers, many different characteristics are shifting simultaneously, while many of them influence each other and often those shifts are delicate and associated with a few data instances. This distinguishes the problem from concept drift explorations, where mostly a single concept is tracked. Classical time-window or instance-decay approaches cannot work, as they lose too much signal when discarding data instances. A more sensitive approach is required, which can make better distinctions between transient effects and long term patterns. The paradigm we offer is creating a model tracking the time changing behavior throughout the life span of the data. This allows us to exploit the relevant components of all data instances, while discarding only what is modeled as being irrelevant. Accordingly, we revamp two leading collaborative filtering recommendation approaches. Evaluation is made on a large movie rating dataset by Netflix. Results are encouraging and better than those previously reported on this dataset.",
"title": ""
},
{
"docid": "0b6846c4dd89be21af70b144c93f7a7b",
"text": "Most existing collaborative filtering models only consider the use of user feedback (e.g., ratings) and meta data (e.g., content, demographics). However, in most real world recommender systems, context information, such as time and social networks, are also very important factors that could be considered in order to produce more accurate recommendations. In this work, we address several challenges for the context aware movie recommendation tasks in CAMRa 2010: (1) how to combine multiple heterogeneous forms of user feedback? (2) how to cope with dynamic user and item characteristics? (3) how to capture and utilize social connections among users? For the first challenge, we propose a novel ranking based matrix factorization model to aggregate explicit and implicit user feedback. For the second challenge, we extend this model to a sequential matrix factorization model to enable time-aware parametrization. Finally, we introduce a network regularization function to constrain user parameters based on social connections. To the best of our knowledge, this is the first study that investigates the collective modeling of social and temporal dynamics. Experiments on the CAMRa 2010 dataset demonstrated clear improvements over many baselines.",
"title": ""
},
{
"docid": "51dce19889df3ae51b6c12e3f2a47672",
"text": "Existing recommender systems model user interests and the social influences independently. In reality, user interests may change over time, and as the interests change, new friends may be added while old friends grow apart and the new friendships formed may cause further interests change. This complex interaction requires the joint modeling of user interest and social relationships over time. In this paper, we propose a probabilistic generative model, called Receptiveness over Time Model (RTM), to capture this interaction. We design a Gibbs sampling algorithm to learn the receptiveness and interest distributions among users over time. The results of experiments on a real world dataset demonstrate that RTM-based recommendation outperforms the state-of-the-art recommendation methods. Case studies also show that RTM is able to discover the user interest shift and receptiveness change over time",
"title": ""
},
{
"docid": "8ca30cd6fd335024690837c137f0d1af",
"text": "Non-negative matrix factorization (NMF) is a recently deve loped technique for finding parts-based, linear representations of non-negative data. Although it h as successfully been applied in several applications, it does not always result in parts-based repr esentations. In this paper, we show how explicitly incorporating the notion of ‘sparseness’ impro ves the found decompositions. Additionally, we provide complete MATLAB code both for standard NMF a nd for our extension. Our hope is that this will further the application of these methods to olving novel data-analysis problems.",
"title": ""
}
] | [
{
"docid": "91cb5e59cb11f7d5ba3300cf4f00ff5d",
"text": "Blockchain is a technology uniquely suited to support massive number of transactions and smart contracts within the Internet of Things (IoT) ecosystem, thanks to the decentralized accounting mechanism. In a blockchain network, the states of the accounts are stored and updated by the validator nodes, interconnected in a peer-to-peer fashion. IoT devices are characterized by relatively low computing capabilities and low power consumption, as well as sporadic and low-bandwidth wireless connectivity. An IoT device connects to one or more validator nodes to observe or modify the state of the accounts. In order to interact with the most recent state of accounts, a device needs to be synchronized with the blockchain copy stored by the validator nodes. In this work, we describe general architectures and synchronization protocols that enable synchronization of the IoT endpoints to the blockchain, with different communication costs and security levels. We model and analytically characterize the traffic generated by the synchronization protocols, and also investigate the power consumption and synchronization trade-off via numerical simulations. To the best of our knowledge, this is the first study that rigorously models the role of wireless connectivity in blockchain-powered IoT systems.",
"title": ""
},
{
"docid": "ecc7f7c7c81645e7f2feeb6ac8d8f737",
"text": "Worldwide, there are more than 10 million new cancer cases each year, and cancer is the cause of approximately 12% of all deaths. Given this, a large number of epidemiologic studies have been undertaken to identify potential risk factors for cancer, amongst which the association with trace elements has received considerable attention. Trace elements, such as selenium, zinc, arsenic, cadmium, and nickel, are found naturally in the environment, and human exposure derives from a variety of sources, including air, drinking water, and food. Trace elements are of particular interest given that the levels of exposure to them are potentially modifiable. In this review, we focus largely on the association between each of the trace elements noted above and risk of cancers of the lung, breast, colorectum, prostate, urinary bladder, and stomach. Overall, the evidence currently available appears to support an inverse association between selenium exposure and prostate cancer risk, and possibly also a reduction in risk with respect to lung cancer, although additional prospective studies are needed. There is also limited evidence for an inverse association between zinc and breast cancer, and again, prospective studies are needed to confirm this. Most studies have reported no association between selenium and risk of breast, colorectal, and stomach cancer, and between zinc and prostate cancer risk. There is compelling evidence in support of positive associations between arsenic and risk of both lung and bladder cancers, and between cadmium and lung cancer risk.",
"title": ""
},
{
"docid": "d76246dfee7e2f3813e025ac34ffc354",
"text": "Web usage mining is application of data mining techniques to discover usage patterns from web data, in order to better serve the needs of web based applications. The user access log files present very significant information about a web server. This paper is concerned with the in-depth analysis of Web Log Data of NASA website to find information about a web site, top errors, potential visitors of the site etc. which help system administrator and Web designer to improve their system by determining occurred systems errors, corrupted and broken links by using web using mining. The obtained results of the study will be used in the further development of the web site in order to increase its effectiveness.",
"title": ""
},
{
"docid": "fcceec0849ed7f00a77b45f4297f2218",
"text": "Image retargeting is a process to change the resolution of image while preserve interesting regions and avoid obvious visual distortion. In other words, it focuses on image content more than anything else that applies to filter the useful information for data analysis. Existing approaches may encounter difficulties on the various types of images since most of these approaches only consider 2D features, which are sensitive to the complexity of the contents in images. Researchers are now focusing on the RGB-D information, hoping depth information can help to promote the accuracy. However it is not easy to obtain the RGB-D image we need anywhere and how to utilize depth information is still at the exploration stage. In this paper, instead of using RGB-D data captured by 3D camera, we employ an iterative MRF learning model to predict depth information from a single still image. Then we propose our self-learning 3D saliency model based on the RGB-D data and apply it on the seam carving framework. In seam caving, the self-learning 3D saliency is combined with L1-norm of gradient for better seam searching. Experimental results demonstrate the advantages of our method using RGB-D data in the seam carving framework.",
"title": ""
},
{
"docid": "c158e9421ec0d1265bd625b629e64dc5",
"text": "This paper proposes a gateway framework for in-vehicle networks (IVNs) based on the controller area network (CAN), FlexRay, and Ethernet. The proposed gateway framework is designed to be easy to reuse and verify to reduce development costs and time. The gateway framework can be configured, and its verification environment is automatically generated by a program with a dedicated graphical user interface (GUI). The gateway framework provides state-of-the-art functionalities that include parallel reprogramming, diagnostic routing, network management (NM), dynamic routing update, multiple routing configuration, and security. The proposed gateway framework was developed, and its performance was analyzed and evaluated.",
"title": ""
},
{
"docid": "ccd883caf9a4bc10db6ec67d033b22eb",
"text": "In this paper, a quality model for object-oriented software and an automated metric tool, Reconfigurable Automated Metrics for Object-Oriented Software (RAMOOS) are proposed. The quality model is targeted at the maintainability and reusability aspects of software which can be effectively predicted from the source code. RAMOOS assists users in applying customized quality model during the development of software. In the beginning of adopting RAMOOS, a user may need to use his intuition to select or modify a system-recommended metric model to fit his specific software project needs. If the initial metrics do not meet the expectation, the user can retrive the saved intermediate results and perform further modification to the metric model. The verified model can then be applied to future similar projects.",
"title": ""
},
{
"docid": "2282af5c9f4de5e0de2aae14c0a47840",
"text": "The penetration of smart devices such as mobile phones, tabs has significantly changed the way people communicate. This has led to the growth of usage of social media tools such as twitter, facebook chats for communication. This has led to development of new challenges and perspectives in the language technologies research. Automatic processing of such texts requires us to develop new methodologies. Thus there is great need to develop various automatic systems such as information extraction, retrieval and summarization. Entity recognition is a very important sub task of Information extraction and finds its applications in information retrieval, machine translation and other higher Natural Language Processing (NLP) applications such as co-reference resolution. Some of the main issues in handling of such social media texts are i) Spelling errors ii) Abbreviated new language vocabulary such as “gr8” for great iii) use of symbols such as emoticons/emojis iv) use of meta tags and hash tags v) Code mixing. Entity recognition and extraction has gained increased attention in Indian research community. However there is no benchmark data available where all these systems could be compared on same data for respective languages in this new generation user generated text. Towards this we have organized the Code Mix Entity Extraction in social media text track for Indian languages (CMEE-IL) in the Forum for Information Retrieval Evaluation (FIRE). We present the overview of CMEE-IL 2016 track. This paper describes the corpus created for Hindi-English and Tamil-English. Here we also present overview of the approaches used by the participants. CCS Concepts • Computing methodologies ~ Artificial intelligence • Computing methodologies ~ Natural language processing • Information systems ~ Information extraction",
"title": ""
},
{
"docid": "d69571c1614c3a078d36467d91a09bc6",
"text": "In many species of oviparous reptiles, the first steps of gonadal sex differentiation depend on the incubation temperature of the eggs. Feminization of gonads by exogenous oestrogens at a male-producing temperature and masculinization of gonads by antioestrogens and aromatase inhibitors at a female-producing temperature have irrefutably demonstrated the involvement of oestrogens in ovarian differentiation. Nevertheless, several studies performed on the entire gonad/adrenal/mesonephros complex failed to find differences between male- and female-producing temperatures in oestrogen content, aromatase activity and aromatase gene expression during the thermosensitive period for sex determination. Thus, the key role of aromatase and oestrogens in the first steps of ovarian differentiation has been questioned, and extragonadal organs or tissues, such as adrenal, mesonephros, brain or yolk, were considered as possible targets of temperature and sources of the oestrogens acting on gonadal sex differentiation. In disagreement with this view, experiments and assays carried out on the gonads alone, i.e. separated from the adrenal/mesonephros, provide evidence that the gonads themselves respond to temperature shifts by modifying their sexual differentiation and are the site of aromatase activity and oestrogen synthesis during the thermosensitive period. Oestrogens act locally on both the cortical and the medullary part of the gonad to direct ovarian differentiation. We have concluded that there is no objective reason to search for the implication of other organs in the phenomenon of temperature-dependent sex determination in reptiles. From the comparison with data obtained in other vertebrates, we propose two main directions for future research: to examine how transcription of the aromatase gene is regulated and to identify molecular and cellular targets of oestrogens in gonads during sex differentiation, in species with strict genotypic sex determination and species with temperature-dependent sex determination.",
"title": ""
},
{
"docid": "92963d6a511d5e0a767aa34f8932fe86",
"text": "A 77-GHz transmit-array on dual-layer printed circuit board (PCB) is proposed for automotive radar applications. Coplanar patch unit-cells are etched on opposite sides of the PCB and connected by through-via. The unit-cells are arranged in concentric rings to form the transmit-array for 1-bit in-phase transmission. When combined with four-substrate-integrated waveguide (SIW) slot antennas as the primary feeds, the transmit-array is able to generate four beams with a specific coverage of ±15°. The simulated and measured results of the antenna prototype at 76.5 GHz agree well, with gain greater than 18.5 dBi. The coplanar structure significantly simplifies the transmit-array design and eases the fabrication, in particular, at millimeter-wave frequencies.",
"title": ""
},
{
"docid": "4d56f134c2e2a597948bcf9b1cf37385",
"text": "This paper focuses on semantic scene completion, a task for producing a complete 3D voxel representation of volumetric occupancy and semantic labels for a scene from a single-view depth map observation. Previous work has considered scene completion and semantic labeling of depth maps separately. However, we observe that these two problems are tightly intertwined. To leverage the coupled nature of these two tasks, we introduce the semantic scene completion network (SSCNet), an end-to-end 3D convolutional network that takes a single depth image as input and simultaneously outputs occupancy and semantic labels for all voxels in the camera view frustum. Our network uses a dilation-based 3D context module to efficiently expand the receptive field and enable 3D context learning. To train our network, we construct SUNCG - a manually created largescale dataset of synthetic 3D scenes with dense volumetric annotations. Our experiments demonstrate that the joint model outperforms methods addressing each task in isolation and outperforms alternative approaches on the semantic scene completion task. The dataset and code is available at http://sscnet.cs.princeton.edu.",
"title": ""
},
{
"docid": "137b9760d265304560f1cac14edb7f21",
"text": "Gallstones are solid particles formed from bile in the gall bladder. In this paper, we propose a technique to automatically detect Gallstones in ultrasound images, christened as, Automated Gallstone Segmentation (AGS) Technique. Speckle Noise in the ultrasound image is first suppressed using Anisotropic Diffusion Technique. The edges are then enhanced using Unsharp Filtering. NCUT Segmentation Technique is then put to use to segment the image. Afterwards, edges are detected using Sobel Edge Detection. Further, Edge Thickening Process is used to smoothen the edges and probability maps are generated using Floodfill Technique. Then, the image is scribbled using Automatic Scribbling Technique. Finally, we get the segmented gallstone within the gallbladder using the Closed Form Matting Technique.",
"title": ""
},
{
"docid": "64122833d6fa0347f71a9abff385d569",
"text": "We present a brief history and overview of statistical methods in frame-semantic parsing – the automatic analysis of text using the theory of frame semantics. We discuss how the FrameNet lexicon and frameannotated datasets have been used by statistical NLP researchers to build usable, state-of-the-art systems. We also focus on future directions in frame-semantic parsing research, and discuss NLP applications that could benefit from this line of work. 1 Frame-Semantic Parsing Frame-semantic parsing has been considered as the task of automatically finding semantically salient targets in text, disambiguating their semantic frame representing an event and scenario in discourse, and annotating arguments consisting of words or phrases in text with various frame elements (or roles). The FrameNet lexicon (Baker et al., 1998), an ontology inspired by the theory of frame semantics (Fillmore, 1982), serves as a repository of semantic frames and their roles. Figure 1 depicts a sentence with three evoked frames for the targets “million”, “created” and “pushed” with FrameNet frames and roles. Automatic analysis of text using framesemantic structures can be traced back to the pioneering work of Gildea and Jurafsky (2002). Although their experimental setup relied on a primitive version of FrameNet and only made use of “exemplars” or example usages of semantic frames (containing one target per sentence) as opposed to a “corpus” of sentences, it resulted in a flurry of work in the area of automatic semantic role labeling (Màrquez et al., 2008). However, the focus of semantic role labeling (SRL) research has mostly been on PropBank (Palmer et al., 2005) conventions, where verbal targets could evoke a “sense” frame, which is not shared across targets, making the frame disambiguation setup different from the representation in FrameNet. Furthermore, it is fair to say that early research on PropBank focused primarily on argument structure prediction, and the interaction between frame and argument structure analysis has mostly been unaddressed (Màrquez et al., 2008). There are exceptions, where the verb frame has been taken into account during SRL (Meza-Ruiz and Riedel, 2009; Watanabe et al., 2010). Moreoever, the CoNLL 2008 and 2009 shared tasks also include the verb and noun frame identification task in their evaluations, although the overall goal was to predict semantic dependencies based on PropBank, and not full argument spans (Surdeanu et al., 2008; Hajič",
"title": ""
},
{
"docid": "6d26012bd529735410477c9f389bbf73",
"text": "Most current planners assume complete domain models and focus on generating correct plans. Unfortunately, domain modeling is a laborious and error-prone task, thus real world agents have to plan with incomplete domain models. While domain experts cannot guarantee completeness, often they are able to circumscribe the incompleteness of the model by providing annotations as to which parts of the domain model may be incomplete. In this paper, we study planning problems with incomplete domain models where the annotations specify possible preconditions and effects of actions. We show that the problem of assessing the quality of a plan, or its plan robustness, is #P -complete, establishing its equivalence with the weighted model counting problems. We present two approaches to synthesizing robust plans. While the method based on the compilation to conformant probabilistic planning is much intuitive, its performance appears to be limited to only small problem instances. Our second approach based on stochastic heuristic search works well for much larger problems. It aims to use the robustness measure directly for estimating heuristic distance, which is then used to guide the search. Our planning system, PISA, outperforms a state-of-the-art planner handling incomplete domain models in most of the tested domains, both in terms of plan quality and planning time. Finally, we also present an extension of PISA called CPISA that is able to exploit the available of past successful plan traces to both improve the robustness of the synthesized plans and reduce the domain modeling burden.",
"title": ""
},
{
"docid": "223d5658dee7ba628b9746937aed9bb3",
"text": "A low-power receiver with a one-tap data and edge decision-feedback equalizer (DFE) and a clock recovery circuit is presented. The receiver employs analog adders for the tap-weight summation in both the data and the edge path to simultaneously optimize both the voltage and timing margins. A switched-capacitor input stage allows the receiver to be fully compatible with near-GND input levels without extra level conversion circuits. Furthermore, the critical path of the DFE is simplified to relax the timing margin. Fabricated in the 65-nm CMOS technology, a prototype DFE receiver shows that the data-path DFE extends the voltage and timing margins from 40 mVpp and 0.3 unit interval (UI), respectively, to 70 mVpp and 0.6 UI, respectively. Likewise, the edge-path equalizer reduces the uncertain sampling region (the edge region), which results in 17% reduction of the recovered clock jitter. The DFE core, including adders and samplers, consumes 1.1 mW from a 1.2-V supply while operating at 6.4 Gb/s.",
"title": ""
},
{
"docid": "42392af599ce65f38748420353afc534",
"text": "An innovative technology for the mass production ofstretchable printed circuit boards (SCBs) will bepresented in this paper. This technology makes itpossible for the first time to really integrate fine pitch,high performance electronic circuits easily into textilesand so may be the building block for a totally newgeneration of wearable electronic systems. Anoverview of the technology will be given andsubsequently a real system using SCB technology ispresented.",
"title": ""
},
{
"docid": "aaa2c8a7367086cd762f52b6a6c30df6",
"text": "Many mature term-based or pattern-based approaches have been used in the field of information filtering to generate users' information needs from a collection of documents. A fundamental assumption for these approaches is that the documents in the collection are all about one topic. However, in reality users' interests can be diverse and the documents in the collection often involve multiple topics. Topic modelling, such as Latent Dirichlet Allocation (LDA), was proposed to generate statistical models to represent multiple topics in a collection of documents, and this has been widely utilized in the fields of machine learning and information retrieval, etc. But its effectiveness in information filtering has not been so well explored. Patterns are always thought to be more discriminative than single terms for describing documents. However, the enormous amount of discovered patterns hinder them from being effectively and efficiently used in real applications, therefore, selection of the most discriminative and representative patterns from the huge amount of discovered patterns becomes crucial. To deal with the above mentioned limitations and problems, in this paper, a novel information filtering model, Maximum matched Pattern-based Topic Model (MPBTM), is proposed. The main distinctive features of the proposed model include: (1) user information needs are generated in terms of multiple topics; (2) each topic is represented by patterns; (3) patterns are generated from topic models and are organized in terms of their statistical and taxonomic features; and (4) the most discriminative and representative patterns, called Maximum Matched Patterns, are proposed to estimate the document relevance to the user's information needs in order to filter out irrelevant documents. Extensive experiments are conducted to evaluate the effectiveness of the proposed model by using the TREC data collection Reuters Corpus Volume 1. The results show that the proposed model significantly outperforms both state-of-the-art term-based models and pattern-based models.",
"title": ""
},
{
"docid": "fb7961117dae98e770e0fe84c33673b9",
"text": "Named-Entity Recognition (NER) aims at identifying the fragments of a given text that mention a given entity of interest. This manuscript presents our Minimal named-Entity Recognizer (MER), designed with flexibility, autonomy and efficiency in mind. To annotate a given text, MER only requires a lexicon (text file) with the list of terms representing the entities of interest; and a GNU Bash shell grep and awk tools. MER was deployed in a cloud infrastructure using multiple Virtual Machines to work as an annotation server and participate in the Technical Interoperability and Performance of annotation Servers (TIPS) task of BioCreative V.5. Preliminary results show that our solution processed each document (text retrieval and annotation) in less than 3 seconds on average without using any type of cache. MER is publicly available in a GitHub repository (https://github.com/lasigeBioTM/MER) and through a RESTful Web service (http://labs.fc.ul.pt/mer/).",
"title": ""
},
{
"docid": "a513c25bccbeda0c4314213aea49668a",
"text": "Identity recognition faces several challenges especially in extracting an individual's unique features from biometric modalities and pattern classifications. Electrocardiogram (ECG) waveforms, for instance, have unique identity properties for human recognition, and their signals are not periodic. At present, in order to generate a significant ECG feature set, nonfiducial methodologies based on an autocorrelation (AC) in conjunction with linear dimension reduction methods are used. This paper proposes a new non-fiducial framework for ECG biometric verification using kernel methods to reduce both high autocorrelation vectors' dimensionality and recognition system after denoising signals of 52 subjects with Discrete Wavelet Transform (DWT). The effects of different dimensionality reduction techniques for use in feature extraction were investigated to evaluate verification performance rates of a multi-class Support Vector Machine (SVM) with the One-Against-All (OAA) approach. The experimental results demonstrated higher test recognition rates of Gaussian OAA SVMs on random unknown ECG data sets with the use of the Kernel Principal Component Analysis (KPCA) as compared to the use of the Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA). Keyword: ECG biometric recognition; Non-fiducial feature extraction; Kernel methods; Dimensionality reduction; Gaussian OAA SVM",
"title": ""
},
{
"docid": "2e0e53ff34dccd5412faab5b51a3a2f2",
"text": "This study examines print and online daily newspaper journalists’ perceptions of the credibility of Internet news information, as well as the influence of several factors— most notably, professional role conceptions—on those perceptions. Credibility was measured as a multidimensional construct. The results of a survey of U.S. journalists (N = 655) show that Internet news information was viewed as moderately credible overall and that online newspaper journalists rated Internet news information as significantly more credible than did print newspaper journalists. Hierarchical regression analyses reveal that Internet reliance was a strong positive predictor of credibility. Two professional role conceptions also emerged as significant predictors. The populist mobilizer role conception was a significant positive predictor of online news credibility, while the adversarial role conception was a significant negative predictor. Demographic characteristics of print and online daily newspaper journalists did not influence their perceptions of online news credibility.",
"title": ""
},
{
"docid": "a752279721e2bf6142a0ca34a1a708f3",
"text": "Zika virus (ZIKV) is a mosquito-borne flavivirus first isolated in Uganda from a sentinel monkey in 1947. Mosquito and sentinel animal surveillance studies have demonstrated that ZIKV is endemic to Africa and Southeast Asia, yet reported human cases are rare, with <10 cases reported in the literature. In June 2007, an epidemic of fever and rash associated with ZIKV was detected in Yap State, Federated States of Micronesia. We report the genetic and serologic properties of the ZIKV associated with this epidemic.",
"title": ""
}
] | scidocsrr |
dc9b28f89bc3939ec6b55eb4ce11ab84 | Computer-Based Clinical Decision Support System for Prediction of Heart Diseases Using Naïve Bayes Algorithm | [
{
"docid": "30d7f140a5176773611b3c1f8ec4953e",
"text": "The healthcare environment is generally perceived as being ‘information rich’ yet ‘knowledge poor’. There is a wealth of data available within the healthcare systems. However, there is a lack of effective analysis tools to discover hidden relationships and trends in data. Knowledge discovery and data mining have found numerous applications in business and scientific domain. Valuable knowledge can be discovered from application of data mining techniques in healthcare system. In this study, we briefly examine the potential use of classification based data mining techniques such as Rule based, decision tree and Artificial Neural Network to massive volume of healthcare data. In particular we consider a case study using classification techniques on a medical data set of diabetic patients.",
"title": ""
}
] | [
{
"docid": "eea9332a263b7e703a60c781766620e5",
"text": "The use of topic models to analyze domainspecific texts often requires manual validation of the latent topics to ensure that they are meaningful. We introduce a framework to support such a large-scale assessment of topical relevance. We measure the correspondence between a set of latent topics and a set of reference concepts to quantify four types of topical misalignment: junk, fused, missing, and repeated topics. Our analysis compares 10,000 topic model variants to 200 expertprovided domain concepts, and demonstrates how our framework can inform choices of model parameters, inference algorithms, and intrinsic measures of topical quality.",
"title": ""
},
{
"docid": "29b1aa2ead1e961ddf9ae85e4b53ffa5",
"text": "Robot-assisted dressing offers an opportunity to benefit the lives of many people with disabilities, such as some older adults. However, robots currently lack common sense about the physical implications of their actions on people. The physical implications of dressing are complicated by non-rigid garments, which can result in a robot indirectly applying high forces to a person's body. We present a deep recurrent model that, when given a proposed action by the robot, predicts the forces a garment will apply to a person's body. We also show that a robot can provide better dressing assistance by using this model with model predictive control. The predictions made by our model only use haptic and kinematic observations from the robot's end effector, which are readily attainable. Collecting training data from real world physical human-robot interaction can be time consuming, costly, and put people at risk. Instead, we train our predictive model using data collected in an entirely self-supervised fashion from a physics-based simulation. We evaluated our approach with a PR2 robot that attempted to pull a hospital gown onto the arms of 10 human participants. With a 0.2s prediction horizon, our controller succeeded at high rates and lowered applied force while navigating the garment around a persons fist and elbow without getting caught. Shorter prediction horizons resulted in significantly reduced performance with the sleeve catching on the participants' fists and elbows, demonstrating the value of our model's predictions. These behaviors of mitigating catches emerged from our deep predictive model and the controller objective function, which primarily penalizes high forces.",
"title": ""
},
{
"docid": "42c2e599dbbb00784e2a6837ebd17ade",
"text": "Several real-world classification problems are example-dependent cost-sensitive in nature, where the costs due to misclassification vary between examples. However, standard classification methods do not take these costs into account, and assume a constant cost of misclassification errors. State-of-the-art example-dependent cost-sensitive techniques only introduce the cost to the algorithm, either before or after training, therefore, leaving opportunities to investigate the potential impact of algorithms that take into account the real financial example-dependent costs during an algorithm training. In this paper, we propose an example-dependent cost-sensitive decision tree algorithm, by incorporating the different example-dependent costs into a new cost-based impurity measure and a new cost-based pruning criteria. Then, using three different databases, from three real-world applications: credit card fraud detection, credit scoring and direct marketing, we evaluate the proposed method. The results show that the proposed algorithm is the best performing method for all databases. Furthermore, when compared against a standard decision tree, our method builds significantly smaller trees in only a fifth of the time, while having a superior performance measured by cost savings, leading to a method that not only has more business-oriented results, but also a method that creates simpler models that are easier to analyze. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "9f987cd94d103fb3d4496b7d95b6079f",
"text": "In the world of sign language, and gestures, a lot of research work has been done over the past three decades. This has brought about a gradual transition from isolated to continuous, and static to dynamic gesture recognition for operations on a limited vocabulary. In present scenario, human machine interactive systems facilitate communication between the deaf, and hearing people in real world situations. In order to improve the accuracy of recognition, many researchers have deployed methods such as HMM, Artificial Neural Networks, and Kinect platform. Effective algorithms for segmentation, classification, pattern matching and recognition have evolved. The main purpose of this paper is to analyze these methods and to effectively compare them, which will enable the reader to reach an optimal solution. This creates both, challenges and opportunities for sign language recognition related research. KeywordsSign Language Recognition, Hidden Markov Model, Artificial Neural Network, Kinect Platform, Fuzzy Logic.",
"title": ""
},
{
"docid": "14857144b52dbfb661d6ef4cd2c59b64",
"text": "The candidate confirms that the work submitted is his/her own and that appropriate credit has been given where reference has been made to the work of others. i ACKNOWLEDGMENT I am truly indebted and thankful to my scholarship sponsor ―National Information Technology Development Agency (NITDA), Nigeria‖ for giving me the rare privilege to study at the University of Leeds. I am sincerely and heartily grateful to my supervisor Dr. Des McLernon for his valuable support, patience and guidance throughout the course of this dissertation. I am sure it would not have been possible without his help. I would like to express my deep gratitude to Romero-Zurita Nabil for his enthusiastic encouragement, useful critique, recommendation and providing me with great information resources. I also acknowledge my colleague Frempong Kwadwo for his invaluable suggestions and discussion. Finally, I would like to appreciate my parents for their support and encouragement throughout my study at Leeds. Above all, special thanks to God Almighty for the gift of life. ii DEDICATION This thesis is dedicated to family especially; to my parents for inculcating the importance of hardwork and higher education to Omobolanle for being a caring and loving sister. to Abimbola for believing in me.",
"title": ""
},
{
"docid": "e08e0eea0e3f3735b53f9eb76c155f9c",
"text": "The temporal-difference methods TD(λ) and Sarsa(λ) form a core part of modern reinforcement learning. Their appeal comes from their good performance, low computational cost, and their simple interpretation, given by their forward view. Recently, new versions of these methods were introduced, called true online TD(λ) and true online Sarsa(λ), respectively (van Seijen and Sutton, 2014). Algorithmically, these true online methods only make two small changes to the update rules of the regular methods, and the extra computational cost is negligible in most cases. However, they follow the ideas underlying the forward view much more closely. In particular, they maintain an exact equivalence with the forward view at all times, whereas the traditional versions only approximate it for small step-sizes. We hypothesize that these true online methods not only have better theoretical properties, but also dominate the regular methods empirically. In this article, we put this hypothesis to the test by performing an extensive empirical comparison. Specifically, we compare the performance of true online TD(λ)/Sarsa(λ) with regular TD(λ)/Sarsa(λ) on random MRPs, a real-world myoelectric prosthetic arm, and a domain from the Arcade Learning Environment. We use linear function approximation with tabular, binary, and non-binary features. Our results suggest that the true online methods indeed dominate the regular methods. Across all domains/representations the learning speed of the true online methods are often better, but never worse than that of the regular methods. An additional advantage is that no choice between traces has to be made for the true online methods. We show that new true online temporal-difference methods can be derived by making changes to the real-time forward view and then rewriting the update equations.",
"title": ""
},
{
"docid": "a2688a1169babed7e35a52fa875505d4",
"text": "Crowdsourcing label generation has been a crucial component for many real-world machine learning applications. In this paper, we provide finite-sample exponential bounds on the error rate (in probability and in expectation) of hyperplane binary labeling rules for the Dawid-Skene (and Symmetric DawidSkene ) crowdsourcing model. The bounds can be applied to analyze many commonly used prediction methods, including the majority voting, weighted majority voting and maximum a posteriori (MAP) rules. These bound results can be used to control the error rate and design better algorithms. In particular, under the Symmetric Dawid-Skene model we use simulation to demonstrate that the data-driven EM-MAP rule is a good approximation to the oracle MAP rule which approximately optimizes our upper bound on the mean error rate for any hyperplane binary labeling rule. Meanwhile, the average error rate of the EM-MAP rule is bounded well by the upper bound on the mean error rate of the oracle MAP rule in the simulation.",
"title": ""
},
{
"docid": "faf9c570aacd161296de180850153078",
"text": "Two problems occur when bundle adjustment (BA) is applied on long image sequences: the large calculation time and the drift (or error accumulation). In recent work, the calculation time is reduced by local BAs applied in an incremental scheme. The drift may be reduced by fusion of GPS and Structure-from-Motion. An existing fusion method is BA minimizing a weighted sum of image and GPS errors. This paper introduces two constrained BAs for fusion, which enforce an upper bound for the reprojection error. These BAs are alternatives to the existing fusion BA, which does not guarantee a small reprojection error and requires a weight as input. Then the three fusion BAs are integrated in an incremental Structure-from-Motion method based on local BA. Lastly, we will compare the fusion results on a long monocular image sequence and a low cost GPS.",
"title": ""
},
{
"docid": "203f34a946e00211ebc6fce8e2a061ed",
"text": "We propose a new personalized document summarization method that observes a user's personal reading preferences. These preferences are inferred from the user's reading behaviors, including facial expressions, gaze positions, and reading durations that were captured during the user's past reading activities. We compare the performance of our algorithm with that of a few peer algorithms and software packages. The results of our comparative study show that our algorithm can produce more superior personalized document summaries than all the other methods in that the summaries generated by our algorithm can better satisfy a user's personal preferences.",
"title": ""
},
{
"docid": "c6befaca710e45101b9a12dbc8110a0b",
"text": "The realized strategy contents of information systems (IS) strategizing are a result of both deliberate and emergent patterns of action. In this paper, we focus on emergent patterns of action by studying the formation of strategies that build on local technology-mediated practices. This is done through case study research of the emergence of a sustainability strategy at a European automaker. Studying the practices of four organizational sub-communities, we develop a process perspective of sub-communities’ activity-based production of strategy contents. The process model explains the contextual conditions that make subcommunities initiate SI strategy contents production, the activity-based process of strategy contents production, and the IS strategy outcome. The process model, which draws on Jarzabkowski’s strategy-as-practice lens and Mintzberg’s strategy typology, contributes to the growing IS strategizing literature that examines local practices in IS efforts of strategic importance. 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "d247f00420b872fb0153a343d2b44dd3",
"text": "Network embedding in heterogeneous information networks (HINs) is a challenging task, due to complications of different node types and rich relationships between nodes. As a result, conventional network embedding techniques cannot work on such HINs. Recently, metapathbased approaches have been proposed to characterize relationships in HINs, but they are ineffective in capturing rich contexts and semantics between nodes for embedding learning, mainly because (1) metapath is a rather strict single path node-node relationship descriptor, which is unable to accommodate variance in relationships, and (2) only a small portion of paths can match the metapath, resulting in sparse context information for embedding learning. In this paper, we advocate a new metagraph concept to capture richer structural contexts and semantics between distant nodes. A metagraph contains multiple paths between nodes, each describing one type of relationships, so the augmentation of multiple metapaths provides an effective way to capture rich contexts and semantic relations between nodes. This greatly boosts the ability of metapath-based embedding techniques in handling very sparse HINs. We propose a new embedding learning algorithm, namely MetaGraph2Vec, which uses metagraph to guide the generation of random walks and to learn latent embeddings of multi-typed HIN nodes. Experimental results show that MetaGraph2Vec is able to outperform the state-of-theart baselines in various heterogeneous network mining tasks such as node classification, node clustering, and similarity search.",
"title": ""
},
{
"docid": "8b85dc461c11f44e27caaa8c8816a49b",
"text": "In a Role-Playing Game, finding optimal trajectories is one of the most important tasks. In fact, the strategy decision system becomes a key component of a game engine. Determining the way in which decisions are taken (online, batch or simulated) and the consumed resources in decision making (e.g. execution time, memory) will influence, in mayor degree, the game performance. When classical search algorithms such as A∗ can be used, they are the very first option. Nevertheless, such methods rely on precise and complete models of the search space, and there are many interesting scenarios where their application is not possible. Then, model free methods for sequential decision making under uncertainty are the best choice. In this paper, we propose a heuristic planning strategy to incorporate the ability of heuristic-search in path-finding into a Dyna agent. The proposed Dyna-H algorithm, as A∗ does, selects branches more likely to produce outcomes than other branches. Besides, it has the advantages of being a modelfree online reinforcement learning algorithm. The proposal was evaluated against the one-step Q-Learning and Dyna-Q algorithms obtaining excellent experimental results: Dyna-H significatively overcomes both methods in all experiments. We suggest also, a functional analogy between the proposed sampling from worst trajectories heuristic and the role of dreams (e.g. nightmares) in human behavior.",
"title": ""
},
{
"docid": "7894b8eae0ceacc92ef2103f0ea8e693",
"text": "In this paper, different first and second derivative filters are investigated to find edge map after denoising a corrupted gray scale image. We have proposed a new derivative filter of first order and described a novel approach of edge finding with an aim to find better edge map in a restored gray scale image. Subjective method has been used by visually comparing the performance of the proposed derivative filter with other existing first and second order derivative filters. The root mean square error and root mean square of signal to noise ratio have been used for objective evaluation of the derivative filters. Finally, to validate the efficiency of the filtering schemes different algorithms are proposed and the simulation study has been carried out using MATLAB 5.0.",
"title": ""
},
{
"docid": "c388626855099e1e9f8e5f46d4e271fc",
"text": "The literature assumes that Enterprise Resource Planning (ERP) systems are complex tools. Due to this complexity, ERP produce negative impacts on the users’ acceptation. However, few studies have tried to identify the factors that influence the ERP users’ acceptance. This paper’s aim is to focus on decisive factors influencing the ERP users’ acceptance and use. Specifically, the authors have developed a research model based on the Technology Acceptance Model (TAM) for testing the influence of the Critical Success Factors (CSFs) on ERP implementation. The CSFs used are: (1) top management support, (2) communication, (3) cooperation, (4) training and (5) technological complexity. This research model has offered some evidence about main acceptance factors on ERP which help to set the users’ behavior toward ERP. 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "df4477952bc78f9ddca6a637b0d9b990",
"text": "Food preference learning is an important component of wellness applications and restaurant recommender systems as it provides personalized information for effective food targeting and suggestions. However, existing systems require some form of food journaling to create a historical record of an individual's meal selections. In addition, current interfaces for food or restaurant preference elicitation rely extensively on text-based descriptions and rating methods, which can impose high cognitive load, thereby hampering wide adoption.\n In this paper, we propose PlateClick, a novel system that bootstraps food preference using a simple, visual quiz-based user interface. We leverage a pairwise comparison approach with only visual content. Using over 10,028 recipes collected from Yummly, we design a deep convolutional neural network (CNN) to learn the similarity distance metric between food images. Our model is shown to outperform state-of-the-art CNN by 4 times in terms of mean Average Precision. We explore a novel online learning framework that is suitable for learning users' preferences across a large scale dataset based on a small number of interactions (≤ 15). Our online learning approach balances exploitation-exploration and takes advantage of food similarities using preference-propagation in locally connected graphs.\n We evaluated our system in a field study of 227 anonymous users. The results demonstrate that our method outperforms other baselines by a significant margin, and the learning process can be completed in less than one minute. In summary, PlateClick provides a light-weight, immersive user experience for efficient food preference elicitation.",
"title": ""
},
{
"docid": "c4f851911ed4bc21d666cce45d5595eb",
"text": "! ABSTRACT Purpose The lack of a security evaluation method might expose organizations to several risky situations. This paper aims at presenting a cyclical evaluation model of information security maturity. Design/methodology/approach This model was developed through the definition of a set of steps to be followed in order to obtain periodical evaluation of maturity and continuous improvement of controls. Findings – This model is based on controls present in ISO/IEC 27002, provides a means to measure the current situation of information security management through the use of a maturity model and provides a subsidy to take appropriate and feasible improvement actions, based on risks. A case study is performed and the results indicate that the method is efficient for evaluating the current state of information security, to support information security management, risks identification and business and internal control processes. Research limitations/implications It is possible that modifications to the process may be needed where there is less understanding of security requirements, such as in a less mature organization. Originality/value This paper presents a generic model applicable to all kinds of organizations. The main contribution of this paper is the use of a maturity scale allied to the cyclical process of evaluation, providing the generation of immediate indicators for the management of information security. !",
"title": ""
},
{
"docid": "9a08871e40f477aac7b2e15fcf4ab266",
"text": "Article history: Accepted 10 November 2015 Available online xxxx This paper investigates the role of heterogeneity in the insurance sector. Here, heterogeneity is represented by different types of insurance provided and regions served. Using a balanced panel data set on Brazilian insurance companies as a case study, results corroborate this underlying hypothesis of heterogeneity's impact on performance. The implications of this research for practitioners andacademics are not only addressed in termsofmarket segmentation —which ones are the best performers—but also in terms of mergers and acquisitions—as long as insurance companies may increase their performance with the right balance of types of insurance offered and regions served. © 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "1bdf1bfe81bf6f947df2254ae0d34227",
"text": "We investigate the problem of incorporating higher-level symbolic score-like information into Automatic Music Transcription (AMT) systems to improve their performance. We use recurrent neural networks (RNNs) and their variants as music language models (MLMs) and present a generative architecture for combining these models with predictions from a frame level acoustic classifier. We also compare different neural network architectures for acoustic modeling. The proposed model computes a distribution over possible output sequences given the acoustic input signal and we present an algorithm for performing a global search for good candidate transcriptions. The performance of the proposed model is evaluated on piano music from the MAPS dataset and we observe that the proposed model consistently outperforms existing transcription methods.",
"title": ""
},
{
"docid": "7332ba6aff8c966d76b1c8f451a02ccf",
"text": "A light-emitting diode (LED) driver compatible with fluorescent lamp (FL) ballasts is presented for a lamp-only replacement without rewiring the existing lamp fixture. Ballasts have a common function to regulate the lamp current, despite widely different circuit topologies. In this paper, magnetic and electronic ballasts are modeled as nonideal current sources and a current-sourced boost converter, which is derived from the duality, is adopted for the power conversion from ballasts. A rectifier circuit with capacitor filaments is proposed to interface the converter with the four-wire output of the ballast. A digital controller emulates the high-voltage discharge of the FL and operates adaptively with various ballasts. A prototype 20-W LED driver for retrofitting T8 36-W FL is evaluated with both magnetic and electronic ballasts. In addition to wide compatibility, accurate regulation of the LED current within 0.6% error and high driver efficiency over 89.7% are obtained.",
"title": ""
}
] | scidocsrr |
a56f23de3827e0be9e6269cbd25ac03e | Wideband, Low-Profile Patch Array Antenna With Corporate Stacked Microstrip and Substrate Integrated Waveguide Feeding Structure | [
{
"docid": "50bd58b07a2cf7bf51ff291b17988a2c",
"text": "A wideband linearly polarized antenna element with complementary sources is proposed and exploited for array antennas. The element covers a bandwidth of 38.7% from 50 to 74 GHz with an average gain of 8.7 dBi. The four-way broad wall coupler is applied for the 2 <inline-formula> <tex-math notation=\"LaTeX\">$\\times $ </tex-math></inline-formula> 2 subarray, which suppresses the cross-polarization of a single element. Based on the designed 2 <inline-formula> <tex-math notation=\"LaTeX\">$ \\times $ </tex-math></inline-formula> 2 subarray, two larger arrays have been designed and measured. The <inline-formula> <tex-math notation=\"LaTeX\">$4 \\times 4$ </tex-math></inline-formula> array exhibits 26.7% bandwidth, fully covering the 57–71 GHz unlicensed band. The <inline-formula> <tex-math notation=\"LaTeX\">$8 \\times 8$ </tex-math></inline-formula> array antenna covers a bandwidth of 14.5 GHz (22.9%) from 56.1 to 70.6 GHz with a peak gain of 26.7 dBi, and the radiation efficiency is around 80% within the matching band. It is demonstrated that the proposed antenna element and arrays can be used for future 5G applications to cover the 22% bandwidth of the unlicensed band with high gain and low loss.",
"title": ""
}
] | [
{
"docid": "45079629c4bc09cc8680b3d9ac325112",
"text": "Power consumption is of utmost concern in sensor networks. Researchers have several ways of measuring the power consumption of a complete sensor network, but they are typically either impractical or inaccurate. To meet the need for practical and scalable measurement of power consumption of sensor networks, we have developed a cycle-accurate simulator, called COOJA/MSPsim, that enables live power estimation of systems running on MSP430 processors. This demonstration shows the ease of use and the power measurement accuracy of COOJA/MSPsim. The demo setup consists of a small sensor network and a laptop. Beside gathering software-based power measurements from the motes, the laptop runs COOJA/MSPsim to simulate the same network.We visualize the power consumption of both the simulated and the real sensor network, and show that the simulator produces matching results.",
"title": ""
},
{
"docid": "678df42df19aa5a15ede86b4a19c49c4",
"text": "This paper presents the fundamentals of Origami engineering and its application in nowadays as well as future industry. Several main cores of mathematical approaches such as HuzitaHatori axioms, Maekawa and Kawasaki’s theorems are introduced briefly. Meanwhile flaps and circle packing by Robert Lang is explained to make understood the underlying principles in designing crease pattern. Rigid origami and its corrugation patterns which are potentially applicable for creating transformable or temporary spaces is discussed to show the transition of origami from paper to thick material. Moreover, some innovative applications of origami such as eyeglass, origami stent and high tech origami based on mentioned theories and principles are showcased in section III; while some updated origami technology such as Vacuumatics, self-folding of polymer sheets and programmable matter folding which could greatlyenhance origami structureare demonstrated in Section IV to offer more insight in future origami. Keywords—Origami, origami application, origami engineering, origami technology, rigid origami.",
"title": ""
},
{
"docid": "690544595e0fa2e5f1c40e3187598263",
"text": "In this paper, a methodology is presented and employed for simulating the Internet of Things (IoT). The requirement for scalability, due to the possibly huge amount of involved sensors and devices, and the heterogeneous scenarios that might occur, impose resorting to sophisticated modeling and simulation techniques. In particular, multi-level simulation is regarded as a main framework that allows simulating large-scale IoT environments while keeping high levels of detail, when it is needed. We consider a use case based on the deployment of smart services in decentralized territories. A two level simulator is employed, which is based on a coarse agent-based, adaptive parallel and distributed simulation approach to model the general life of simulated entities. However, when needed a finer grained simulator (based on OMNeT++) is triggered on a restricted portion of the simulated area, which allows considering all issues concerned with wireless communications. Based on this use case, it is confirmed that the ad-hoc wireless networking technologies do represent a principle tool to deploy smart services over decentralized countrysides. Moreover, the performance evaluation confirms the viability of utilizing multi-level simulation for simulating large scale IoT environments.",
"title": ""
},
{
"docid": "162823edcbd50579a1d386f88931d59d",
"text": "Elevated liver enzymes are a common scenario encountered by physicians in clinical practice. For many physicians, however, evaluation of such a problem in patients presenting with no symptoms can be challenging. Evidence supporting a standardized approach to evaluation is lacking. Although alterations of liver enzymes could be a normal physiological phenomenon in certain cases, it may also reflect potential liver injury in others, necessitating its further assessment and management. In this article, we provide a guide to primary care clinicians to interpret abnormal elevation of liver enzymes in asymptomatic patients using a step-wise algorithm. Adopting a schematic approach that classifies enzyme alterations on the basis of pattern (hepatocellular, cholestatic and isolated hyperbilirubinemia), we review an approach to abnormal alteration of liver enzymes within each section, the most common causes of enzyme alteration, and suggest initial investigations.",
"title": ""
},
{
"docid": "450aee5811484932e8542eb4f0eefa4d",
"text": "Natural Language Generation systems in interactive settings often face a multitude of choices, given that the communicative effect of each utterance they generate depends crucially on the interplay between its physical circumstances, addressee and interaction history. This is particularly true in interactive and situated settings. In this paper we present a novel approach for situated Natural Language Generation in dialogue that is based on hierarchical reinforcement learning and learns the best utterance for a context by optimisation through trial and error. The model is trained from human–human corpus data and learns particularly to balance the trade-off between efficiency and detail in giving instructions: the user needs to be given sufficient information to execute their task, but without exceeding their cognitive load. We present results from simulation and a task-based human evaluation study comparing two different versions of hierarchical reinforcement learning: One operates using a hierarchy of policies with a large state space and local knowledge, and the other additionally shares knowledge across generation subtasks to enhance performance. Results show that sharing knowledge across subtasks achieves better performance than learning in isolation, leading to smoother and more successful interactions that are better perceived by human users.",
"title": ""
},
{
"docid": "96344ccc2aac1a7e7fbab96c1355fa10",
"text": "A highly sensitive field-effect sensor immune to environmental potential fluctuation is proposed. The sensor circuit consists of two sensors each with a charge sensing field effect transistor (FET) and an extended sensing gate (SG). By enlarging the sensing gate of an extended gate ISFET, a remarkable sensitivity of 130mV/pH is achieved, exceeding the conventional Nernst limit of 59mV/pH. The proposed differential sensing circuit consists of a pair of matching n-channel and p-channel ion sensitive sensors connected in parallel and biased at a matched transconductance bias point. Potential fluctuations in the electrolyte appear as common mode signal to the differential pair and are cancelled by the matched transistors. This novel differential measurement technique eliminates the need for a true reference electrode such as the bulky Ag/AgCl reference electrode and enables the use of the sensor for autonomous and implantable applications.",
"title": ""
},
{
"docid": "8129b5aae31133afbb8a145d4ac131fc",
"text": "Community health workers (CHWs) are promoted as a mechanism to increase community involvement in health promotion efforts, despite little consensus about the role and its effectiveness. This article reviews the databased literature on CHW effectiveness, which indicates preliminary support for CHWs in increasing access to care, particularly in underserved populations. There are a smaller number of studies documenting outcomes in the areas of increased health knowledge, improved health status outcomes, and behavioral changes, with inconclusive results. Although CHWs show some promise as an intervention, the role can be doomed by overly high expectations, lack of a clear focus, and lack of documentation. Further research is required with an emphasis on stronger study design, documentation of CHW activities, and carefully defined target populations.",
"title": ""
},
{
"docid": "31404322fb03246ba2efe451191e29fa",
"text": "OBJECTIVES\nThe aim of this study is to report an unusual form of penile cancer presentation associated with myiasis infestation, treatment options and outcomes.\n\n\nMATERIALS AND METHODS\nWe studied 10 patients with suspected malignant neoplasm of the penis associated with genital myiasis infestation. Diagnostic assessment was conducted through clinical history, physical examination, penile biopsy, larvae identification and computerized tomography scan of the chest, abdomen and pelvis. Clinical and pathological staging was done according to 2002 TNM classification system. Radical inguinal lymphadenectomy was conducted according to the primary penile tumor pathology and clinical lymph nodes status.\n\n\nRESULTS\nPatients age ranged from 41 to 77 years (mean=62.4). All patients presented squamous cell carcinoma of the penis in association with myiasis infestation caused by Psychoda albipennis. Tumor size ranged from 4cm to 12cm (mean=5.3). Circumcision was conducted in 1 (10%) patient, while penile partial penectomy was performed in 5 (50%). Total penectomy was conducted in 2 (20%) patients, while emasculation was the treatment option for 2 (20%). All patients underwent radical inguinal lymphadenectomy. Prophylactic lymphadenectomy was performed on 3 (30%) patients, therapeutic on 5 (50%), and palliative lymphadenectomy on 2 (20%) patients. Time elapsed from primary tumor treatment to radical inguinal lymphadenectomy was 2 to 6 weeks. The mean follow-up was 34.3 months.\n\n\nCONCLUSION\nThe occurrence of myiasis in the genitalia is more common in patients with precarious hygienic practices and low socio-economic level. The treatment option varied according to the primary tumor presentation and clinical lymph node status.",
"title": ""
},
{
"docid": "26bd615c16b99e84b787b573d6028878",
"text": "Extendible hashing is a new access technique, in which the user is guaranteed no more than two page faults to locate the data associated with a given unique identifier, or key. Unlike conventional hashing, extendible hashing has a dynamic structure that grows and shrinks gracefully as the database grows and shrinks. This approach simultaneously solves the problem of making hash tables that are extendible and of making radix search trees that are balanced. We study, by analysis and simulation, the performance of extendible hashing. The results indicate that extendible hashing provides an attractive alternative to other access methods, such as balanced trees.",
"title": ""
},
{
"docid": "c4e8dbd875e35e5bd9bd55ca24cdbfc2",
"text": "In this paper, we introduce a new framework for recognizing textual entailment which depends on extraction of the set of publiclyheld beliefs – known as discourse commitments – that can be ascribed to the author of a text or a hypothesis. Once a set of commitments have been extracted from a t-h pair, the task of recognizing textual entailment is reduced to the identification of the commitments from a t which support the inference of the h. Promising results were achieved: our system correctly identified more than 80% of examples from the RTE-3 Test Set correctly, without the need for additional sources of training data or other web-based resources.",
"title": ""
},
{
"docid": "e4069b8312b8a273743b31b12b1dfbae",
"text": "Automatic keyphrase extraction techniques play an important role for many tasks including indexing, categorizing, summarizing, and searching. In this paper, we develop and evaluate an automatic keyphrase extraction system for scientific documents. Compared with previous work, our system concentrates on two important issues: (1) more precise location for potential keyphrases: a new candidate phrase generation method is proposed based on the core word expansion algorithm, which can reduce the size of the candidate set by about 75% without increasing the computational complexity; (2) overlap elimination for the output list: when a phrase and its sub-phrases coexist as candidates, an inverse document frequency feature is introduced for selecting the proper granularity. Additional new features are added for phrase weighting. Experiments based on real-world datasets were carried out to evaluate the proposed system. The results show the efficiency and effectiveness of the refined candidate set and demonstrate that the new features improve the accuracy of the system. The overall performance of our system compares favorably with other state-of-the-art keyphrase extraction systems.",
"title": ""
},
{
"docid": "122a27336317372a0d84ee353bb94a4b",
"text": "Recently, many advanced machine learning approaches have been proposed for coreference resolution; however, all of the discriminatively-trained models reason over mentions rather than entities. That is, they do not explicitly contain variables indicating the “canonical” values for each attribute of an entity (e.g., name, venue, title, etc.). This canonicalization step is typically implemented as a post-processing routine to coreference resolution prior to adding the extracted entity to a database. In this paper, we propose a discriminatively-trained model that jointly performs coreference resolution and canonicalization, enabling features over hypothesized entities. We validate our approach on two different coreference problems: newswire anaphora resolution and research paper citation matching, demonstrating improvements in both tasks and achieving an error reduction of up to 62% when compared to a method that reasons about mentions only.",
"title": ""
},
{
"docid": "d97b2b028fbfe0658e841954958aac06",
"text": "Videogame control interfaces continue to evolve beyond their traditional roots, with devices encouraging more natural forms of interaction growing in number and pervasiveness. Yet little is known about their true potential for intuitive use. This paper proposes methods to leverage existing intuitive interaction theory for games research, specifically by examining different types of naturally mapped control interfaces for videogames using new measures for previous player experience. Three commercial control devices for a racing game were categorised using an existing typology, according to how the interface maps physical control inputs with the virtual gameplay actions. The devices were then used in a within-groups (n=64) experimental design aimed at measuring differences in intuitive use outcomes. Results from mixed design ANOVA are discussed, along with implications for the field.",
"title": ""
},
{
"docid": "99d9dcef0e4441ed959129a2a705c88e",
"text": "Wikipedia has grown to a huge, multi-lingual source of encyclopedic knowledge. Apart from textual content, a large and everincreasing number of articles feature so-called infoboxes, which provide factual information about the articles’ subjects. As the different language versions evolve independently, they provide different information on the same topics. Correspondences between infobox attributes in different language editions can be leveraged for several use cases, such as automatic detection and resolution of inconsistencies in infobox data across language versions, or the automatic augmentation of infoboxes in one language with data from other language versions. We present an instance-based schema matching technique that exploits information overlap in infoboxes across different language editions. As a prerequisite we present a graph-based approach to identify articles in different languages representing the same real-world entity using (and correcting) the interlanguage links in Wikipedia. To account for the untyped nature of infobox schemas, we present a robust similarity measure that can reliably quantify the similarity of strings with mixed types of data. The qualitative evaluation on the basis of manually labeled attribute correspondences between infoboxes in four of the largest Wikipedia editions demonstrates the effectiveness of the proposed approach. 1. Entity and Attribute Matching across Wikipedia Languages Wikipedia is a well-known public encyclopedia. While most of the information contained in Wikipedia is in textual form, the so-called infoboxes provide semi-structured, factual information. They are displayed as tables in many Wikipedia articles and state basic facts about the subject. There are different templates for infoboxes, each targeting a specific category of articles and providing fields for properties that are relevant for the respective subject type. For example, in the English Wikipedia, there is a class of infoboxes about companies, one to describe the fundamental facts about countries (such as their capital and population), one for musical artists, etc. However, each of the currently 281 language versions1 defines and maintains its own set of infobox classes with their own set of properties, as well as providing sometimes different values for corresponding attributes. Figure 1 shows extracts of the English and German infoboxes for the city of Berlin. The arrows indicate matches between properties. It is already apparent that matching purely based on property names is futile: The terms Population density and Bevölkerungsdichte or Governing parties and Reg. Parteien have no textual similarity. However, their property values are more revealing: <3,857.6/km2> and <3.875 Einw. je km2> or <SPD/Die Linke> and <SPD und Die Linke> have a high textual similarity, respectively. Email addresses: [email protected] (Daniel Rinser), [email protected] (Dustin Lange), [email protected] (Felix Naumann) 1as of March 2011 Our overall goal is to automatically find a mapping between attributes of infobox templates across different language versions. Such a mapping can be valuable for several different use cases: First, it can be used to increase the information quality and quantity in Wikipedia infoboxes, or at least help the Wikipedia communities to do so. Inconsistencies among the data provided by different editions for corresponding attributes could be detected automatically. 
For example, the infobox in the English article about Germany claims that the population is 81,799,600, while the German article specifies a value of 81,768,000 for the same country. Detecting such conflicts can help the Wikipedia communities to increase consistency and information quality across language versions. Further, the detected inconsistencies could be resolved automatically by fusing the data in infoboxes, as proposed by [1]. Finally, the coverage of information in infoboxes could be increased significantly by completing missing attribute values in one Wikipedia edition with data found in other editions. An infobox template does not describe a strict schema, so that we need to collect the infobox template attributes from the template instances. For the purpose of this paper, an infobox template is determined by the set of attributes that are mentioned in any article that reference the template. The task of matching attributes of corresponding infoboxes across language versions is a specific application of schema matching. Automatic schema matching is a highly researched topic and numerous different approaches have been developed for this task as surveyed in [2] and [3]. Among these, schema-level matchers exploit attribute labels, schema constraints, and structural similarities of the schemas. However, in the setting of Wikipedia infoboxes these Preprint submitted to Information Systems October 19, 2012 Figure 1: A mapping between the English and German infoboxes for Berlin techniques are not useful, because infobox definitions only describe a rather loose list of supported properties, as opposed to a strict relational or XML schema. Attribute names in infoboxes are not always sound, often cryptic or abbreviated, and the exact semantics of the attributes are not always clear from their names alone. Moreover, due to our multi-lingual scenario, attributes are labeled in different natural languages. This latter problem might be tackled by employing bilingual dictionaries, if the previously mentioned issues were solved. Due to the flat nature of infoboxes and their lack of constraints or types, other constraint-based matching approaches must fail. On the other hand, there are instance-based matching approaches, which leverage instance data of multiple data sources. Here, the basic assumption is that similarity of the instances of the attributes reflects the similarity of the attributes. To assess this similarity, instance-based approaches usually analyze the attributes of each schema individually, collecting information about value patterns and ranges, amongst others, such as in [4]. A different, duplicate-based approach exploits information overlap across data sources [5]. The idea there is to find two representations of same real-world objects (duplicates) and then suggest mappings between attributes that have the same or similar values. This approach has one important requirement: The data sources need to share a sufficient amount of common instances (or tuples, in a relational setting), i.e., instances describing the same real-world entity. Furthermore, the duplicates either have to be known in advance or have to be discovered despite a lack of knowledge of corresponding attributes. The approach presented in this article is based on such duplicate-based matching. Our approach consists of three steps: Entity matching, template matching, and attribute matching. The process is visualized in Fig. 2. 
(1) Entity matching: First, we find articles in different language versions that describe the same real-world entity. In particular, we make use of the crosslanguage links that are present for most Wikipedia articles and provide links between same entities across different language versions. We present a graph-based approach to resolve conflicts in the linking information. (2) Template matching: We determine a cross-lingual mapping between infobox templates by analyzing template co-occurrences in the language versions. (3) Attribute matching: The infobox attribute values of the corresponding articles are compared to identify matching attributes across the language versions, assuming that the values of corresponding attributes are highly similar for the majority of article pairs. As a first step we analyze the quality of Wikipedia’s interlanguage links in Sec. 2. We show how to use those links to create clusters of semantically equivalent entities with only one entity from each language in Sec. 3. This entity matching approach is evaluated in Sec. 4. In Sec. 5, we show how a crosslingual mapping between infobox templates can be established. The infobox attribute matching approach is described in Sec. 6 and in turn evaluated in Sec. 7. Related work in the areas of ILLs, concept identification, and infobox attribute matching is discussed in Sec. 8. Finally, Sec. 9 draws conclusions and discusses future work. 2. Interlanguage Links Our basic assumption is that there is a considerable amount of information overlap across the different Wikipedia language editions. Our infobox matching approach presented later requires mappings between articles in different language editions",
"title": ""
},
{
"docid": "a1b5821ec18904ad805c57e6b478ef92",
"text": "To extract English name mentions, we apply a linear-chain CRFs model trained from ACE 20032005 corpora (Li et al., 2012a). For Chinese and Spanish, we use Stanford name tagger (Finkel et al., 2005). We also encode several regular expression based rules to extract poster name mentions in discussion forum posts. In this year’s task, person nominal mentions extraction is added. There are two major challenges: (1) Only person nominal mentions referring to specific, individual real-world entities need to be extracted. Therefore, a system should be able to distinguish specific and generic person nominal mentions; (2) within-document coreference resolution should be applied to clustering person nominial and name mentions. We apply heuristic rules to try to solve these two challenges: (1) We consider person nominal mentions that appear after indefinite articles (e.g., a/an) or conditional conjunctions (e.g., if ) as generic. The person nomnial mention extraction F1 score of this approach is around 46% for English training data. (2) For coreference resolution, if the closest mention of a person nominal mention is a name, then we consider they are coreferential. The accuracy of this approach is 67% using perfect mentions in English training data.",
"title": ""
},
{
"docid": "8ea17804db874a0434bd61c55bc83aab",
"text": "Some recent work in the field of Genetic Programming (GP) has been concerned with finding optimum representations for evolvable and efficient computer programs. In this paper, I describe a new GP system in which target programs run on a stack-based virtual machine. The system is shown to have certain advantages in terms of efficiency and simplicity of implementation, and for certain classes of problems, its effectiveness is shown to be comparable or superior to current methods.",
"title": ""
},
{
"docid": "61cd88d56bcae85c12dde4c2920af2ec",
"text": "“Walk east on Flinders St/State Route 30 towards Market St; Turn right onto St Kilda Rd/Swanston St” vs. “Walk east on Flinders St/State Route 30 towards Market St; Turn right onto St Kilda Rd/Swanston St after Flinders Street Station, a yellow building with a green dome.” T1: <Flinders Street Station, front, Federation Square> T2: <Flinders Street Station, color, yellow> T3: <Flinders Street Station, has, green dome> Sent: Flinders Street Station is a yellow building with a green dome roof located in front of Federation Square",
"title": ""
},
{
"docid": "0b3291e5ddfdd51a75340b195b7ffbfe",
"text": "e Knowledge graph (KG) uses the triples to describe the facts in the real world. It has been widely used in intelligent analysis and applications. However, possible noises and conicts are inevitably introduced in the process of constructing. And the KG based tasks or applications assume that the knowledge in the KG is completely correct and inevitably bring about potential deviations. In this paper, we establish a knowledge graph triple trustworthiness measurement model that quantify their semantic correctness and the true degree of the facts expressed. e model is a crisscrossing neural network structure. It synthesizes the internal semantic information in the triples and the global inference information of the KG to achieve the trustworthiness measurement and fusion in the three levels of entity level, relationship level, and KG global level. We analyzed the validity of the model output condence values, and conducted experiments in the real-world dataset FB15K (from Freebase) for the knowledge graph error detection task. e experimental results showed that compared with other models, our model achieved signicant and consistent improvements.",
"title": ""
},
{
"docid": "a11b39c895f7a89b7d2df29126671057",
"text": "A typical NURBS surface model has a large percentage of superfluous control points that significantly interfere with the design process. This paper presents an algorithm for eliminating such superfluous control points, producing a T-spline. The algorithm can remove substantially more control points than competing methods such as B-spline wavelet decomposition. The paper also presents a new T-spline local refinement algorithm and answers two fundamental open questions on T-spline theory.",
"title": ""
},
{
"docid": "4b546f3bc34237d31c862576ecf63f9a",
"text": "Optimizing the internal supply chain for direct or production goods was a major element during the implementation of enterprise resource planning systems (ERP) which has taken place since the late 1980s. However, supply chains to the suppliers of indirect materials were not usually included due to low transaction volumes, low product values and low strategic importance of these goods. With the advent of the Internet, systems for streamlining indirect goods supply chains emerged and were adopted by many companies. In view of the paperprone processes in many companies, the implementation of these electronic procurement systems led to substantial improvement potentials. This research reports the quantitative and qualitative results of a benchmarking study which explores the use of the Internet in procurement (eProcurement). Among the major goals are to obtain more insight on how European and North American companies used and introduced eProcurement solutions as well as how these systems enhanced the procurement function. The analysis presents a heterogeneous picture and shows that all analyzed solutions emphasize different parts of the procurement and coordination process. Based on interviews and case studies the research proposes an initial set of generalized success factors which may improve future implementations and stimulate further success factor research.",
"title": ""
}
] | scidocsrr |
df6567247f9e63497797c4b6703b9f8b | Task Scheduling and Server Provisioning for Energy-Efficient Cloud-Computing Data Centers | [
{
"docid": "95c41c6f901685490c912a2630c04345",
"text": "Network-based cloud computing is rapidly expanding as an alternative to conventional office-based computing. As cloud computing becomes more widespread, the energy consumption of the network and computing resources that underpin the cloud will grow. This is happening at a time when there is increasing attention being paid to the need to manage energy consumption across the entire information and communications technology (ICT) sector. While data center energy use has received much attention recently, there has been less attention paid to the energy consumption of the transmission and switching networks that are key to connecting users to the cloud. In this paper, we present an analysis of energy consumption in cloud computing. The analysis considers both public and private clouds, and includes energy consumption in switching and transmission as well as data processing and data storage. We show that energy consumption in transport and switching can be a significant percentage of total energy consumption in cloud computing. Cloud computing can enable more energy-efficient use of computing power, especially when the computing tasks are of low intensity or infrequent. However, under some circumstances cloud computing can consume more energy than conventional computing where each user performs all computing on their own personal computer (PC).",
"title": ""
}
] | [
{
"docid": "cf14e5e501cc4e5e3e97561c4932ae8f",
"text": "Plug-and-play information technology (IT) infrastructure has been expanding very rapidly in recent years. With the advent of cloud computing, many ecosystem and business paradigms are encountering potential changes and may be able to eliminate their IT infrastructure maintenance processes. Real-time performance and high availability requirements have induced telecom networks to adopt the new concepts of the cloud model: software-defined networking (SDN) and network function virtualization (NFV). NFV introduces and deploys new network functions in an open and standardized IT environment, while SDN aims to transform the way networks function. SDN and NFV are complementary technologies; they do not depend on each other. However, both concepts can be merged and have the potential to mitigate the challenges of legacy networks. In this paper, our aim is to describe the benefits of using SDN in a multitude of environments such as in data centers, data center networks, and Network as Service offerings. We also present the various challenges facing SDN, from scalability to reliability and security concerns, and discuss existing solutions to these challenges. Keywords—Software-Defined Networking, OpenFlow, Datacenters, Network as a Service, Network Function Virtualization.",
"title": ""
},
{
"docid": "3ff82fc754526e7a0255959e4b3f6301",
"text": "We propose a novel statistical analysis method for functional magnetic resonance imaging (fMRI) to overcome the drawbacks of conventional data-driven methods such as the independent component analysis (ICA). Although ICA has been broadly applied to fMRI due to its capacity to separate spatially or temporally independent components, the assumption of independence has been challenged by recent studies showing that ICA does not guarantee independence of simultaneously occurring distinct activity patterns in the brain. Instead, sparsity of the signal has been shown to be more promising. This coincides with biological findings such as sparse coding in V1 simple cells, electrophysiological experiment results in the human medial temporal lobe, etc. The main contribution of this paper is, therefore, a new data driven fMRI analysis that is derived solely based upon the sparsity of the signals. A compressed sensing based data-driven sparse generalized linear model is proposed that enables estimation of spatially adaptive design matrix as well as sparse signal components that represent synchronous, functionally organized and integrated neural hemodynamics. Furthermore, a minimum description length (MDL)-based model order selection rule is shown to be essential in selecting unknown sparsity level for sparse dictionary learning. Using simulation and real fMRI experiments, we show that the proposed method can adapt individual variation better compared to the conventional ICA methods.",
"title": ""
},
{
"docid": "c5ae1d66d31128691e7e7d8e2ccd2ba8",
"text": "The scope of this paper is two-fold: firstly it proposes the application of a 1-2-3 Zones approach to Internet of Things (IoT)-related Digital Forensics (DF) investigations. Secondly, it introduces a Next-Best-Thing Triage (NBT) Model for use in conjunction with the 1-2-3 Zones approach where necessary and vice versa. These two `approaches' are essential for the DF process from an IoT perspective: the atypical nature of IoT sources of evidence (i.e. Objects of Forensic Interest - OOFI), the pervasiveness of the IoT environment and its other unique attributes - and the combination of these attributes - dictate the necessity for a systematic DF approach to incidents. The two approaches proposed are designed to serve as a beacon to incident responders, increasing the efficiency and effectiveness of their IoT-related investigations by maximizing the use of the available time and ensuring relevant evidence identification and acquisition. The approaches can also be applied in conjunction with existing, recognised DF models, methodologies and frameworks.",
"title": ""
},
{
"docid": "3bf0cead54473e6b118ab8835995bc5f",
"text": "A compact printed microstrip-fed monopole ultrawideband antenna with triple notched bands is presented and analyzed in detail. A straight, open-ended quarter-wavelength slot is etched in the radiating patch to create the first notched band in 3.3-3.7 GHz for the WiMAX system. In addition, three semicircular half-wavelength slots are cut in the radiating patch to generate the second and third notched bands in 5.15-5.825 GHz for WLAN and 7.25-7.75 GHz for downlink of X-band satellite communication systems. Surface current distributions and transmission line models are used to analyze the effect of these slots. The antenna is successfully fabricated and measured, showing broad band matched impedance and good omnidirectional radiation pattern. The designed antenna has a compact size of 25 × 29 mm2.",
"title": ""
},
{
"docid": "4d69284c25e1a9a503dd1c12fde23faa",
"text": "Human pose estimation has been actively studied for decades. While traditional approaches rely on 2d data like images or videos, the development of Time-of-Flight cameras and other depth sensors created new opportunities to advance the field. We give an overview of recent approaches that perform human motion analysis which includes depthbased and skeleton-based activity recognition, head pose estimation, facial feature detection, facial performance capture, hand pose estimation and hand gesture recognition. While the focus is on approaches using depth data, we also discuss traditional image based methods to provide a broad overview of recent developments in these areas.",
"title": ""
},
{
"docid": "4357e361fd35bcbc5d6a7c195a87bad1",
"text": "In an age of increasing technology, the possibility that typing on a keyboard will replace handwriting raises questions about the future usefulness of handwriting skills. Here we present evidence that brain activation during letter perception is influenced in different, important ways by previous handwriting of letters versus previous typing or tracing of those same letters. Preliterate, five-year old children printed, typed, or traced letters and shapes, then were shown images of these stimuli while undergoing functional MRI scanning. A previously documented \"reading circuit\" was recruited during letter perception only after handwriting-not after typing or tracing experience. These findings demonstrate that handwriting is important for the early recruitment in letter processing of brain regions known to underlie successful reading. Handwriting therefore may facilitate reading acquisition in young children.",
"title": ""
},
{
"docid": "859c6f75ac740e311da5e68fcd093531",
"text": "PURPOSE\nTo understand the effect of socioeconomic status (SES) on the risk of complications in type 1 diabetes (T1D), we explored the relationship between SES and major diabetes complications in a prospective, observational T1D cohort study.\n\n\nMETHODS\nComplete data were available for 317 T1D persons within 4 years of age 28 (ages 24-32) in the Pittsburgh Epidemiology of Diabetes Complications Study. Age 28 was selected to maximize income, education, and occupation potential and to minimize the effect of advanced diabetes complications on SES.\n\n\nRESULTS\nThe incidences over 1 to 20 years' follow-up of end-stage renal disease and coronary artery disease were two to three times greater for T1D individuals without, compared with those with a college degree (p < .05 for both), whereas the incidence of autonomic neuropathy was significantly greater for low-income and/or nonprofessional participants (p < .05 for both). HbA(1c) was inversely associated only with income level. In sex- and diabetes duration-adjusted Cox models, lower education predicted end-stage renal disease (hazard ratio [HR], 2.9; 95% confidence interval [95% CI], 1.1-7.7) and coronary artery disease (HR, 2.5, 95% CI, 1.3-4.9), whereas lower income predicted autonomic neuropathy (HR, 1.7; 95% CI, 1.0-2.9) and lower-extremity arterial disease (HR, 3.7; 95% CI, 1.1-11.9).\n\n\nCONCLUSIONS\nThese associations, partially mediated by clinical risk factors, suggest that lower SES T1D individuals may have poorer self-management and, thus, greater complications from diabetes.",
"title": ""
},
{
"docid": "62e445cabbb5c79375f35d7b93f9a30d",
"text": "The recent outbreak of indie games has popularized volumetric terrains to a new level, although video games have used them for decades. These terrains contain geological data, such as materials or cave systems. To improve the exploration experience and due to the large amount of data needed to construct volumetric terrains, industry uses procedural methods to generate them. However, they use their own methods, which are focused on their specific problem domains, lacking customization features. Besides, the evaluation of the procedural terrain generators remains an open issue in this field since no standard metrics have been established yet. In this paper, we propose a new approach to procedural volumetric terrains. It generates completely customizable volumetric terrains with layered materials and other features (e.g., mineral veins, underground caves, material mixtures and underground material flow). The method allows the designer to specify the characteristics of the terrain using intuitive parameters. Additionally, it uses a specific representation for the terrain based on stacked material structures, reducing memory requirements. To overcome the problem in the evaluation of the generators, we propose a new set of metrics for the generated content.",
"title": ""
},
{
"docid": "4f23f9ddf35f6e2f7f5ecfcdf28edcea",
"text": "OBJECTIVE\nWe quantified the range of motion (ROM) required for eight upper-extremity activities of daily living (ADLs) in healthy participants.\n\n\nMETHOD\nFifteen right-handed participants completed several bimanual and unilateral basic ADLs while joint kinematics were monitored using a motion capture system. Peak motions of the pelvis, trunk, shoulder, elbow, and wrist were quantified for each task.\n\n\nRESULTS\nTo complete all activities tested, participants needed a minimum ROM of -65°/0°/105° for humeral plane angle (horizontal abduction-adduction), 0°-108° for humeral elevation, -55°/0°/79° for humeral rotation, 0°-121° for elbow flexion, -53°/0°/13° for forearm rotation, -40°/0°/38° for wrist flexion-extension, and -28°/0°/38° for wrist ulnar-radial deviation. Peak trunk ROM was 23° lean, 32° axial rotation, and 59° flexion-extension.\n\n\nCONCLUSION\nFull upper-limb kinematics were calculated for several ADLs. This methodology can be used in future studies as a basis for developing normative databases of upper-extremity motions and evaluating pathology in populations.",
"title": ""
},
{
"docid": "a3ef868300a3c036c2f8802aa6a3793d",
"text": "This paper presents a manifesto directed at developers and designers of internet-of-things creation platforms. Currently, most existing creation platforms are tailored to specific types of end-users, mostly people with a substantial background in or affinity with technology. The thirteen items presented in the manifesto however, resulted from several user studies including non-technical users, and highlight aspects that should be taken into account in order to open up internet-of-things creation to a wider audience. To reach out and involve more people in internet-of-things creation, a relation is made to the social phenomenon of do-it-yourself, which provides valuable insights into how society can be encouraged to get involved in creation activities. Most importantly, the manifesto aims at providing a framework for do-it-yourself systems enabling non-technical users to create internet-of-things applications.",
"title": ""
},
{
"docid": "5d5c3c8cc8344a8c5d18313bec9adb04",
"text": "Research in reinforcement learning (RL) has thus far concentrated on two optimality criteria: the discounted framework, which has been very well-studied, and the average-reward framework, in which interest is rapidly increasing. In this paper, we present a framework called sensitive discount optimality which ooers an elegant way of linking these two paradigms. Although sensitive discount optimality has been well studied in dynamic programming, with several provably convergent algorithms, it has not received any attention in RL. This framework is based on studying the properties of the expected cumulative discounted reward, as discounting tends to 1. Under these conditions, the cumulative discounted reward can be expanded using a Laurent series expansion to yields a sequence of terms, the rst of which is the average reward, the second involves the average adjusted sum of rewards (or bias), etc. We use the sensitive discount optimality framework to derive a new model-free average reward technique, which is related to Q-learning type methods proposed by Bertsekas, Schwartz, and Singh, but which unlike these previous methods, optimizes both the rst and second terms in the Laurent series (average reward and bias values). Statement: This paper has not been submitted to any other conference.",
"title": ""
},
{
"docid": "03dc2c32044a41715991d900bb7ec783",
"text": "The analysis of large scale data logged from complex cyber-physical systems, such as microgrids, often entails the discovery of invariants capturing functional as well as operational relationships underlying such large systems. We describe a latent factor approach to infer invariants underlying system variables and how we can leverage these relationships to monitor a cyber-physical system. In particular we illustrate how this approach helps rapidly identify outliers during system operation.",
"title": ""
},
{
"docid": "af3af0a4102ea0fb555cad52e4cafa50",
"text": "The identification of the exact positions of the first and second heart sounds within a phonocardiogram (PCG), or heart sound segmentation, is an essential step in the automatic analysis of heart sound recordings, allowing for the classification of pathological events. While threshold-based segmentation methods have shown modest success, probabilistic models, such as hidden Markov models, have recently been shown to surpass the capabilities of previous methods. Segmentation performance is further improved when apriori information about the expected duration of the states is incorporated into the model, such as in a hidden semiMarkov model (HSMM). This paper addresses the problem of the accurate segmentation of the first and second heart sound within noisy real-world PCG recordings using an HSMM, extended with the use of logistic regression for emission probability estimation. In addition, we implement a modified Viterbi algorithm for decoding the most likely sequence of states, and evaluated this method on a large dataset of 10 172 s of PCG recorded from 112 patients (including 12 181 first and 11 627 second heart sounds). The proposed method achieved an average F1 score of 95.63 ± 0.85%, while the current state of the art achieved 86.28 ± 1.55% when evaluated on unseen test recordings. The greater discrimination between states afforded using logistic regression as opposed to the previous Gaussian distribution-based emission probability estimation as well as the use of an extended Viterbi algorithm allows this method to significantly outperform the current state-of-the-art method based on a two-sided paired t-test.",
"title": ""
},
{
"docid": "bb240f2e536e5e5cd80fcca8c9d98171",
"text": "We propose a novel metaphor interpretation method, Meta4meaning. It provides interpretations for nominal metaphors by generating a list of properties that the metaphor expresses. Meta4meaning uses word associations extracted from a corpus to retrieve an approximation to properties of concepts. Interpretations are then obtained as an aggregation or difference of the saliences of the properties to the tenor and the vehicle. We evaluate Meta4meaning using a set of humanannotated interpretations of 84 metaphors and compare with two existing methods for metaphor interpretation. Meta4meaning significantly outperforms the previous methods on this task.",
"title": ""
},
{
"docid": "7a82c189c756e9199ae0d394ed9ade7f",
"text": "Since the late 1970s, globalization has become a phenomenon that has elicited polarizing responses from scholars, politicians, activists, and the business community. Several scholars and activists, such as labor unions, see globalization as an anti-democratic movement that would weaken the nation-state in favor of the great powers. There is no doubt that globalization, no matter how it is defined, is here to stay, and is causing major changes on the globe. Given the rapid proliferation of advances in technology, communication, means of production, and transportation, globalization is a challenge to health and well-being worldwide. On an international level, the average human lifespan is increasing primarily due to advances in medicine and technology. The trends are a reflection of increasing health care demands along with the technological advances needed to prevent, diagnose, and treat disease (IOM, 1997). Along with this increase in longevity comes the concern of finding commonalities in the treatment of health disparities for all people. In a seminal work by Friedman (2005), it is posited that the connecting of knowledge into a global network will result in eradication of most of the healthcare translational barriers we face today. Since healthcare is a knowledge-driven profession, it is reasonable to presume that global healthcare will become more than just a buzzword. This chapter looks at all aspects or components of globalization but focuses specifically on how the movement impacts the health of the people and the nations of the world. The authors propose to use the concept of health as a measuring stick of the claims made on behalf of globalization.",
"title": ""
},
{
"docid": "e8e2cd6e4aacbf1427a50e009bfa35cf",
"text": "We present a model that, after learning on observations of (sequence, outcome) pairs, can be efficiently used to revise a new sequence in order to improve its associated outcome. Our framework requires neither example improvements, nor additional evaluation of outcomes for proposed revisions. To avoid combinatorial-search over sequence elements, we specify a generative model with continuous latent factors, which is learned via joint approximate inference using a recurrent variational autoencoder (VAE) and an outcome-predicting neural network module. Under this model, gradient methods can be used to efficiently optimize the continuous latent factors with respect to inferred outcomes. By appropriately constraining this optimization and using the VAE decoder to generate a revised sequence, we ensure the revision is fundamentally similar to the original sequence, is associated with better outcomes, and looks natural. These desiderata are proven to hold with high probability under our approach, which is empirically demonstrated for revising natural language sentences. Introduction The success of recurrent neural network (RNN) models in complex tasks like machine translation and audio synthesis has inspired immense interest in learning from sequence data (Eck & Schmidhuber, 2002; Graves, 2013; Sutskever et al., 2014; Karpathy, 2015). Comprised of elements s t P S , which are typically symbols from a discrete vocabulary, a sequence x “ ps1, . . . , sT q P X has length T which can vary between different instances. Sentences are a popular example of such data, where each s j is a word from the language. In many domains, only a tiny fraction of X (the set of possible sequences over a given vocabulary) represents sequences likely to be found in nature (ie. MIT Computer Science & Artificial Intelligence Laboratory. Correspondence to: J. Mueller <[email protected]>. Proceedings of the 34 th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017. Copyright 2017 by the author(s). those which appear realistic). For example: a random sequence of words will almost never form a coherent sentence that reads naturally, and a random amino-acid sequence is highly unlikely to specify a biologically active protein. In this work, we consider applications where each sequence x is associated with a corresponding outcome y P R. For example: a news article title or Twitter post can be associated with the number of shares it subsequently received online, or the amino-acid sequence of a synthetic protein can be associated with its clinical efficacy. We operate under the standard supervised learning setting, assuming availability of a dataset D",
"title": ""
},
{
"docid": "e7ad934ea591d5b4a6899b5eb2fa1cb3",
"text": "Increases in the size of the pupil of the eye have been found to accompany the viewing of emotionally toned or interesting visual stimuli. A technique for recording such changes has been developed, and preliminary results with cats and human beings are reported with attention being given to differences between the sexes in response to particular types of material.",
"title": ""
},
{
"docid": "a64f1bb761ac8ee302a278df03eecaa8",
"text": "We analyze StirTrace towards benchmarking face morphing forgeries and extending it by additional scaling functions for the face biometrics scenario. We benchmark a Benford's law based multi-compression-anomaly detection approach and acceptance rates of morphs for a face matcher to determine the impact of the processing on the quality of the forgeries. We use 2 different approaches for automatically creating 3940 images of morphed faces. Based on this data set, 86614 images are created using StirTrace. A manual selection of 183 high quality morphs is used to derive tendencies based on the subjective forgery quality. Our results show that the anomaly detection seems to be able to detect anomalies in the morphing regions, the multi-compression-anomaly detection performance after the processing can be differentiated into good (e.g. cropping), partially critical (e.g. rotation) and critical results (e.g. additive noise). The influence of the processing on the biometric matcher is marginal.",
"title": ""
},
{
"docid": "9e0a28a8205120128938b52ba8321561",
"text": "Modeling data with linear combinations of a few elements from a learned dictionary has been the focus of much recent research in machine learning, neuroscience, and signal processing. For signals such as natural images that admit such sparse representations, it is now well established that these models are well suited to restoration tasks. In this context, learning the dictionary amounts to solving a large-scale matrix factorization problem, which can be done efficiently with classical optimization tools. The same approach has also been used for learning features from data for other purposes, e.g., image classification, but tuning the dictionary in a supervised way for these tasks has proven to be more difficult. In this paper, we present a general formulation for supervised dictionary learning adapted to a wide variety of tasks, and present an efficient algorithm for solving the corresponding optimization problem. Experiments on handwritten digit classification, digital art identification, nonlinear inverse image problems, and compressed sensing demonstrate that our approach is effective in large-scale settings, and is well suited to supervised and semi-supervised classification, as well as regression tasks for data that admit sparse representations.",
"title": ""
},
{
"docid": "4b7714c60749a2f945f21ca3d6d367fe",
"text": "Abstractive summarization aims to generate a shorter version of the document covering all the salient points in a compact and coherent fashion. On the other hand, query-based summarization highlights those points that are relevant in the context of a given query. The encodeattend-decode paradigm has achieved notable success in machine translation, extractive summarization, dialog systems, etc. But it suffers from the drawback of generation of repeated phrases. In this work we propose a model for the query-based summarization task based on the encode-attend-decode paradigm with two key additions (i) a query attention model (in addition to document attention model) which learns to focus on different portions of the query at different time steps (instead of using a static representation for the query) and (ii) a new diversity based attention model which aims to alleviate the problem of repeating phrases in the summary. In order to enable the testing of this model we introduce a new query-based summarization dataset building on debatepedia. Our experiments show that with these two additions the proposed model clearly outperforms vanilla encode-attend-decode models with a gain of 28% (absolute) in ROUGE-L scores.ive summarization aims to generate a shorter version of the document covering all the salient points in a compact and coherent fashion. On the other hand, query-based summarization highlights those points that are relevant in the context of a given query. The encodeattend-decode paradigm has achieved notable success in machine translation, extractive summarization, dialog systems, etc. But it suffers from the drawback of generation of repeated phrases. In this work we propose a model for the query-based summarization task based on the encode-attend-decode paradigm with two key additions (i) a query attention model (in addition to document attention model) which learns to focus on different portions of the query at different time steps (instead of using a static representation for the query) and (ii) a new diversity based attention model which aims to alleviate the problem of repeating phrases in the summary. In order to enable the testing of this model we introduce a new query-based summarization dataset building on debatepedia. Our experiments show that with these two additions the proposed model clearly outperforms vanilla encode-attend-decode models with a gain of 28% (absolute) in ROUGE-L scores.",
"title": ""
}
] | scidocsrr |
dfbe3ab81b76c649f8e79edf81f8c8df | Some Faces are More Equal than Others: Hierarchical Organization for Accurate and Efficient Large-Scale Identity-Based Face Retrieval | [
{
"docid": "535c8a15005505fce4b1dfc09d060981",
"text": "The need for appropriate ways to measure the distance or similarity between data is ubiquitous in machine learning, pattern recognition and data mining, but handcrafting such good metrics for specific problems is generally difficult. This has led to the emergence of metric learning, which aims at automatically learning a metric from data and has attracted a lot of interest in machine learning and related fields for the past ten years. This survey paper proposes a systematic review of the metric learning literature, highlighting the pros and cons of each approach. We pay particular attention to Mahalanobis distance metric learning, a well-studied and successful framework, but additionally present a wide range of methods that have recently emerged as powerful alternatives, including nonlinear metric learning, similarity learning and local metric learning. Recent trends and extensions, such as semi-supervised metric learning, metric learning for histogram data and the derivation of generalization guarantees, are also covered. Finally, this survey addresses metric learning for structured data, in particular edit distance learning, and attempts to give an overview of the remaining challenges in metric learning for the years to come.",
"title": ""
}
] | [
{
"docid": "ac910612672c2c46fb2abd039d65e1df",
"text": "In the last few years, there has been a wave of articles related to behavioral addictions; some of them have a focus on online pornography addiction. However, despite all efforts, we are still unable to profile when engaging in this behavior becomes pathological. Common problems include: sample bias, the search for diagnostic instrumentals, opposing approximations to the matter, and the fact that this entity may be encompassed inside a greater pathology (i.e., sex addiction) that may present itself with very diverse symptomatology. Behavioral addictions form a largely unexplored field of study, and usually exhibit a problematic consumption model: loss of control, impairment, and risky use. Hypersexual disorder fits this model and may be composed of several sexual behaviors, like problematic use of online pornography (POPU). Online pornography use is on the rise, with a potential for addiction considering the \"triple A\" influence (accessibility, affordability, anonymity). This problematic use might have adverse effects in sexual development and sexual functioning, especially among the young population. We aim to gather existing knowledge on problematic online pornography use as a pathological entity. Here we try to summarize what we know about this entity and outline some areas worthy of further research.",
"title": ""
},
{
"docid": "54f3c26ab9d9d6afdc9e1bf9e96f02f9",
"text": "Game designers use human playtesting to gather feedback about game design elements when iteratively improving a game. Playtesting, however, is expensive: human testers must be recruited, playtest results must be aggregated and interpreted, and changes to game designs must be extrapolated from these results. Can automated methods reduce this expense? We show how active learning techniques can formalize and automate a subset of playtesting goals. Specifically, we focus on the low-level parameter tuning required to balance a game once the mechanics have been chosen. Through a case study on a shoot-‘em-up game we demonstrate the efficacy of active learning to reduce the amount of playtesting needed to choose the optimal set of game parameters for two classes of (formal) design objectives. This work opens the potential for additional methods to reduce the human burden of performing playtesting for a variety of relevant design concerns.",
"title": ""
},
{
"docid": "b868a1bf3a3a45fbba8ea27527ca47fd",
"text": "Social media and microblog tools are increasingly used by individuals to express their feelings and opinions in the form of short text messages. Detecting emotions in text has a wide range of applications including identifying anxiety or depression of individuals and measuring well-being or public mood of a community. In this paper, we propose a new approach for automatically classifying text messages of individuals to infer their emotional states. To model emotional states, we utilize the well-established Circumplex model that characterizes affective experience along two dimensions: valence and arousal. We select Twitter messages as input data set, as they provide a very large, diverse and freely available ensemble of emotions. Using hash-tags as labels, our methodology trains supervised classifiers to detect multiple classes of emotion on potentially huge data sets with no manual effort. We investigate the utility of several features for emotion detection, including unigrams, emoticons, negations and punctuations. To tackle the problem of sparse and high dimensional feature vectors of messages, we utilize a lexicon of emotions. We have compared the accuracy of several machine learning algorithms, including SVM, KNN, Decision Tree, and Naive Bayes for classifying Twitter messages. Our technique has an accuracy of over 90%, while demonstrating robustness across learning algorithms.",
"title": ""
},
{
"docid": "518b96236ffa2ce0413a0e01d280937a",
"text": "In this paper, we propose a low-rank representation with symmetric constraint (LRRSC) method for robust subspace clustering. Given a collection of data points approximately drawn from multiple subspaces, the proposed technique can simultaneously recover the dimension and members of each subspace. LRRSC extends the original low-rank representation algorithm by integrating a symmetric constraint into the low-rankness property of high-dimensional data representation. The symmetric low-rank representation, which preserves the subspace structures of high-dimensional data, guarantees weight consistency for each pair of data points so that highly correlated data points of subspaces are represented together. Moreover, it can be efficiently calculated by solving a convex optimization problem. We provide a rigorous proof for minimizing the nuclear-norm regularized least square problem with a symmetric constraint. The affinity matrix for spectral clustering can be obtained by further exploiting the angular information of the principal directions of the symmetric low-rank representation. This is a critical step towards evaluating the memberships between data points. Experimental results on benchmark databases demonstrate the effectiveness and robustness of LRRSC compared with several state-of-the-art subspace clustering algorithms.",
"title": ""
},
{
"docid": "2074ab39d5cec1f9e645ff2ad457f3e3",
"text": "[Context and motivation] The current breakthrough of natural language processing (NLP) techniques can provide the requirements engineering (RE) community with powerful tools that can help addressing specific tasks of natural language (NL) requirements analysis, such as traceability, ambiguity detection and requirements classification, to name a few. [Question/problem] However, modern NLP techniques are mainly statistical, and need large NL requirements datasets, to support appropriate training, test and validation of the techniques. The RE community has experimented with NLP since long time, but datasets were often proprietary, or limited to few software projects for which requirements were publicly available. Hence, replication of the experiments and generalization have always been an issue. [Principal idea/results] Our near future commitment is to provide a publicly available NL requirements dataset. [Contribution] To this end, we are collecting requirements documents from the Web, and we are representing them in a common XML format. In this paper, we present the current version of the dataset, together with our agenda concerning formatting, extension, and annotation of the dataset.",
"title": ""
},
{
"docid": "8fac46b10cc8a439f9aa4eedfd2f413d",
"text": "How does a lack of sleep affect our brains? In contrast to the benefits of sleep, frameworks exploring the impact of sleep loss are relatively lacking. Importantly, the effects of sleep deprivation (SD) do not simply reflect the absence of sleep and the benefits attributed to it; rather, they reflect the consequences of several additional factors, including extended wakefulness. With a focus on neuroimaging studies, we review the consequences of SD on attention and working memory, positive and negative emotion, and hippocampal learning. We explore how this evidence informs our mechanistic understanding of the known changes in cognition and emotion associated with SD, and the insights it provides regarding clinical conditions associated with sleep disruption.",
"title": ""
},
{
"docid": "094dbd57522cb7b9b134b14852bea78b",
"text": "When encountering qualitative research for the first time, one is confronted with both the number of methods and the difficulty of collecting, analysing and presenting large amounts of data. In quantitative research, it is possible to make a clear distinction between gathering and analysing data. However, this distinction is not clear-cut in qualitative research. The objective of this paper is to provide insight for the novice researcher and the experienced researcher coming to grounded theory for the first time. For those who already have experience in the use of the method the paper provides further much needed discussion arising out of デエW マWデエラSげゲ ;Sラヮデキラミ キミ デエW I“ aキWノSく In this paper the authors present a practical application and illustrate how grounded theory method was applied to an interpretive case study research. The paper discusses grounded theory method and provides guidance for the use of the method in interpretive studies.",
"title": ""
},
{
"docid": "89dc7cad01e784f047774ab665fb53d4",
"text": "This paper studies a top-k hierarchical classification problem. In top-k classification, one is allowed to make k predictions and no penalty is incurred if at least one of k predictions is correct. In hierarchical classification, classes form a structured hierarchy, and misclassification costs depend on the relation between the correct class and the incorrect class in the hierarchy. Despite that the fact that both top-k classification and hierarchical classification have gained increasing interests, the two problems have always been studied separately. In this paper, we define a top-k hierarchical loss function using a real world application. We provide the Bayes-optimal solution that minimizes the expected top-k hierarchical misclassification cost. Via numerical experiments, we show that our solution outperforms two baseline methods that address only one of the two issues.",
"title": ""
},
{
"docid": "8401deada9010f05e3c9907a421d6760",
"text": "Heuristics evaluation is one of the common techniques being used for usability evaluation. The potential of HE has been explored in games design and development and later playability heuristics evaluation (PHE) is generated. PHE has been used in evaluating games. Issues in games usability covers forms of game usability, game interface, game mechanics, game narrative and game play. This general heuristics has the potential to be further explored in specific domain of games that is educational games. Combination of general heuristics of games (tailored based on specific domain) and education heuristics seems to be an excellent focus in order to evaluate the usability issues in educational games especially educational games produced in Malaysia.",
"title": ""
},
{
"docid": "3a5ac4dc112c079955104bda98f80b58",
"text": "This review examines vestibular compensation and vestibular rehabilitation from a unified translational research perspective. Laboratory studies illustrate neurobiological principles of vestibular compensation at the molecular, cellular and systems levels in animal models that inform vestibular rehabilitation practice. However, basic research has been hampered by an emphasis on 'naturalistic' recovery, with time after insult and drug interventions as primary dependent variables. The vestibular rehabilitation literature, on the other hand, provides information on how the degree of compensation can be shaped by specific activity regimens. The milestones of the early spontaneous static compensation mark the re-establishment of static gaze stability, which provides a common coordinate frame for the brain to interpret residual vestibular information in the context of visual, somatosensory and visceral signals that convey gravitoinertial information. Stabilization of the head orientation and the eye orientation (suppression of spontaneous nystagmus) appear to be necessary by not sufficient conditions for successful rehabilitation, and define a baseline for initiating retraining. The lessons from vestibular rehabilitation in animal models offer the possibility of shaping the recovery trajectory to identify molecular and genetic factors that can improve vestibular compensation.",
"title": ""
},
{
"docid": "cf52d720512c316dc25f8167d5571162",
"text": "BACKGROUND\nHidradenitis suppurativa (HS) is a chronic relapsing skin disease. Recent studies have shown promising results of anti-tumor necrosis factor-alpha treatment.\n\n\nOBJECTIVE\nTo compare the efficacy and safety of infliximab and adalimumab in the treatment of HS.\n\n\nMETHODS\nA retrospective study was performed to compare 2 cohorts of 10 adult patients suffering from severe, recalcitrant HS. In 2005, 10 patients were treated with infliximab intravenous (i.v.) (3 infusions of 5 mg/kg at weeks 0, 2, and 6). In 2009, 10 other patients were treated in the same hospital with adalimumab subcutaneous (s.c.) 40 mg every other week. Both cohorts were followed up for 1 year using identical evaluation methods [Sartorius score, quality of life index, reduction of erythrocyte sedimentation rate (ESR) and C-reactive protein (CRP), patient and doctor global assessment, and duration of efficacy].\n\n\nRESULTS\nNineteen patients completed the study. In both groups, the severity of the HS diminished. Infliximab performed better in all aspects. The average Sartorius score was reduced to 54% of baseline for the infliximab group and 66% of baseline for the adalimumab group.\n\n\nCONCLUSIONS\nAdalimumab s.c. 40 mg every other week is less effective than infliximab i.v. 5 mg/kg at weeks 0, 2, and 6.",
"title": ""
},
{
"docid": "5af009ec32eeda769e309b0979f5fbd3",
"text": "A modified pole-and-and-knife (MPK) method of harvesting oil palms was designed and fabricated. The method was tested along with two existing methods, namely the bamboo pole-and-knife (BPK) and the single rope-and-cutlass (SRC) methods. Test results showed that the MPK method was superior to the other methods in reducing the time spent in searching for and collecting scattered loose fruits (and hence the harvesting time), increasing the recovery of scattered loose fruits, eliminating the waist problem of the fruit collectors and increasing the ease of transportation and use of the harvesting pole.",
"title": ""
},
{
"docid": "047480185afbea439eee2ee803b9d1f9",
"text": "The ability to perceive and analyze terrain is a key problem in mobile robot navigation. Terrain perception problems arise in planetary robotics, agriculture, mining, and, of course, self-driving cars. Here, we introduce the PTA (probabilistic terrain analysis) algorithm for terrain classification with a fastmoving robot platform. The PTA algorithm uses probabilistic techniques to integrate range measurements over time, and relies on efficient statistical tests for distinguishing drivable from nondrivable terrain. By using probabilistic techniques, PTA is able to accommodate severe errors in sensing, and identify obstacles with nearly 100% accuracy at speeds of up to 35mph. The PTA algorithm was an essential component in the DARPA Grand Challenge, where it enabled our robot Stanley to traverse the entire course in record time.",
"title": ""
},
{
"docid": "406d839d15c18ac9c462c5f5af6b10b7",
"text": "The Multiple Meanings of Open Government Data: Understanding Different Stakeholders and Their Perspectives Felipe Gonzalez-Zapata & Richard Heeks Centre for Development Informatics, University of Manchester, Manchester, M13 9PL, UK Corresponding author: Prof. Richard Heeks, Centre for Development Informatics, IDPM, SEED, University of Manchester, Manchester, M13 9PL, UK, +44-161-275-2870 [email protected]",
"title": ""
},
{
"docid": "e059d7e04c3dba8ed570ad1d72a647b5",
"text": "An electronic throttle is a low-power dc servo drive which positions the throttle plate. Its application in modern automotive engines leads to improvements in vehicle drivability, fuel economy, and emissions. Transmission friction and the return spring limp-home nonlinearity significantly affect the electronic throttle performance. The influence of these effects is analyzed by means of computer simulations, experiments, and analytical calculations. A dynamic friction model is developed in order to adequately capture the experimentally observed characteristics of the presliding-displacement and breakaway effects. The linear part of electronic throttle process model is also analyzed and experimentally identified. A nonlinear control strategy is proposed, consisting of a proportional-integral-derivative (PID) controller and a feedback compensator for friction and limp-home effects. The PID controller parameters are analytically optimized according to the damping optimum criterion. The proposed control strategy is verified by computer simulations and experiments.",
"title": ""
},
{
"docid": "6a196d894d94b194627f6e3c127c83fb",
"text": "The advantages provided to memory by the distribution of multiple practice or study opportunities are among the most powerful effects in memory research. In this paper, we critically review the class of theories that presume contextual or encoding variability as the sole basis for the advantages of distributed practice, and recommend an alternative approach based on the idea that some study events remind learners of other study events. Encoding variability theory encounters serious challenges in two important phenomena that we review here: superadditivity and nonmonotonicity. The bottleneck in such theories lies in the assumption that mnemonic benefits arise from the increasing independence, rather than interdependence, of study opportunities. The reminding model accounts for many basic results in the literature on distributed practice, readily handles data that are problematic for encoding variability theories, including superadditivity and nonmonotonicity, and provides a unified theoretical framework for understanding the effects of repetition and the effects of associative relationships on memory.",
"title": ""
},
{
"docid": "b2de2955568a37301828708e15b5ed15",
"text": "ISPRS and CNES announced the HRS (High Resolution Stereo) Scientific Assessment Program during the ISPRS Commission I Symposium in Denver in November 2002. 9 test areas throughout the world have been selected for this program. One of the test sites is located in Bavaria, Germany, for which the PI comes from DLR. For a second region, which is situated in Catalonia – Barcelona and surroundings – DLR has the role of a Co-Investigator. The goal is to derive a DEM from the along-track stereo data of the SPOT HRS sensor and to assess the accuracy by comparison with ground control points and DEM data of superior quality. For the derivation of the DEM, the stereo processing software, developed at DLR for the MOMS-2P three line stereo camera is used. As a first step, the interior and exterior orientation of the camera, delivered as ancillary data (DORIS and ULS) are extracted. According to CNES these data should lead to an absolute orientation accuracy of about 30 m. No bundle block adjustment with ground control is used in the first step of the photogrammetric evaluation. A dense image matching, using very dense positions as kernel centers provides the parallaxes. The quality of the matching is controlled by forward and backward matching of the two stereo partners using the local least squares matching method. Forward intersection leads to points in object space which are then interpolated to a DEM of the region in a regular grid. Additionally, orthoimages are generated from the images of the two looking directions. The orthoimage and DEM accuracy is determined by using the ground control points and the available DEM data of superior accuracy (DEM derived from laser data and/or classical airborne photogrammetry). DEM filtering methods are applied and a comparison to SRTM-DEMs is performed. It is shown that a fusion of the DEMs derived from optical and radar data leads to higher accuracies. In the second step ground control points are used for bundle adjustment to improve the exterior orientation and the absolute accuracy of the SPOT-DEM.",
"title": ""
},
{
"docid": "4ca5fec568185d3699c711cc86104854",
"text": "Attackers often create systems that automatically rewrite and reorder their malware to avoid detection. Typical machine learning approaches, which learn a classifier based on a handcrafted feature vector, are not sufficiently robust to such reorderings. We propose a different approach, which, similar to natural language modeling, learns the language of malware spoken through the executed instructions and extracts robust, time domain features. Echo state networks (ESNs) and recurrent neural networks (RNNs) are used for the projection stage that extracts the features. These models are trained in an unsupervised fashion. A standard classifier uses these features to detect malicious files. We explore a few variants of ESNs and RNNs for the projection stage, including Max-Pooling and Half-Frame models which we propose. The best performing hybrid model uses an ESN for the recurrent model, Max-Pooling for non-linear sampling, and logistic regression for the final classification. Compared to the standard trigram of events model, it improves the true positive rate by 98.3% at a false positive rate of 0.1%.",
"title": ""
},
{
"docid": "abda48a065aecbe34f86ce3490520402",
"text": "Wireless Sensor Network (WSN) consists of small low-cost, low-power multifunctional nodes interconnected to efficiently aggregate and transmit data to sink. Cluster-based approaches use some nodes as Cluster Heads (CHs) and organize WSNs efficiently for aggregation of data and energy saving. A CH conveys information gathered by cluster nodes and aggregates/compresses data before transmitting it to a sink. However, this additional responsibility of the node results in a higher energy drain leading to uneven network degradation. Low Energy Adaptive Clustering Hierarchy (LEACH) offsets this by probabilistically rotating cluster heads role among nodes with energy above a set threshold. CH selection in WSN is NP-Hard as optimal data aggregation with efficient energy savings cannot be solved in polynomial time. In this work, a modified firefly heuristic, synchronous firefly algorithm, is proposed to improve the network performance. Extensive simulation shows the proposed technique to perform well compared to LEACH and energy-efficient hierarchical clustering. Simulations show the effectiveness of the proposed method in decreasing the packet loss ratio by an average of 9.63% and improving the energy efficiency of the network when compared to LEACH and EEHC.",
"title": ""
}
] | scidocsrr |
9a55767aba9c03100f383feb17188a74 | Isolated Swiss-Forward Three-Phase Rectifier With Resonant Reset | [
{
"docid": "ee6461f83cee5fdf409a130d2cfb1839",
"text": "This paper introduces a novel three-phase buck-type unity power factor rectifier appropriate for high power Electric Vehicle battery charging mains interfaces. The characteristics of the converter, named the Swiss Rectifier, including the principle of operation, modulation strategy, suitable control structure, and dimensioning equations are described in detail. Additionally, the proposed rectifier is compared to a conventional 6-switch buck-type ac-dc power conversion. According to the results, the Swiss Rectifier is the topology of choice for a buck-type PFC. Finally, the feasibility of the Swiss Rectifier concept for buck-type rectifier applications is demonstrated by means of a hardware prototype.",
"title": ""
}
] | [
{
"docid": "fe8f31db9c3e8cbe9d69e146c40abb49",
"text": "BACKGROUND\nRegular physical activity (PA) can be beneficial to pregnant women, however, many women do not adhere to current PA guidelines during the antenatal period. Patient and public involvement is essential when designing antenatal PA interventions in order to uncover the reasons for non-adherence and non-engagement with the behaviour, as well as determining what type of intervention would be acceptable. The aim of this research was to explore women's experiences of PA during a recent pregnancy, understand the barriers and determinants of antenatal PA and explore the acceptability of antenatal walking groups for further development.\n\n\nMETHODS\nSeven focus groups were undertaken with women who had given birth within the past five years. Focus groups were transcribed and analysed using a grounded theory approach. Relevant and related behaviour change techniques (BCTs), which could be applied to future interventions, were identified using the BCT taxonomy.\n\n\nRESULTS\nWomen's opinions and experiences of PA during pregnancy were categorised into biological/physical (including tiredness and morning sickness), psychological (fear of harm to baby and self-confidence) and social/environmental issues (including access to facilities). Although antenatal walking groups did not appear popular, women identified some factors which could encourage attendance (e.g. childcare provision) and some which could discourage attendance (e.g. walking being boring). It was clear that the personality of the walk leader would be extremely important in encouraging women to join a walking group and keep attending. Behaviour change technique categories identified as potential intervention components included social support and comparison of outcomes (e.g. considering pros and cons of behaviour).\n\n\nCONCLUSIONS\nWomen's experiences and views provided a range of considerations for future intervention development, including provision of childcare, involvement of a fun and engaging leader and a range of activities rather than just walking. These experiences and views relate closely to the Health Action Process Model which, along with BCTs, could be used to develop future interventions. The findings of this study emphasise the importance of involving the target population in intervention development and present the theoretical foundation for building an antenatal PA intervention to encourage women to be physically active throughout their pregnancies.",
"title": ""
},
{
"docid": "f6ba46b72139f61cfb098656d71553ed",
"text": "This paper introduces the Voice Conversion Octave Toolbox made available to the public as open source. The first version of the toolbox features tools for VTLN-based voice conversion supporting a variety of warping functions. The authors describe the implemented functionality and how to configure the included tools.",
"title": ""
},
{
"docid": "d92f9a08b608f895f004e69c7893f2f0",
"text": "Although research has determined that reactive oxygen species (ROS) function as signaling molecules in plant development, the molecular mechanism by which ROS regulate plant growth is not well known. An aba overly sensitive mutant, abo8-1, which is defective in a pentatricopeptide repeat (PPR) protein responsible for the splicing of NAD4 intron 3 in mitochondrial complex I, accumulates more ROS in root tips than the wild type, and the ROS accumulation is further enhanced by ABA treatment. The ABO8 mutation reduces root meristem activity, which can be enhanced by ABA treatment and reversibly recovered by addition of certain concentrations of the reducing agent GSH. As indicated by low ProDR5:GUS expression, auxin accumulation/signaling was reduced in abo8-1. We also found that ABA inhibits the expression of PLETHORA1 (PLT1) and PLT2, and that root growth is more sensitive to ABA in the plt1 and plt2 mutants than in the wild type. The expression of PLT1 and PLT2 is significantly reduced in the abo8-1 mutant. Overexpression of PLT2 in an inducible system can largely rescue root apical meristem (RAM)-defective phenotype of abo8-1 with and without ABA treatment. These results suggest that ABA-promoted ROS in the mitochondria of root tips are important retrograde signals that regulate root meristem activity by controlling auxin accumulation/signaling and PLT expression in Arabidopsis.",
"title": ""
},
{
"docid": "bc272e837f1071fabcc7056134bae784",
"text": "Parental vaccine hesitancy is a growing problem affecting the health of children and the larger population. This article describes the evolution of the vaccine hesitancy movement and the individual, vaccine-specific and societal factors contributing to this phenomenon. In addition, potential strategies to mitigate the rising tide of parent vaccine reluctance and refusal are discussed.",
"title": ""
},
{
"docid": "f55c9ef1e60afd326bebbb619452fd97",
"text": "With the flourish of the Web, online review is becoming a more and more useful and important information resource for people. As a result, automatic review mining and summarization has become a hot research topic recently. Different from traditional text summarization, review mining and summarization aims at extracting the features on which the reviewers express their opinions and determining whether the opinions are positive or negative. In this paper, we focus on a specific domain - movie review. A multi-knowledge based approach is proposed, which integrates WordNet, statistical analysis and movie knowledge. The experimental results show the effectiveness of the proposed approach in movie review mining and summarization.",
"title": ""
},
{
"docid": "42b6c55e48f58e3e894de84519cb6feb",
"text": "What social value do Likes on Facebook hold? This research examines peopleâs attitudes and behaviors related to receiving one-click feedback in social media. Likes and other kinds of lightweight affirmation serve as social cues of acceptance and maintain interpersonal relationships, but may mean different things to different people. Through surveys and de-identified, aggregated behavioral Facebook data, we find that in general, people care more about who Likes their posts than how many Likes they receive, desiring feedback most from close friends, romantic partners, and family members other than their parents. While most people do not feel strongly that receiving “enough” Likes is important, roughly two-thirds of posters regularly receive more than “enough.” We also note a “Like paradox,” a phenomenon in which peopleâs friends receive more Likes because their friends have more friends to provide those Likes. Individuals with lower levels of self-esteem and higher levels of self-monitoring are more likely to think that Likes are important and to feel bad if they do not receive “enough” Likes. The results inform product design and our understanding of how lightweight interactions shape our experiences online.",
"title": ""
},
{
"docid": "48fffb441a5e7f304554e6bdef6b659e",
"text": "The massive accumulation of genome-sequences in public databases promoted the proliferation of genome-level phylogenetic analyses in many areas of biological research. However, due to diverse evolutionary and genetic processes, many loci have undesirable properties for phylogenetic reconstruction. These, if undetected, can result in erroneous or biased estimates, particularly when estimating species trees from concatenated datasets. To deal with these problems, we developed GET_PHYLOMARKERS, a pipeline designed to identify high-quality markers to estimate robust genome phylogenies from the orthologous clusters, or the pan-genome matrix (PGM), computed by GET_HOMOLOGUES. In the first context, a set of sequential filters are applied to exclude recombinant alignments and those producing anomalous or poorly resolved trees. Multiple sequence alignments and maximum likelihood (ML) phylogenies are computed in parallel on multi-core computers. A ML species tree is estimated from the concatenated set of top-ranking alignments at the DNA or protein levels, using either FastTree or IQ-TREE (IQT). The latter is used by default due to its superior performance revealed in an extensive benchmark analysis. In addition, parsimony and ML phylogenies can be estimated from the PGM. We demonstrate the practical utility of the software by analyzing 170 Stenotrophomonas genome sequences available in RefSeq and 10 new complete genomes of Mexican environmental S. maltophilia complex (Smc) isolates reported herein. A combination of core-genome and PGM analyses was used to revise the molecular systematics of the genus. An unsupervised learning approach that uses a goodness of clustering statistic identified 20 groups within the Smc at a core-genome average nucleotide identity (cgANIb) of 95.9% that are perfectly consistent with strongly supported clades on the core- and pan-genome trees. In addition, we identified 16 misclassified RefSeq genome sequences, 14 of them labeled as S. maltophilia, demonstrating the broad utility of the software for phylogenomics and geno-taxonomic studies. The code, a detailed manual and tutorials are freely available for Linux/UNIX servers under the GNU GPLv3 license at https://github.com/vinuesa/get_phylomarkers. A docker image bundling GET_PHYLOMARKERS with GET_HOMOLOGUES is available at https://hub.docker.com/r/csicunam/get_homologues/, which can be easily run on any platform.",
"title": ""
},
{
"docid": "67136c5bd9277e0637393e9a131d7b53",
"text": "BACKGROUND\nSynchronous written conversations (or \"chats\") are becoming increasingly popular as Web-based mental health interventions. Therefore, it is of utmost importance to evaluate and summarize the quality of these interventions.\n\n\nOBJECTIVE\nThe aim of this study was to review the current evidence for the feasibility and effectiveness of online one-on-one mental health interventions that use text-based synchronous chat.\n\n\nMETHODS\nA systematic search was conducted of the databases relevant to this area of research (Medical Literature Analysis and Retrieval System Online [MEDLINE], PsycINFO, Central, Scopus, EMBASE, Web of Science, IEEE, and ACM). There were no specific selection criteria relating to the participant group. Studies were included if they reported interventions with individual text-based synchronous conversations (ie, chat or text messaging) and a psychological outcome measure.\n\n\nRESULTS\nA total of 24 articles were included in this review. Interventions included a wide range of mental health targets (eg, anxiety, distress, depression, eating disorders, and addiction) and intervention design. Overall, compared with the waitlist (WL) condition, studies showed significant and sustained improvements in mental health outcomes following synchronous text-based intervention, and post treatment improvement equivalent but not superior to treatment as usual (TAU) (eg, face-to-face and telephone counseling).\n\n\nCONCLUSIONS\nFeasibility studies indicate substantial innovation in this area of mental health intervention with studies utilizing trained volunteers and chatbot technologies to deliver interventions. While studies of efficacy show positive post-intervention gains, further research is needed to determine whether time requirements for this mode of intervention are feasible in clinical practice.",
"title": ""
},
{
"docid": "8f0b7554ff0d9f6bf0d1cf8579dc2893",
"text": "Recent advances in Convolutional Neural Networks (CNNs) have obtained promising results in difficult deep learning tasks. However, the success of a CNN depends on finding an architecture to fit a given problem. A hand-crafted architecture is a challenging, time-consuming process that requires expert knowledge and effort, due to a large number of architectural design choices. In this article, we present an efficient framework that automatically designs a high-performing CNN architecture for a given problem. In this framework, we introduce a new optimization objective function that combines the error rate and the information learnt by a set of feature maps using deconvolutional networks (deconvnet). The new objective function allows the hyperparameters of the CNN architecture to be optimized in a way that enhances the performance by guiding the CNN through better visualization of learnt features via deconvnet. The actual optimization of the objective function is carried out via the Nelder-Mead Method (NMM). Further, our new objective function results in much faster convergence towards a better architecture. The proposed framework has the ability to explore a CNN architecture’s numerous design choices in an efficient way and also allows effective, distributed execution and synchronization via web services. Empirically, we demonstrate that the CNN architecture designed with our approach outperforms several existing approaches in terms of its error rate. Our results are also competitive with state-of-the-art results on the MNIST dataset and perform reasonably against the state-of-the-art results on CIFAR-10 and CIFAR-100 datasets. Our approach has a significant role in increasing the depth, reducing the size of strides, and constraining some convolutional layers not followed by pooling layers in order to find a CNN architecture that produces a high recognition performance.",
"title": ""
},
{
"docid": "ccf7390abc2924e4d2136a2b82639115",
"text": "The proposition of increased innovation in network applications and reduced cost for network operators has won over the networking world to the vision of software-defined networking (SDN). With the excitement of holistic visibility across the network and the ability to program network devices, developers have rushed to present a range of new SDN-compliant hardware, software, and services. However, amidst this frenzy of activity, one key element has only recently entered the debate: Network Security. In this paper, security in SDN is surveyed presenting both the research community and industry advances in this area. The challenges to securing the network from the persistent attacker are discussed, and the holistic approach to the security architecture that is required for SDN is described. Future research directions that will be key to providing network security in SDN are identified.",
"title": ""
},
{
"docid": "e34815efa68cb1b7a269e436c838253d",
"text": "A new mobile robot prototype for inspection of overhead transmission lines is proposed. The mobile platform is composed of 3 arms. And there is a motorized rubber wheel on the end of each arm. On the two end arms, a gripper is designed to clamp firmly onto the conductors from below to secure the robot. Each arm has a motor to achieve 2 degrees of freedom which is realized by moving along a curve. It could roll over some obstacles (compression splices, vibration dampers, etc). And the robot could clear other types of obstacles (spacers, suspension clamps, etc).",
"title": ""
},
{
"docid": "e45c921effd9b5026f34ff738b63c48c",
"text": "We consider the problem of weakly supervised learning for object localization. Given a collection of images with image-level annotations indicating the presence/absence of an object, our goal is to localize the object in each image. We propose a neural network architecture called the attention network for this problem. Given a set of candidate regions in an image, the attention network first computes an attention score on each candidate region in the image. Then these candidate regions are combined together with their attention scores to form a whole-image feature vector. This feature vector is used for classifying the image. The object localization is implicitly achieved via the attention scores on candidate regions. We demonstrate that our approach achieves superior performance on several benchmark datasets.",
"title": ""
},
{
"docid": "db2553268fc3ccaddc3ec7077514655c",
"text": "Aspect extraction is a task to abstract the common properties of objects from corpora discussing them, such as reviews of products. Recent work on aspect extraction is leveraging the hierarchical relationship between products and their categories. However, such effort focuses on the aspects of child categories but ignores those from parent categories. Hence, we propose an LDA-based generative topic model inducing the two-layer categorical information (CAT-LDA), to balance the aspects of both a parent category and its child categories. Our hypothesis is that child categories inherit aspects from parent categories, controlled by the hierarchy between them. Experimental results on 5 categories of Amazon.com products show that both common aspects of parent category and the individual aspects of subcategories can be extracted to align well with the common sense. We further evaluate the manually extracted aspects of 16 products, resulting in an average hit rate of 79.10%.",
"title": ""
},
{
"docid": "6e07085f81dc4f6892e0f2aba7a8dcdd",
"text": "With the rapid growth in the number of spiraling network users and the increase in the use of communication technologies, the multi-server environment is the most common environment for widely deployed applications. Reddy et al. recently showed that Lu et al.'s biometric-based authentication scheme for multi-server environment was insecure, and presented a new authentication and key-agreement scheme for the multi-server. Reddy et al. continued to assert that their scheme was more secure and practical. After a careful analysis, however, their scheme still has vulnerabilities to well-known attacks. In this paper, the vulnerabilities of Reddy et al.'s scheme such as the privileged insider and user impersonation attacks are demonstrated. A proposal is then presented of a new biometric-based user authentication scheme for a key agreement and multi-server environment. Lastly, the authors demonstrate that the proposed scheme is more secure using widely accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool, and that it serves to satisfy all of the required security properties.",
"title": ""
},
{
"docid": "b5b7bef8ec2d38bb2821dc380a3a49bf",
"text": "Maternal uniparental disomy (UPD) 7 is found in approximately 5% of patients with Silver-Russell syndrome. By a descriptive and comparative clinical analysis of all published cases (more than 60 to date) their phenotype is updated and compared with the clinical findings in patients with Sliver-Russell syndrome (SRS) of either unexplained etiology or epimutations of the imprinting center region 1 (ICR1) on 11p15. The higher frequency of relative macrocephaly and high forehead/frontal bossing makes the face of patients with epimutations of the ICR1 on 11p15 more distinctive than the face of cases with SRS of unexplained etiology or maternal UPD 7. Because of the distinct micrognathia in the latter, their triangular facial gestalt is more pronounced than in the other groups. However, solely by clinical findings patients with maternal UPD 7 cannot be discriminated unambiguously from patients with epimutations of the ICR1 on 11p15 or SRS of unexplained etiology. Therefore, both loss of methylation of the ICR1 on 11p15 and maternal UPD 7 should be investigated for if SRS is suspected.",
"title": ""
},
{
"docid": "82779e315cf982b56ed14396603ae251",
"text": "The selection of drain current, inversion coefficient, and channel length for each MOS device in an analog circuit results in significant tradeoffs in performance. The selection of inversion coefficient, which is a numerical measure of MOS inversion, enables design freely in weak, moderate, and strong inversion and facilitates optimum design. Here, channel width required for layout is easily found and implicitly considered in performance expressions. This paper gives hand expressions motivated by the EKV MOS model and measured data for MOS device performance, inclusive of velocity saturation and other small-geometry effects. A simple spreadsheet tool is then used to predict MOS device performance and map this into complete circuit performance. Tradeoffs and optimization of performance are illustrated by the design of three, 0.18-mum CMOS operational transconductance amplifiers optimized for DC, balanced, and AC performance. Measured performance shows significant tradeoffs in voltage gain, output resistance, transconductance bandwidth, input-referred flicker noise and offset voltage, and layout area.",
"title": ""
},
{
"docid": "b49a8894277278256b6c1430bb4e4a91",
"text": "In the past years, several support vector machines (SVM) novelty detection approaches have been applied on the network intrusion detection field. The main advantage of these approaches is that they can characterize normal traffic even when trained with datasets containing not only normal traffic but also a number of attacks. Unfortunately, these algorithms seem to be accurate only when the normal traffic vastly outnumbers the number of attacks present in the dataset. A situation which can not be always hold This work presents an approach for autonomous labeling of normal traffic as a way of dealing with situations where class distribution does not present the imbalance required for SVM algorithms. In this case, the autonomous labeling process is made by SNORT, a misuse-based intrusion detection system. Experiments conducted on the 1998 DARPA dataset show that the use of the proposed autonomous labeling approach not only outperforms existing SVM alternatives but also, under some attack distributions, obtains improvements over SNORT itself.",
"title": ""
},
{
"docid": "4d5e8e1c8942256088f1c5ef0e122c9f",
"text": "Cybercrime and cybercriminal activities continue to impact communities as the steady growth of electronic information systems enables more online business. The collective views of sixty-six computer users and organizations, that have an exposure to cybercrime, were analyzed using concept analysis and mapping techniques in order to identify the major issues and areas of concern, and provide useful advice. The findings of the study show that a range of computing stakeholders have genuine concerns about the frequency of information security breaches and malware incursions (including the emergence of dangerous security and detection avoiding malware), the need for e-security awareness and education, the roles played by law and law enforcement, and the installation of current security software and systems. While not necessarily criminal in nature, some stakeholders also expressed deep concerns over the use of computers for cyberbullying, particularly where younger and school aged users are involved. The government’s future directions and recommendations for the technical and administrative management of cybercriminal activity were generally observed to be consistent with stakeholder concerns, with some users also taking practical steps to reduce cybercrime risks. a 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "b23e141ca479abecab2b00f13141b9b3",
"text": "The prediction of movement time in human-computer interfaces as undertaken using Fitts' law is reviewed. Techniques for model building are summarized and three refinements to improve the theoretical and empirical accuracy of the law are presented. Refinements include (1) the Shannon formulation for the index of task difficulty, (2) new interpretations of \"target width\" for twoand three-dimensional tasks, and (3) a technique for normalizing error rates across experimental factors . Finally, a detailed application example is developed showing the potential of Fitts' law to predict and compare the performance of user interfaces before designs are finalized.",
"title": ""
},
{
"docid": "c034cb6e72bc023a60b54d0f8316045a",
"text": "This thesis presents the design, implementation, and valid ation of a system that enables a micro air vehicle to autonomously explore and map unstruct u ed and unknown indoor environments. Such a vehicle would be of considerable use in many real-world applications such as search and rescue, civil engineering inspection, an d a host of military tasks where it is dangerous or difficult to send people. While mapping and exploration capabilities are common for ground vehicles today, air vehicles seeking t o achieve these capabilities face unique challenges. While there has been recent progres s toward sensing, control, and navigation suites for GPS-denied flight, there have been few demonstrations of stable, goal-directed flight in real environments. The main focus of this research is the development of real-ti me state estimation techniques that allow our quadrotor helicopter to fly autonomous ly in indoor, GPS-denied environments. Accomplishing this feat required the developm ent of a large integrated system that brought together many components into a cohesive packa ge. As such, the primary contribution is the development of the complete working sys tem. I show experimental results that illustrate the MAV’s ability to navigate accurat ely in unknown environments, and demonstrate that our algorithms enable the MAV to operate au tonomously in a variety of indoor environments. Thesis Supervisor: Nicholas Roy Title: Associate Professor of Aeronautics and Astronautic s",
"title": ""
}
] | scidocsrr |
872e08afa64afcdb8a0268c4fe1bc9ac | Byzantine Chain Replication | [
{
"docid": "e2b74db574db8001dace37cbecb8c4eb",
"text": "Distributed key-value stores are now a standard component of high-performance web services and cloud computing applications. While key-value stores offer significant performance and scalability advantages compared to traditional databases, they achieve these properties through a restricted API that limits object retrieval---an object can only be retrieved by the (primary and only) key under which it was inserted. This paper presents HyperDex, a novel distributed key-value store that provides a unique search primitive that enables queries on secondary attributes. The key insight behind HyperDex is the concept of hyperspace hashing in which objects with multiple attributes are mapped into a multidimensional hyperspace. This mapping leads to efficient implementations not only for retrieval by primary key, but also for partially-specified secondary attribute searches and range queries. A novel chaining protocol enables the system to achieve strong consistency, maintain availability and guarantee fault tolerance. An evaluation of the full system shows that HyperDex is 12-13x faster than Cassandra and MongoDB for finding partially specified objects. Additionally, HyperDex achieves 2-4x higher throughput for get/put operations.",
"title": ""
}
] | [
{
"docid": "434ec0510dc38ea2e7effabe8090d4ce",
"text": "Purpose: Big data analytics (BDA) increasingly provide value to firms for robust decision making and solving business problems. The purpose of this paper is to explore information quality dynamics in big data environment linking business value, user satisfaction and firm performance. Design/methodology/approach: Drawing on the appraisal-emotional response-coping framework, the authors propose a theory on information quality dynamics that helps in achieving business value, user satisfaction and firm performance with big data strategy and implementation. Information quality from BDA is conceptualized as the antecedent to the emotional response (e.g. value and satisfaction) and coping (performance). Proposed information quality dynamics are tested using data collected from 302 business analysts across various organizations in France and the USA. Findings: The findings suggest that information quality in BDA reflects four significant dimensions: completeness, currency, format and accuracy. The overall information quality has significant, positive impact on firm performance which is mediated by business value (e.g. transactional, strategic and transformational) and user satisfaction. Research limitations/implications: On the one hand, this paper shows how to operationalize information quality, business value, satisfaction and firm performance in BDA using PLS-SEM. On the other hand, it proposes an REBUS-PLS algorithm to automatically detect three groups of users sharing the same behaviors when determining the information quality perceptions of BDA. Practical implications: The study offers a set of determinants for information quality and business value in BDA projects, in order to support managers in their decision to enhance user satisfaction and firm performance. Originality/value: The paper extends big data literature by offering an appraisal-emotional response-coping framework that is well fitted for information quality modeling on firm performance. The methodological novelty lies in embracing REBUS-PLS to handle unobserved heterogeneity in the sample. Disciplines Business Publication Details Fosso Wamba, S., Akter, S., Trinchera, L. & De Bourmont, M. (2018). Turning information quality into firm performance in the big data economy. Management Decision, Online First 1-28. This journal article is available at Research Online: https://ro.uow.edu.au/gsbpapers/536",
"title": ""
},
{
"docid": "fb099587aea7f8090a4b8fd8fc2d72df",
"text": "This paper provides a review of explanations, visualizations and interactive elements of user interfaces (UI) in music recommendation systems. We call these UI features “recommendation aids”. Explanations are elements of the interface that inform the user why a certain recommendation was made. We highlight six possible goals for explanations, resulting in overall satisfaction towards the system. We found that the most of the existing music recommenders of popular systems provide no explanations, or very limited ones. Since explanations are not independent of other UI elements in recommendation process, we consider how the other elements can be used to achieve the same goals. To this end, we evaluated several existing music recommenders. We wanted to discover which of the six goals (transparency, scrutability, effectiveness, persuasiveness, efficiency and trust) the different UI elements promote in the existing music recommenders, and how they could be measured in order to create a simple framework for evaluating recommender UIs. By using this framework designers of recommendation systems could promote users’ trust and overall satisfaction towards a recommender system thereby improving the user experience with the system.",
"title": ""
},
{
"docid": "914daf0fd51e135d6d964ecbe89a5b29",
"text": "Large-scale parallel programming environments and algorithms require efficient group-communication on computing systems with failing nodes. Existing reliable broadcast algorithms either cannot guarantee that all nodes are reached or are very expensive in terms of the number of messages and latency. This paper proposes Corrected-Gossip, a method that combines Monte Carlo style gossiping with a deterministic correction phase, to construct a Las Vegas style reliable broadcast that guarantees reaching all the nodes at low cost. We analyze the performance of this method both analytically and by simulations and show how it reduces the latency and network load compared to existing algorithms. Our method improves the latency by 20% and the network load by 53% compared to the fastest known algorithm on 4,096 nodes. We believe that the principle of corrected-gossip opens an avenue for many other reliable group communication operations.",
"title": ""
},
{
"docid": "5ca1c503cba0db452d0e5969e678db97",
"text": "Deep neural network models have recently achieved state-of-the-art performance gains in a variety of natural language processing (NLP) tasks (Young, Hazarika, Poria, & Cambria, 2017). However, these gains rely on the availability of large amounts of annotated examples, without which state-of-the-art performance is rarely achievable. This is especially inconvenient for the many NLP fields where annotated examples are scarce, such as medical text. To improve NLP models in this situation, we evaluate five improvements on named entity recognition (NER) tasks when only ten annotated examples are available: (1) layer-wise initialization with pre-trained weights, (2) hyperparameter tuning, (3) combining pre-training data, (4) custom word embeddings, and (5) optimizing out-of-vocabulary (OOV) words. Experimental results show that the F1 score of 69.3% achievable by state-of-the-art models can be improved to 78.87%.",
"title": ""
},
{
"docid": "289005e2f4d666a606f7dfd9c8f7a1f4",
"text": "In this paper we present the design of a fin-like dielectric elastomer actuator (DEA) that drives a miniature autonomous underwater vehicle (AUV). The fin-like actuator is modular and independent of the body of the AUV. All electronics required to run the actuator are inside the 100 mm long 3D-printed body, allowing for autonomous mobility of the AUV. The DEA is easy to manufacture, requires no pre-stretch of the elastomers, and is completely sealed for underwater operation. The output thrust force can be tuned by stacking multiple actuation layers and modifying the Young's modulus of the elastomers. The AUV is reconfigurable by a shift of its center of mass, such that both planar and vertical swimming can be demonstrated on a single vehicle. For the DEA we measured thrust force and swimming speed for various actuator designs ran at frequencies from 1 Hz to 5 Hz. For the AUV we demonstrated autonomous planar swimming and closed-loop vertical diving. The actuators capable of outputting the highest thrust forces can power the AUV to swim at speeds of up to 0.55 body lengths per second. The speed falls in the upper range of untethered swimming robots powered by soft actuators. Our tunable DEAs also demonstrate the potential to mimic the undulatory motions of fish fins.",
"title": ""
},
{
"docid": "313a902049654e951860b9225dc5f4e8",
"text": "Financial portfolio management is the process of constant redistribution of a fund into different financial products. This paper presents a financial-model-free Reinforcement Learning framework to provide a deep machine learning solution to the portfolio management problem. The framework consists of the Ensemble of Identical Independent Evaluators (EIIE) topology, a Portfolio-Vector Memory (PVM), an Online Stochastic Batch Learning (OSBL) scheme, and a fully exploiting and explicit reward function. This framework is realized in three instants in this work with a Convolutional Neural Network (CNN), a basic Recurrent Neural Network (RNN), and a Long Short-Term Memory (LSTM). They are, along with a number of recently reviewed or published portfolio-selection strategies, examined in three back-test experiments with a trading period of 30 minutes in a cryptocurrency market. Cryptocurrencies are electronic and decentralized alternatives to government-issued money, with Bitcoin as the best-known example of a cryptocurrency. All three instances of the framework monopolize the top three positions in all experiments, outdistancing other compared trading algorithms. Although with a high commission rate of 0.25% in the backtests, the framework is able to achieve at least 4-fold returns in 50 days.",
"title": ""
},
{
"docid": "066b4130dbc9c36d244e5da88936dfc4",
"text": "Real-time strategy (RTS) games have drawn great attention in the AI research community, for they offer a challenging and rich testbed for both machine learning and AI techniques. Due to their enormous state spaces and possible map configurations, learning good and generalizable representations for machine learning is crucial to build agents that can perform well in complex RTS games. In this paper we present a convolutional neural network approach to learn an evaluation function that focuses on learning general features that are independent of the map configuration or size. We first train and evaluate the network on a winner prediction task on a dataset collected with a small set of maps with a fixed size. Then we evaluate the network’s generalizability to three set of larger maps. by using it as an evaluation function in the context of Monte Carlo Tree Search. Our results show that the presented architecture can successfully capture general and map-independent features applicable to more complex RTS situations.",
"title": ""
},
{
"docid": "6fdd045448a1425ec1b9ac5d9bca9fa0",
"text": "Fluorescence has been observed directly across the band gap of semiconducting carbon nanotubes. We obtained individual nanotubes, each encased in a cylindrical micelle, by ultrasonically agitating an aqueous dispersion of raw single-walled carbon nanotubes in sodium dodecyl sulfate and then centrifuging to remove tube bundles, ropes, and residual catalyst. Aggregation of nanotubes into bundles otherwise quenches the fluorescence through interactions with metallic tubes and substantially broadens the absorption spectra. At pH less than 5, the absorption and emission spectra of individual nanotubes show evidence of band gap-selective protonation of the side walls of the tube. This protonation is readily reversed by treatment with base or ultraviolet light.",
"title": ""
},
{
"docid": "794ad922f93b85e2195b3c85665a8202",
"text": "The paper shows how to create a probabilistic graph for WordNet. A node is created for every word and phrase in WordNet. An edge between two nodes is labeled with the probability that a user that is interested in the source concept will also be interested in the destination concept. For example, an edge with weight 0.3 between \"canine\" and \"dog\" indicates that there is a 30% probability that a user who searches for \"canine\" will be interested in results that contain the word \"dog\". We refer to the graph as probabilistic because we enforce the constraint that the sum of the weights of all the edges that go out of a node add up to one. Structural (e.g., the word \"canine\" is a hypernym (i.e., kind of) of the word \"dog\") and textual (e.g., the word \"canine\" appears in the textual definition of the word \"dog\") data from WordNet is used to create a Markov logic network, that is, a set of first order formulas with probabilities. The Markov logic network is then used to compute the weights of the edges in the probabilistic graph. We experimentally validate the quality of the data in the probabilistic graph on two independent benchmarks: Miller and Charles and WordSimilarity-353.",
"title": ""
},
{
"docid": "c4f9b3c863323efd6eca0074c296addf",
"text": "Lip reading, the ability to recognize text information from the movement of a speaker’s mouth, is a difficult and challenging task. Recently, the end-to-end model that maps a variable-length sequence of video frames to text performs poorly in real life situation where people unintentionally move the lips instead of speaking. The goal of this work is to improve the performance of lip reading task in real life. The model proposed in this article consists of two networks that are visual to audio feature network and audio feature to text network. Our experiments showed that the model proposed in this article can achieve 92.76% accuracy in lip reading task on the dataset that the unintentional lips movement was added.",
"title": ""
},
{
"docid": "9b08be9d250822850fda92819774248e",
"text": "In recent years, recommendation systems have been widely used in various commercial platforms to provide recommendations for users. Collaborative filtering algorithms are one of the main algorithms used in recommendation systems. Such algorithms are simple and efficient; however, the sparsity of the data and the scalability of the method limit the performance of these algorithms, and it is difficult to further improve the quality of the recommendation results. Therefore, a model combining a collaborative filtering recommendation algorithm with deep learning technology is proposed, therein consisting of two parts. First, the model uses a feature representation method based on a quadric polynomial regression model, which obtains the latent features more accurately by improving upon the traditional matrix factorization algorithm. Then, these latent features are regarded as the input data of the deep neural network model, which is the second part of the proposed model and is used to predict the rating scores. Finally, by comparing with other recommendation algorithms on three public datasets, it is verified that the recommendation performance can be effectively improved by our model.",
"title": ""
},
{
"docid": "5bf3c1f19c368c1948db91bbd65da84b",
"text": "As robot reasoning becomes more complex, debugging becomes increasingly hard based solely on observable behaviour, even for robot designers and technical specialists. Similarly, nonspecialist users find it hard to create useful mental models of robot reasoning solely from observed behaviour. The EPSRC Principles of Robotics mandate that our artefacts should be transparent, but what does this mean in practice, and how does transparency affect both trust and utility? We investigate this relationship in the literature and find it to be complex, particularly in non industrial environments where transparency may have a wider range of effects on trust and utility depending on the application and purpose of the robot. We outline our programme of research to support our assertion that it is nevertheless possible to create transparent agents that are emotionally engaging despite having a transparent machine nature.",
"title": ""
},
{
"docid": "2089349f4f1dae4d07dfec8481ba748e",
"text": "A signiicant limitation of neural networks is that the representations they learn are usually incomprehensible to humans. We present a novel algorithm, Trepan, for extracting comprehensible, symbolic representations from trained neural networks. Our algorithm uses queries to induce a decision tree that approximates the concept represented by a given network. Our experiments demonstrate that Trepan is able to produce decision trees that maintain a high level of delity to their respective networks while being com-prehensible and accurate. Unlike previous work in this area, our algorithm is general in its applicability and scales well to large networks and problems with high-dimensional input spaces.",
"title": ""
},
{
"docid": "647ede4f066516a0343acef725e51d01",
"text": "This work proposes a dual-polarized planar antenna; two post-wall slotted waveguide arrays with orthogonal 45/spl deg/ linearly-polarized waves interdigitally share the aperture on a single layer substrate. Uniform excitation of the two-dimensional slot array is confirmed by experiment in the 25 GHz band. The isolation between two slot arrays is also investigated in terms of the relative displacement along the radiation waveguide axis in the interdigital structure. The isolation is 33.0 dB when the relative shift of slot position between the two arrays is -0.5/spl lambda//sub g/, while it is only 12.8 dB when there is no shift. The cross-polarization level in the far field is -25.2 dB for a -0.5/spl lambda//sub g/ shift, which is almost equal to that of the isolated single polarization array. It is degraded down to -9.6 dB when there is no shift.",
"title": ""
},
{
"docid": "038637eebbf8474bf15dab1c9a81ed6d",
"text": "As the surplus market of failure analysis equipment continues to grow, the cost of performing invasive IC analysis continues to diminish. Hardware vendors in high-security applications utilize security by obscurity to implement layers of protection on their devices. High-security applications must assume that the attacker is skillful, well-equipped and well-funded. Modern security ICs are designed to make readout of decrypted data and changes to security configuration of the device impossible. Countermeasures such as meshes and attack sensors thwart many state of the art attacks. Because of the perceived difficulty and lack of publicly known attacks, the IC backside has largely been ignored by the security community. However, the backside is currently the weakest link in modern ICs because no devices currently on the market are protected against fully-invasive attacks through the IC backside. Fully-invasive backside attacks circumvent all known countermeasures utilized by modern implementations. In this work, we demonstrate the first two practical fully-invasive attacks against the IC backside. Our first attack is fully-invasive backside microprobing. Using this attack we were able to capture decrypted data directly from the data bus of the target IC's CPU core. We also present a fully invasive backside circuit edit. With this attack we were able to set security and configuration fuses of the device to arbitrary values.",
"title": ""
},
{
"docid": "0cce6366df945f079dbb0b90d79b790e",
"text": "Fourier ptychographic microscopy (FPM) is a recently developed imaging modality that uses angularly varying illumination to extend a system's performance beyond the limit defined by its optical components. The FPM technique applies a novel phase-retrieval procedure to achieve resolution enhancement and complex image recovery. In this Letter, we compare FPM data to theoretical prediction and phase-shifting digital holography measurement to show that its acquired phase maps are quantitative and artifact-free. We additionally explore the relationship between the achievable spatial and optical thickness resolution offered by a reconstructed FPM phase image. We conclude by demonstrating enhanced visualization and the collection of otherwise unobservable sample information using FPM's quantitative phase.",
"title": ""
},
{
"docid": "d4869ee3fbc997f865cc16e9e1200d0b",
"text": "The potential of mathematical models is widely acknowledged for examining components and interactions of natural systems, estimating the changes and uncertainties on outcomes, and fostering communication between scientists with different backgrounds and between scientists, managers and the community. For favourable reception of models, a systematic accrual of a good knowledge base is crucial for both science and decision-making. As the roles of models grow in importance, there is an increase in the need for appropriate methods with which to test their quality and performance. For biophysical models, the heterogeneity of data and the range of factors influencing usefulness of their outputs often make it difficult for full analysis and assessment. As a result, modelling studies in the domain of natural sciences often lack elements of good modelling practice related to model validation, that is correspondence of models to its intended purpose. Here we review validation issues and methods currently available for assessing the quality of biophysical models. The review covers issues of validation purpose, the robustness of model results, data quality, model prediction and model complexity. The importance of assessing input data quality and interpretation of phenomena is also addressed. Details are then provided on the range of measures commonly used for validation. Requirements for a methodology for assessment during the entire model-cycle are synthesised. Examples are used from a variety of modelling studies which mainly include agronomic modelling, e.g. crop growth and development, climatic modelling, e.g. climate scenarios, and hydrological modelling, e.g. soil hydrology, but the principles are essentially applicable to any area. It is shown that conducting detailed validation requires multi-faceted knowledge, and poses substantial scientific and technical challenges. Special emphasis is placed on using combined multiple statistics to expand our horizons in validation whilst also tailoring the validation requirements to the specific objectives of the application.",
"title": ""
},
{
"docid": "22160219ffa40e4e42f1519fe25ecb6a",
"text": "We propose a new prior distribution for classical (non-hierarchical) logistic regression models, constructed by first scaling all nonbinary variables to have mean 0 and standard deviation 0.5, and then placing independent Student-t prior distributions on the coefficients. As a default choice, we recommend the Cauchy distribution with center 0 and scale 2.5, which in the simplest setting is a longer-tailed version of the distribution attained by assuming one-half additional success and one-half additional failure in a logistic regression. Cross-validation on a corpus of datasets shows the Cauchy class of prior distributions to outperform existing implementations of Gaussian and Laplace priors. We recommend this prior distribution as a default choice for routine applied use. It has the advantage of always giving answers, even when there is complete separation in logistic regression (a common problem, even when the sample size is large and the number of predictors is small) and also automatically applying more shrinkage to higherorder interactions. This can be useful in routine data analysis as well as in automated procedures such as chained equations for missing-data imputation. We implement a procedure to fit generalized linear models in R with the Student-t prior distribution by incorporating an approximate EM algorithm into the usual iteratively weighted least squares. We illustrate with several applications, including a series of logistic regressions predicting voting preferences, a small bioassay experiment, and an imputation model for a public health data set.",
"title": ""
},
{
"docid": "0824992bb506ac7c8a631664bf608086",
"text": "There are many image fusion methods that can be used to produce high-resolution multispectral images from a high-resolution panchromatic image and low-resolution multispectral images. Starting from the physical principle of image formation, this paper presents a comprehensive framework, the general image fusion (GIF) method, which makes it possible to categorize, compare, and evaluate the existing image fusion methods. Using the GIF method, it is shown that the pixel values of the high-resolution multispectral images are determined by the corresponding pixel values of the low-resolution panchromatic image, the approximation of the high-resolution panchromatic image at the low-resolution level. Many of the existing image fusion methods, including, but not limited to, intensity-hue-saturation, Brovey transform, principal component analysis, high-pass filtering, high-pass modulation, the a/spl grave/ trous algorithm-based wavelet transform, and multiresolution analysis-based intensity modulation (MRAIM), are evaluated and found to be particular cases of the GIF method. The performance of each image fusion method is theoretically analyzed based on how the corresponding low-resolution panchromatic image is computed and how the modulation coefficients are set. An experiment based on IKONOS images shows that there is consistency between the theoretical analysis and the experimental results and that the MRAIM method synthesizes the images closest to those the corresponding multisensors would observe at the high-resolution level.",
"title": ""
},
{
"docid": "5995a2775a6a10cf4f2bd74a2959935d",
"text": "Artemisinin-based combination therapy is recommended to treat Plasmodium falciparum worldwide, but observations of longer artemisinin (ART) parasite clearance times (PCTs) in Southeast Asia are widely interpreted as a sign of potential ART resistance. In search of an in vitro correlate of in vivo PCT after ART treatment, a ring-stage survival assay (RSA) of 0–3 h parasites was developed and linked to polymorphisms in the Kelch propeller protein (K13). However, RSA remains a laborious process, involving heparin, Percoll gradient, and sorbitol treatments to obtain rings in the 0–3 h window. Here two alternative RSA protocols are presented and compared to the standard Percoll-based method, one highly stage-specific and one streamlined for laboratory application. For all protocols, P. falciparum cultures were synchronized with 5 % sorbitol treatment twice over two intra-erythrocytic cycles. For a filtration-based RSA, late-stage schizonts were passed through a 1.2 μm filter to isolate merozoites, which were incubated with uninfected erythrocytes for 45 min. The erythrocytes were then washed to remove lysis products and further incubated until 3 h post-filtration. Parasites were pulsed with either 0.1 % dimethyl sulfoxide (DMSO) or 700 nM dihydroartemisinin in 0.1 % DMSO for 6 h, washed twice in drug-free media, and incubated for 66–90 h, when survival was assessed by microscopy. For a sorbitol-only RSA, synchronized young (0–3 h) rings were treated with 5 % sorbitol once more prior to the assay and adjusted to 1 % parasitaemia. The drug pulse, incubation, and survival assessment were as described above. Ring-stage survival of P. falciparum parasites containing either the K13 C580 or C580Y polymorphism (associated with low and high RSA survival, respectively) were assessed by the described filtration and sorbitol-only methods and produced comparable results to the reported Percoll gradient RSA. Advantages of both new methods include: fewer reagents, decreased time investment, and fewer procedural steps, with enhanced stage-specificity conferred by the filtration method. Assessing P. falciparum ART sensitivity in vitro via RSA can be streamlined and accurately evaluated in the laboratory by filtration or sorbitol synchronization methods, thus increasing the accessibility of the assay to research groups.",
"title": ""
}
] | scidocsrr |
43fda67994521863cf18d5b59f1c239d | Re-ranking Person Re-identification with k-Reciprocal Encoding | [
{
"docid": "2bc30693be1c5855a9410fb453128054",
"text": "Person re-identification is to match pedestrian images from disjoint camera views detected by pedestrian detectors. Challenges are presented in the form of complex variations of lightings, poses, viewpoints, blurring effects, image resolutions, camera settings, occlusions and background clutter across camera views. In addition, misalignment introduced by the pedestrian detector will affect most existing person re-identification methods that use manually cropped pedestrian images and assume perfect detection. In this paper, we propose a novel filter pairing neural network (FPNN) to jointly handle misalignment, photometric and geometric transforms, occlusions and background clutter. All the key components are jointly optimized to maximize the strength of each component when cooperating with others. In contrast to existing works that use handcrafted features, our method automatically learns features optimal for the re-identification task from data. The learned filter pairs encode photometric transforms. Its deep architecture makes it possible to model a mixture of complex photometric and geometric transforms. We build the largest benchmark re-id dataset with 13, 164 images of 1, 360 pedestrians. Unlike existing datasets, which only provide manually cropped pedestrian images, our dataset provides automatically detected bounding boxes for evaluation close to practical applications. Our neural network significantly outperforms state-of-the-art methods on this dataset.",
"title": ""
}
] | [
{
"docid": "141c28bfbeb5e71dc68d20b6220794c3",
"text": "The development of topical cosmetic anti-aging products is becoming increasingly sophisticated. This is demonstrated by the benefit agents selected and the scientific approaches used to identify them, treatment protocols that increasingly incorporate multi-product regimens, and the level of rigor in the clinical testing used to demonstrate efficacy. Consistent with these principles, a new cosmetic anti-aging regimen was recently developed. The key product ingredients were identified based on an understanding of the key mechanistic themes associated with aging at the genomic level coupled with appropriate in vitro testing. The products were designed to provide optimum benefits when used in combination in a regimen format. This cosmetic regimen was then tested for efficacy against the appearance of facial wrinkles in a 24-week clinical trial compared with 0.02% tretinoin, a recognized benchmark prescription treatment for facial wrinkling. The cosmetic regimen significantly improved wrinkle appearance after 8 weeks relative to tretinoin and was better tolerated. Wrinkle appearance benefits from the two treatments in cohorts of subjects who continued treatment through 24 weeks were also comparable.",
"title": ""
},
{
"docid": "14ba02b92184c21cbbe2344313e09c23",
"text": "Smart meters are at high risk to be an attack target or to be used as an attacking means of malicious users because they are placed at the closest location to users in the smart gridbased infrastructure. At present, Korea is proceeding with 'Smart Grid Advanced Metering Infrastructure (AMI) Construction Project', and has selected Device Language Message Specification/ COmpanion Specification for Energy Metering (DLMS/COSEM) protocol for the smart meter communication. However, the current situation is that the vulnerability analysis technique is still insufficient to be applied to DLMS/COSEM-based smart meters. Therefore, we propose a new fuzzing architecture for analyzing vulnerabilities which is applicable to actual DLMS/COSEM-based smart meter devices. In addition, this paper presents significant case studies for verifying proposed fuzzing architecture through conducting the vulnerability analysis of the experimental results from real DLMS/COSEM-based smart meter devices used in Korea SmartGrid Testbed.",
"title": ""
},
{
"docid": "dc8d9a7da61aab907ee9def56dfbd795",
"text": "The ability to detect change-points in a dynamic network or a time series of graphs is an increasingly important task in many applications of the emerging discipline of graph signal processing. This paper formulates change-point detection as a hypothesis testing problem in terms of a generative latent position model, focusing on the special case of the Stochastic Block Model time series. We analyze two classes of scan statistics, based on distinct underlying locality statistics presented in the literature. Our main contribution is the derivation of the limiting properties and power characteristics of the competing scan statistics. Performance is compared theoretically, on synthetic data, and empirically, on the Enron email corpus.",
"title": ""
},
{
"docid": "446af0ad077943a77ac4a38fd84df900",
"text": "We investigate the manufacturability of 20-nm double-gate and FinFET devices in integrated circuits by projecting process tolerances. Two important factors affecting the sensitivity of device electrical parameters to physical variations were quantitatively considered. The quantum effect was computed using the density gradient method and the sensitivity of threshold voltage to random dopant fluctuation was studied by Monte Carlo simulation. Our results show the 3 value ofVT variation caused by discrete impurity fluctuation can be greater than 100%. Thus, engineering the work function of gate materials and maintaining a nearly intrinsic channel is more desirable. Based on a design with an intrinsic channel and ideal gate work function, we analyzed the sensitivity of device electrical parameters to several important physical fluctuations such as the variations in gate length, body thickness, and gate dielectric thickness. We found that quantum effects have great impact on the performance of devices. As a result, the device electrical behavior is sensitive to small variations of body thickness. The effect dominates over the effects produced by other physical fluctuations. To achieve a relative variation of electrical parameters comparable to present practice in industry, we face a challenge of fin width control (less than 1 nm 3 value of variation) for the 20-nm FinFET devices. The constraint of the gate length variation is about 10 15%. We estimate a tolerance of 1 2 A 3 value of oxide thickness variation and up to 30% front-back oxide thickness mismatch.",
"title": ""
},
{
"docid": "41aa05455471ecd660599f4ec285ff29",
"text": "The recent progress of human parsing techniques has been largely driven by the availability of rich data resources. In this work, we demonstrate some critical discrepancies between the current benchmark datasets and the real world human parsing scenarios. For instance, all the human parsing datasets only contain one person per image, while usually multiple persons appear simultaneously in a realistic scene. It is more practically demanded to simultaneously parse multiple persons, which presents a greater challenge to modern human parsing methods. Unfortunately, absence of relevant data resources severely impedes the development of multiple-human parsing methods. To facilitate future human parsing research, we introduce the Multiple-Human Parsing (MHP) dataset, which contains multiple persons in a real world scene per single image. The MHP dataset contains various numbers of persons (from 2 to 16) per image with 18 semantic classes for each parsing annotation. Persons appearing in the MHP images present sufficient variations in pose, occlusion and interaction. To tackle the multiple-human parsing problem, we also propose a novel Multiple-Human Parser (MH-Parser), which considers both the global context and local cues for each person in the parsing process. The model is demonstrated to outperform the naive “detect-and-parse” approach by a large margin, which will serve as a solid baseline and help drive the future research in real world human parsing.",
"title": ""
},
{
"docid": "c215a497d39f4f95a9fc720debb14b05",
"text": "Adding frequency reconfigurability to a compact metamaterial-inspired antenna is investigated. The antenna is a printed monopole with an incorporated slot and is fed by a coplanar waveguide (CPW) line. This antenna was originally inspired from the concept of negative-refractive-index metamaterial transmission lines and exhibits a dual-band behavior. By using a varactor diode, the lower band (narrowband) of the antenna, which is due to radiation from the incorporated slot, can be tuned over a broad frequency range, while the higher band (broadband) remains effectively constant. A detailed equivalent circuit model is developed that predicts the frequency-tuning behavior for the lower band of the antenna. The circuit model shows the involvement of both CPW even and odd modes in the operation of the antenna. Experimental results show that, for a varactor diode capacitance approximately ranging from 0.1-0.7 pF, a tuning range of 1.6-2.23 GHz is achieved. The size of the antenna at the maximum frequency is 0.056 λ0 × 0.047 λ0 and the antenna is placed over a 0.237 λ0 × 0.111 λ0 CPW ground plane (λ0 being the wavelength in vacuum).",
"title": ""
},
{
"docid": "d8d102c3d6ac7d937bb864c69b4d3cd9",
"text": "Question Answering (QA) systems are becoming the inspiring model for the future of search engines. While recently, underlying datasets for QA systems have been promoted from unstructured datasets to structured datasets with highly semantic-enriched metadata, but still question answering systems involve serious challenges which cause to be far beyond desired expectations. In this paper, we raise the challenges for building a Question Answering (QA) system especially with the focus of employing structured data (i.e. knowledge graph). This paper provide an exhaustive insight of the known challenges, so far. Thus, it helps researchers to easily spot open rooms for the future research agenda.",
"title": ""
},
{
"docid": "c3a6a72c9d738656f356d67cd5ce6c47",
"text": "Most doors are controlled by persons with the use of keys, security cards, password or pattern to open the door. Theaim of this paper is to help users forimprovement of the door security of sensitive locations by using face detection and recognition. Face is a complex multidimensional structure and needs good computing techniques for detection and recognition. This paper is comprised mainly of three subsystems: namely face detection, face recognition and automatic door access control. Face detection is the process of detecting the region of face in an image. The face is detected by using the viola jones method and face recognition is implemented by using the Principal Component Analysis (PCA). Face Recognition based on PCA is generally referred to as the use of Eigenfaces.If a face is recognized, it is known, else it is unknown. The door will open automatically for the known person due to the command of the microcontroller. On the other hand, alarm will ring for the unknown person. Since PCA reduces the dimensions of face images without losing important features, facial images for many persons can be stored in the database. Although many training images are used, computational efficiency cannot be decreased significantly. Therefore, face recognition using PCA can be more useful for door security system than other face recognition schemes.",
"title": ""
},
{
"docid": "78ce06926ea3b2012277755f0916fbb7",
"text": "We present a review of the historical evolution of software engineering, intertwining it with the history of knowledge engineering because \"those who cannot remember the past are condemned to repeat it.\" This retrospective represents a further step forward to understanding the current state of both types of engineerings; history has also positive experiences; some of them we would like to remember and to repeat. Two types of engineerings had parallel and divergent evolutions but following a similar pattern. We also define a set of milestones that represent a convergence or divergence of the software development methodologies. These milestones do not appear at the same time in software engineering and knowledge engineering, so lessons learned in one discipline can help in the evolution of the other one.",
"title": ""
},
{
"docid": "d8e60dc8378fe39f698eede2b6687a0f",
"text": "Today's complex software systems are neither secure nor reliable. The rudimentary software protection primitives provided by current hardware forces systems to run many distrusting software components (e.g., procedures, libraries, plugins, modules) in the same protection domain, or otherwise suffer degraded performance from address space switches.\n We present CODOMs (COde-centric memory DOMains), a novel architecture that can provide finer-grained isolation between software components with effectively zero run-time overhead, all at a fraction of the complexity of other approaches. An implementation of CODOMs in a cycle-accurate full-system x86 simulator demonstrates that with the right hardware support, finer-grained protection and run-time performance can peacefully coexist.",
"title": ""
},
{
"docid": "dd211105651b376b40205eb16efe1c25",
"text": "WBAN based medical-health technologies have great potential for continuous monitoring in ambulatory settings, early detection of abnormal conditions, and supervised rehabilitation. They can provide patients with increased confidence and a better quality of life, and promote healthy behavior and health awareness. Continuous monitoring with early detection likely has the potential to provide patients with an increased level of confidence, which in turn may improve quality of life. In addition, ambulatory monitoring will allow patients to engage in normal activities of daily life, rather than staying at home or close to specialized medical services. Last but not least, inclusion of continuous monitoring data into medical databases will allow integrated analysis of all data to optimize individualized care and provide knowledge discovery through integrated data mining. Indeed, with the current technological trend toward integration of processors and wireless interfaces, we will soon have coin-sized intelligent sensors. They will be applied as skin patches, seamlessly integrated into a personal monitoring system, and worn for extended periods of time.",
"title": ""
},
{
"docid": "8b7cb051224008ba3e1bf91bac5e9d21",
"text": "The Internet of things aspires to connect anyone with anything at any point of time at any place. Internet of Thing is generally made up of three-layer architecture. Namely Perception, Network and Application layers. A lot of security principles should be enabled at each layer for proper and efficient working of these applications. This paper represents the overview of Security principles, Security Threats and Security challenges at the application layer and its countermeasures to overcome those challenges. The Application layer plays an important role in all of the Internet of Thing applications. The most widely used application layer protocol is MQTT. The security threats for Application Layer Protocol MQTT is particularly selected and evaluated. Comparison is done between different Application layer protocols and security measures for those protocols. Due to the lack of common standards for IoT protocols, a lot of issues are considered while choosing the particular protocol.",
"title": ""
},
{
"docid": "79465d290ab299b9d75e9fa617d30513",
"text": "In this paper we describe computational experience in solving unconstrained quadratic zero-one problems using a branch and bound algorithm. The algorithm incorporates dynamic preprocessing techniques for forcing variables and heuristics to obtain good starting points. Computational results and comparisons with previous studies on several hundred test problems with dimensions up to 200 demonstrate the efficiency of our algorithm. In dieser Arbeit beschreiben wir rechnerische Erfahrungen bei der Lösung von unbeschränkten quadratischen Null-Eins-Problemen mit einem “Branch and Bound”-Algorithmus. Der Algorithmus erlaubt dynamische Vorbereitungs-Techniken zur Erzwingung ausgewählter Variablen und Heuristiken zur Wahl von guten Startpunkten. Resultate von Berechnungen und Vergleiche mit früheren Arbeiten mit mehreren hundert Testproblemen mit Dimensionen bis 200 zeigen die Effizienz unseres Algorithmus.",
"title": ""
},
{
"docid": "b27b164a7ff43b8f360167e5f886f18a",
"text": "Segmentation and grouping of image elements is required to proceed with image recognition. Due to the fact that the images are two dimensional (2D) representations of the real three dimensional (3D) scenes, the information of the third dimension, like geometrical relations between the objects that are important for reasonable segmentation and grouping, are lost in 2D image representations. Computer stereo vision implies on understanding information stored in 3D-scene. Techniques for stereo computation are observed in this paper. The methods for solving the correspondence problem in stereo image matching are presented. The process of 3D-scene reconstruction from stereo image pairs and extraction of parameters important for image understanding are described. Occluded and surrounding areas in stereo image pairs are stressed out as important for image understanding.",
"title": ""
},
{
"docid": "4cc71db87682a96ddee09e49a861142f",
"text": "BACKGROUND\nReadiness is an integral and preliminary step in the successful implementation of telehealth services into existing health systems within rural communities.\n\n\nMETHODS AND MATERIALS\nThis paper details and critiques published international peer-reviewed studies that have focused on assessing telehealth readiness for rural and remote health. Background specific to readiness and change theories is provided, followed by a critique of identified telehealth readiness models, including a commentary on their readiness assessment tools.\n\n\nRESULTS\nFour current readiness models resulted from the search process. The four models varied across settings, such as rural outpatient practices, hospice programs, rural communities, as well as government agencies, national associations, and organizations. All models provided frameworks for readiness tools. Two specifically provided a mechanism by which communities could be categorized by their level of telehealth readiness.\n\n\nDISCUSSION\nCommon themes across models included: an appreciation of practice context, strong leadership, and a perceived need to improve practice. Broad dissemination of these telehealth readiness models and tools is necessary to promote awareness and assessment of readiness. This will significantly aid organizations to facilitate the implementation of telehealth.",
"title": ""
},
{
"docid": "44fee78f33e4d5c6d9c8b0126b1d5830",
"text": "This paper discusses an industrial case study in which data mining has been applied to solve a quality engineering problem in electronics assembly. During the assembly process, solder balls occur underneath some components of printed circuit boards. The goal is to identify the cause of solder defects in a circuit board using a data mining approach. Statistical process control and design of experiment approaches did not provide conclusive results. The paper discusses features considered in the study, data collected, and the data mining solution approach to identify causes of quality faults in an industrial application.",
"title": ""
},
{
"docid": "9ba6a2042e99c3ace91f0fc017fa3fdd",
"text": "This paper proposes a two-element multi-input multi-output (MIMO) open-slot antenna implemented on the display ground plane of a laptop computer for eight-band long-term evolution/wireless wide-area network operations. The metal surroundings of the antennas have been well integrated as a part of the radiation structure. In the single-element open-slot antenna, the nearby hinge slot (which is bounded by two ground planes and two hinges) is relatively large as compared with the open slot itself and acts as a good radiator. In the MIMO antenna consisting of two open-slot elements, a T slot is embedded in the display ground plane and is connected to the hinge slot. The T and hinge slots when connected behave as a radiator; whereas, the T slot itself functions as an isolation element. With the isolation element, simulated isolations between the two elements of the MIMO antenna are raised from 8.3–11.2 to 15–17.1 dB in 698–960 MHz and from 12.1–21 to 15.9–26.7 dB in 1710–2690 MHz. Measured isolations with the isolation element in the desired low- and high-frequency ranges are 17.6–18.8 and 15.2–23.5 dB, respectively. Measured and simulated efficiencies for the two-element MIMO antenna with either element excited are both larger than 50% in the desired operating frequency bands.",
"title": ""
},
{
"docid": "ad3add7522b3a58359d36e624e9e65f7",
"text": "In this paper, global and local prosodic features extracted from sentence, word and syllables are proposed for speech emotion or affect recognition. In this work, duration, pitch, and energy values are used to represent the prosodic information, for recognizing the emotions from speech. Global prosodic features represent the gross statistics such as mean, minimum, maximum, standard deviation, and slope of the prosodic contours. Local prosodic features represent the temporal dynamics in the prosody. In this work, global and local prosodic features are analyzed separately and in combination at different levels for the recognition of emotions. In this study, we have also explored the words and syllables at different positions (initial, middle, and final) separately, to analyze their contribution towards the recognition of emotions. In this paper, all the studies are carried out using simulated Telugu emotion speech corpus (IITKGP-SESC). These results are compared with the results of internationally known Berlin emotion speech corpus (Emo-DB). Support vector machines are used to develop the emotion recognition models. The results indicate that, the recognition performance using local prosodic features is better compared to the performance of global prosodic features. Words in the final position of the sentences, syllables in the final position of the words exhibit more emotion discriminative information compared to the words and syllables present in the other positions. K.S. Rao ( ) · S.G. Koolagudi · R.R. Vempada School of Information Technology, Indian Institute of Technology Kharagpur, Kharagpur 721302, West Bengal, India e-mail: [email protected] S.G. Koolagudi e-mail: [email protected] R.R. Vempada e-mail: [email protected]",
"title": ""
},
{
"docid": "33ed6ab1eef74e6ba6649ff5a85ded6b",
"text": "With the rapid increasing of smart phones and their embedded sensing technologies, mobile crowd sensing (MCS) becomes an emerging sensing paradigm for performing large-scale sensing tasks. One of the key challenges of large-scale mobile crowd sensing systems is how to effectively select the minimum set of participants from the huge user pool to perform the tasks and achieve certain level of coverage. In this paper, we introduce a new MCS architecture which leverages the cached sensing data to fulfill partial sensing tasks in order to reduce the size of selected participant set. We present a newly designed participant selection algorithm with caching and evaluate it via extensive simulations with a real-world mobile dataset.",
"title": ""
},
{
"docid": "f13ffbb31eedcf46df1aaecfbdf61be9",
"text": "Finding one's way in a large-scale environment may engage different cognitive processes than following a familiar route. The neural bases of these processes were investigated using functional MRI (fMRI). Subjects found their way in one virtual-reality town and followed a well-learned route in another. In a control condition, subjects followed a visible trail. Within subjects, accurate wayfinding activated the right posterior hippocampus. Between-subjects correlations with performance showed that good navigators (i.e., accurate wayfinders) activated the anterior hippocampus during wayfinding and head of caudate during route following. These results coincide with neurophysiological evidence for distinct response (caudate) and place (hippocampal) representations supporting navigation. We argue that the type of representation used influences both performance and concomitant fMRI activation patterns.",
"title": ""
}
] | scidocsrr |
bef6b341dc12a62d9166bd111e7344e0 | HOT BUTTONS AND TIME SINKS: THE EFFECTS OF ELECTRONIC COMMUNICATION DURING NONWORK TIME ON EMOTIONS AND WORK-NONWORK CONFLICT | [
{
"docid": "eb4c25caba8c3e6f06d3cabe6c004cd5",
"text": "The greater power of bad events over good ones is found in everyday events, major life events (e.g., trauma), close relationship outcomes, social network patterns, interpersonal interactions, and learning processes. Bad emotions, bad parents, and bad feedback have more impact than good ones, and bad information is processed more thoroughly than good. The self is more motivated to avoid bad self-definitions than to pursue good ones. Bad impressions and bad stereotypes are quicker to form and more resistant to disconfirmation than good ones. Various explanations such as diagnosticity and salience help explain some findings, but the greater power of bad events is still found when such variables are controlled. Hardly any exceptions (indicating greater power of good) can be found. Taken together, these findings suggest that bad is stronger than good, as a general principle across a broad range of psychological phenomena.",
"title": ""
}
] | [
{
"docid": "59eaa9f4967abdc1c863f8fb256ae966",
"text": "CONTEXT\nThe projected expansion in the next several decades of the elderly population at highest risk for Parkinson disease (PD) makes identification of factors that promote or prevent the disease an important goal.\n\n\nOBJECTIVE\nTo explore the association of coffee and dietary caffeine intake with risk of PD.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nData were analyzed from 30 years of follow-up of 8004 Japanese-American men (aged 45-68 years) enrolled in the prospective longitudinal Honolulu Heart Program between 1965 and 1968.\n\n\nMAIN OUTCOME MEASURE\nIncident PD, by amount of coffee intake (measured at study enrollment and 6-year follow-up) and by total dietary caffeine intake (measured at enrollment).\n\n\nRESULTS\nDuring follow-up, 102 men were identified as having PD. Age-adjusted incidence of PD declined consistently with increased amounts of coffee intake, from 10.4 per 10,000 person-years in men who drank no coffee to 1.9 per 10,000 person-years in men who drank at least 28 oz/d (P<.001 for trend). Similar relationships were observed with total caffeine intake (P<.001 for trend) and caffeine from non-coffee sources (P=.03 for trend). Consumption of increasing amounts of coffee was also associated with lower risk of PD in men who were never, past, and current smokers at baseline (P=.049, P=.22, and P=.02, respectively, for trend). Other nutrients in coffee, including niacin, were unrelated to PD incidence. The relationship between caffeine and PD was unaltered by intake of milk and sugar.\n\n\nCONCLUSIONS\nOur findings indicate that higher coffee and caffeine intake is associated with a significantly lower incidence of PD. This effect appears to be independent of smoking. The data suggest that the mechanism is related to caffeine intake and not to other nutrients contained in coffee. JAMA. 2000;283:2674-2679.",
"title": ""
},
{
"docid": "152ef51d5264a2e681acefcc536da7cf",
"text": "BACKGROUND AND PURPOSE\nHeart rate variability (HRV) as a measure of autonomic function might provide prognostic information in ischemic stroke. However, numerous difficulties are associated with HRV parameters assessment and interpretation, especially in short-term ECG recordings. For better understanding of derived HRV data and to avoid methodological bias we simultaneously recorded and analyzed heart rate, blood pressure and respiratory rate.\n\n\nMETHODS\nSeventy-five ischemic stroke patients underwent short-term ECG recordings. Linear and nonlinear parameters of HRV as well as beat-to-beat blood pressure and respiratory rate were assessed and compared in patients with different functional neurological outcomes at 7th and 90th days.\n\n\nRESULTS\nValues of Approximate, Sample and Fuzzy Entropy were significantly lower in patients with poor early neurological outcome. Patients with poor 90-day outcome had higher percentage of high frequency spectrum and normalized high frequency power, lower normalized low frequency power and lower low frequency/high frequency ratio. Low frequency/high frequency ratio correlated negatively with scores in the National Institutes of Health Stroke Scale and modified Rankin Scale (mRS) at the 7th and mRS at the 90th days. Mean RR interval, values of blood pressure as well as blood pressure variability did not differ between groups with good and poor outcomes. Respiratory frequency was significantly correlated with the functional neurological outcome at 7th and 90th days.\n\n\nCONCLUSION\nWhile HRV assessed by linear methods seems to have long-term prognostic value, complexity measures of HRV reflect the impact of the neurological state on distinct, temporary properties of heart rate dynamic. Respiratory rate during the first days of the stroke is associated with early and long-term neurological outcome and should be further investigated as a potential risk factor.",
"title": ""
},
{
"docid": "bf9e56e0e125e922de95381fb5520569",
"text": "Today, many private households as well as broadcasting or film companies own large collections of digital music plays. These are time series that differ from, e.g., weather reports or stocks market data. The task is normally that of classification, not prediction of the next value or recognizing a shape or motif. New methods for extracting features that allow to classify audio data have been developed. However, the development of appropriate feature extraction methods is a tedious effort, particularly because every new classification task requires tailoring the feature set anew. This paper presents a unifying framework for feature extraction from value series. Operators of this framework can be combined to feature extraction methods automatically, using a genetic programming approach. The construction of features is guided by the performance of the learning classifier which uses the features. Our approach to automatic feature extraction requires a balance between the completeness of the methods on one side and the tractability of searching for appropriate methods on the other side. In this paper, some theoretical considerations illustrate the trade-off. After the feature extraction, a second process learns a classifier from the transformed data. The practical use of the methods is shown by two types of experiments: classification of genres and classification according to user preferences.",
"title": ""
},
{
"docid": "a500afda393ad60ddd1bb39778655172",
"text": "The success and the failure of a data warehouse (DW) project are mainly related to the design phase according to most researchers in this domain. When analyzing the decision-making system requirements, many recurring problems appear and requirements modeling difficulties are detected. Also, we encounter the problem associated with the requirements expression by non-IT professionals and non-experts makers on design models. The ambiguity of the term of decision-making requirements leads to a misinterpretation of the requirements resulting from data warehouse design failure and incorrect OLAP analysis. Therefore, many studies have focused on the inclusion of vague data in information systems in general, but few studies have examined this case in data warehouses. This article describes one of the shortcomings of current approaches to data warehouse design which is the study of in the requirements inaccuracy expression and how ontologies can help us to overcome it. We present a survey on this topic showing that few works that take into account the imprecision in the study of this crucial phase in the decision-making process for the presentation of challenges and problems that arise and requires more attention by researchers to improve DW design. According to our knowledge, no rigorous study of vagueness in this area were made. Keywords— Data warehouses Design, requirements analysis, imprecision, ontology",
"title": ""
},
{
"docid": "a7b6a491d85ae94285808a21dbc65ce9",
"text": "In imbalanced learning, most standard classification algorithms usually fail to properly represent data distribution and provide unfavorable classification performance. More specifically, the decision rule of minority class is usually weaker than majority class, leading to many misclassification of expensive minority class data. Motivated by our previous work ADASYN [1], this paper presents a novel kernel based adaptive synthetic over-sampling approach, named KernelADASYN, for imbalanced data classification problems. The idea is to construct an adaptive over-sampling distribution to generate synthetic minority class data. The adaptive over-sampling distribution is first estimated with kernel density estimation methods and is further weighted by the difficulty level for different minority class data. The classification performance of our proposed adaptive over-sampling approach is evaluated on several real-life benchmarks, specifically on medical and healthcare applications. The experimental results show the competitive classification performance for many real-life imbalanced data classification problems.",
"title": ""
},
{
"docid": "fce58bfa94acf2b26a50f816353e6bf2",
"text": "The perspective directions in evaluating network security are simulating possible malefactor’s actions, building the representation of these actions as attack graphs (trees, nets), the subsequent checking of various properties of these graphs, and determining security metrics which can explain possible ways to increase security level. The paper suggests a new approach to security evaluation based on comprehensive simulation of malefactor’s actions, construction of attack graphs and computation of different security metrics. The approach is intended for using both at design and exploitation stages of computer networks. The implemented software system is described, and the examples of experiments for analysis of network security level are considered.",
"title": ""
},
{
"docid": "fe20c0bee35db1db85968b4d2793b83b",
"text": "The Smule Ocarina is a wind instrument designed for the iPhone, fully leveraging its wide array of technologies: microphone input (for breath input), multitouch (for fingering), accelerometer, real-time sound synthesis, highperformance graphics, GPS/location, and persistent data connection. In this mobile musical artifact, the interactions of the ancient flute-like instrument are both preserved and transformed via breath-control and multitouch finger-holes, while the onboard global positioning and persistent data connection provide the opportunity to create a new social experience, allowing the users of Ocarina to listen to one another. In this way, Ocarina is also a type of social instrument that enables a different, perhaps even magical, sense of global connectivity.",
"title": ""
},
{
"docid": "d95ae6900ae353fa0ed32167e0c23f16",
"text": "As well known, fully convolutional network (FCN) becomes the state of the art for semantic segmentation in deep learning. Currently, new hardware designs for deep learning have focused on improving the speed and parallelism of processing units. This motivates memristive solutions, in which the memory units (i.e., memristors) have computing capabilities. However, designing a memristive deep learning network is challenging, since memristors work very differently from the traditional CMOS hardware. This paper proposes a complete solution to implement memristive FCN (MFCN). Voltage selectors are firstly utilized to realize max-pooling layers with the detailed MFCN deconvolution hardware circuit by the massively parallel structure, which is effective since the deconvolution kernel and the input feature are similar in size. Then, deconvolution calculation is realized by converting the image into a column matrix and converting the deconvolution kernel into a sparse matrix. Meanwhile, the convolution realization in MFCN is also studied with the traditional sliding window method rather than the large matrix theory to overcome the shortcoming of low efficiency. Moreover, the conductance values of memristors are predetermined in Tensorflow with ex-situ training method. In other words, we train MFCN in software, then download the trained parameters to the simulink system by writing memristor. The effectiveness of the designed MFCN scheme is verified with improved accuracy over some existing machine learning methods. The proposed scheme is also adapt to LFW dataset with three-classification tasks. However, the MFCN training is time consuming as the computational burden is heavy with thousands of weight parameters with just six layers. In future, it is necessary to sparsify the weight parameters and layers of the MFCN network to speed up computing.",
"title": ""
},
{
"docid": "3df95e4b2b1bb3dc80785b25c289da92",
"text": "The problem of efficiently locating previously known patterns in a time series database (i.e., query by content) has received much attention and may now largely be regarded as a solved problem. However, from a knowledge discovery viewpoint, a more interesting problem is the enumeration of previously unknown, frequently occurring patterns. We call such patterns “motifs”, because of their close analogy to their discrete counterparts in computation biology. An efficient motif discovery algorithm for time series would be useful as a tool for summarizing and visualizing massive time series databases. In addition it could be used as a subroutine in various other data mining tasks, including the discovery of association rules, clustering and classification. In this work we carefully motivate, then introduce, a nontrivial definition of time series motifs. We propose an efficient algorithm to discover them, and we demonstrate the utility and efficiency of our approach on several real world datasets.",
"title": ""
},
{
"docid": "a8287a99def9fec3a9a2fda06a95e36e",
"text": "The abstraction of a process enables certain primitive forms of communication during process creation and destruction such as wait(). However, the operating system provides more general mechanisms for flexible inter-process communication. In this paper, we have studied and evaluated three commonly-used inter-process communication devices pipes, sockets and shared memory. We have identified the various factors that could affect their performance such as message size, hardware caches and process scheduling, and constructed experiments to reliably measure the latency and transfer rate of each device. We identified the most reliable timer APIs available for our measurements. Our experiments reveal that shared memory provides the lowest latency and highest throughput, followed by kernel pipes and lastly, TCP/IP sockets. However, the latency trends provide interesting insights into the construction of each mechanism. We also make certain observations on the pros and cons of each mechanism, highlighting its usefulness for different kinds of applications.",
"title": ""
},
{
"docid": "2e35483beb568ab514601ba21d70c2d3",
"text": "Determining the intended sense of words in text – word sense disambiguation (WSD) – is a long-standing problem in natural language processing. In this paper, we present WSD algorithms which use neural network language models to achieve state-of-the-art precision. Each of these methods learns to disambiguate word senses using only a set of word senses, a few example sentences for each sense taken from a licensed lexicon, and a large unlabeled text corpus. We classify based on cosine similarity of vectors derived from the contexts in unlabeled query and labeled example sentences. We demonstrate state-of-the-art results when using the WordNet sense inventory, and significantly better than baseline performance using the New Oxford American Dictionary inventory. The best performance was achieved by combining an LSTM language model with graph label propagation.",
"title": ""
},
{
"docid": "c180a56ae8ab74cd6a77f9f47ee76544",
"text": "Existing graph-based ranking methods for keyphrase extraction compute a single importance score for each word via a single random walk. Motivated by the fact that both documents and words can be represented by a mixture of semantic topics, we propose to decompose traditional random walk into multiple random walks specific to various topics. We thus build a Topical PageRank (TPR) on word graph to measure word importance with respect to different topics. After that, given the topic distribution of the document, we further calculate the ranking scores of words and extract the top ranked ones as keyphrases. Experimental results show that TPR outperforms state-of-the-art keyphrase extraction methods on two datasets under various evaluation metrics.",
"title": ""
},
{
"docid": "af0dfe672a8828587e3b27ef473ea98e",
"text": "Machine comprehension of text is the overarching goal of a great deal of research in natural language processing. The Machine Comprehension Test (Richardson et al., 2013) was recently proposed to assess methods on an open-domain, extensible, and easy-to-evaluate task consisting of two datasets. In this paper we develop a lexical matching method that takes into account multiple context windows, question types and coreference resolution. We show that the proposed method outperforms the baseline of Richardson et al. (2013), and despite its relative simplicity, is comparable to recent work using machine learning. We hope that our approach will inform future work on this task. Furthermore, we argue that MC500 is harder than MC160 due to the way question answer pairs were created.",
"title": ""
},
{
"docid": "2c25a1333dc94bf98c74b693997e2793",
"text": "In recent years, HCI has shown a rising interest in the creative practices associated with massive online communities, including crafters, hackers, DIY, and other expert amateurs. One strategy for researching creativity at this scale is through an analysis of a community's outputs, including its creative works, custom created tools, and emergent practices. In this paper, we offer one such case study, a historical account of World of Warcraft (WoW) machinima (i.e., videos produced inside of video games), which shows how the aesthetic needs and requirements of video making community coevolved with the community-made creativity support tools in use at the time. We view this process as inhabiting different layers and practices of appropriation, and through an analysis of them, we trace the ways that support for emerging stylistic conventions become built into creativity support tools over time.",
"title": ""
},
{
"docid": "e0d553cc4ca27ce67116c62c49c53d23",
"text": "We estimate a vehicle's speed, its wheelbase length, and tire track length by jointly estimating its acoustic wave pattern with a single passive acoustic sensor that records the vehicle's drive-by noise. The acoustic wave pattern is determined using the vehicle's speed, the Doppler shift factor, the sensor's distance to the vehicle's closest-point-of-approach, and three envelope shape (ES) components, which approximate the shape variations of the received signal's power envelope. We incorporate the parameters of the ES components along with estimates of the vehicle engine RPM, the number of cylinders, and the vehicle's initial bearing, loudness and speed to form a vehicle profile vector. This vector provides a fingerprint that can be used for vehicle identification and classification. We also provide possible reasons why some of the existing methods are unable to provide unbiased vehicle speed estimates using the same framework. The approach is illustrated using vehicle speed estimation and classification results obtained with field data.",
"title": ""
},
{
"docid": "ea2d97e8bde8e21b8291c370ce5815bf",
"text": "Can the cell's perception of time be expressed through the length of the shortest telomere? To address this question, we analyze an asymmetric random walk that models telomere length for each division that can decrease by a fixed length a or, if recognized by a polymerase, it increases by a fixed length b ≫ a. Our analysis of the model reveals two phases, first, a determinist drift of the length toward a quasi-equilibrium state, and second, persistence of the length near an attracting state for the majority of divisions. The measure of stability of the latter phase is the expected number of divisions at the attractor (\"lifetime\") prior to crossing a threshold T that model senescence. Using numerical simulations, we further study the distribution of times for the shortest telomere to reach the threshold T. We conclude that the telomerase regulates telomere stability by creating an effective potential barrier that separates statistically the arrival time of the shortest from the next shortest to T. The present model explains how random telomere dynamics underlies the extension of cell survival time.",
"title": ""
},
{
"docid": "46209913057e33c17d38a565e50097a3",
"text": "Power-on reset circuits are available as discrete devices as well as on-chip solutions and are indispensable to initialize some critical nodes of analog and digital designs during power-on. In this paper, we present a power-on reset circuit specifically designed for on-chip applications. The mentioned POR circuit should meet certain design requirements necessary to be integrated on-chip, some of them being area-efficiency, power-efficiency, supply rise-time insensitivity and ambient temperature insensitivity. The circuit is implemented within a small area (60mum times 35mum) using the 2.5V tolerant MOSFETs of a 0.28mum CMOS technology. It has a maximum quiescent current consumption of 40muA and works over infinite range of supply rise-times and ambient temperature range of -40degC to 150degC",
"title": ""
},
{
"docid": "91446020934f6892a3a4807f5a7b3829",
"text": "Collaborative filtering recommends items to a user based on the interests of other users having similar preferences. However, high dimensional, sparse data result in poor performance in collaborative filtering. This paper introduces an approach called multiple metadata-based collaborative filtering (MMCF), which utilizes meta-level information to alleviate this problem, e.g., metadata such as genre, director, and actor in the case of movie recommendation. MMCF builds a k-partite graph of users, movies and multiple metadata, and extracts implicit relationships among the metadata and between users and the metadata. Then the implicit relationships are propagated further by applying random walk process in order to alleviate the problem of sparseness in the original data set. The experimental results show substantial improvement over previous approaches on the real Netflix movie dataset.",
"title": ""
},
{
"docid": "ee9cb495280dc6e252db80c23f2f8c2b",
"text": "Due to the dramatical increase in popularity of mobile devices in the last decade, more sensitive user information is stored and accessed on these devices everyday. However, most existing technologies for user authentication only cover the login stage or only work in restricted controlled environments or GUIs in the post login stage. In this work, we present TIPS, a Touch based Identity Protection Service that implicitly and unobtrusively authenticates users in the background by continuously analyzing touch screen gestures in the context of a running application. To the best of our knowledge, this is the first work to incorporate contextual app information to improve user authentication. We evaluate TIPS over data collected from 23 phone owners and deployed it to 13 of them with 100 guest users. TIPS can achieve over 90% accuracy in real-life naturalistic conditions within a small amount of computational overhead and 6% of battery usage.",
"title": ""
},
{
"docid": "d78acb79ccd229af7529dae1408dea6a",
"text": "Making recommendations by learning to rank is becoming an increasingly studied area. Approaches that use stochastic gradient descent scale well to large collaborative filtering datasets, and it has been shown how to approximately optimize the mean rank, or more recently the top of the ranked list. In this work we present a family of loss functions, the k-order statistic loss, that includes these previous approaches as special cases, and also derives new ones that we show to be useful. In particular, we present (i) a new variant that more accurately optimizes precision at k, and (ii) a novel procedure of optimizing the mean maximum rank, which we hypothesize is useful to more accurately cover all of the user's tastes. The general approach works by sampling N positive items, ordering them by the score assigned by the model, and then weighting the example as a function of this ordered set. Our approach is studied in two real-world systems, Google Music and YouTube video recommendations, where we obtain improvements for computable metrics, and in the YouTube case, increased user click through and watch duration when deployed live on www.youtube.com.",
"title": ""
}
] | scidocsrr |
19e407b8d995f901f24f776c36cc6bf9 | Image quality quantification for fingerprints using quality-impairment assessment | [
{
"docid": "c1b79f29ce23b2d0ba97928831302e18",
"text": "Quality assessment of biometric fingerprint images is necessary to ensure high biometric performance in biometric recognition systems. We relate the quality of a fingerprint sample to the biometric performance to ensure an objective and performance oriented benchmark. The proposed quality metric is based on Gabor filter responses and is evaluated against eight contemporary quality estimation methods on four datasets using sample utility derived from the separation of genuine and imposter distributions as benchmark. The proposed metric shows performance and consistency approaching that of the composite NFIQ quality assessment algorithm and is thus a candidate for inclusion in a feature vector introducing the NFIQ 2.0 metric.",
"title": ""
},
{
"docid": "1a9be0a664da314c143ca430bd6f4502",
"text": "Fingerprint image quality is an important factor in the perf ormance of Automatic Fingerprint Identification Systems(AFIS). It is used to evaluate the system performance, assess enrollment acceptability, and evaluate fingerprint sensors. This paper presents a novel methodology for fingerp rint image quality measurement. We propose limited ring-wedge spectral measu r to estimate the global fingerprint image features, and inhomogeneity with d rectional contrast to estimate local fingerprint image features. Experimental re sults demonstrate the effectiveness of our proposal.",
"title": ""
}
] | [
{
"docid": "32417703b8291a5cdcc3c9eaabbdb99c",
"text": "Purpose – The aim of this paper is to identify the quality determinants for education services provided by higher education institutions (HEIs) in Greece and to measure their relative importance from the students’ points of view. Design/mthodology/approach – A multi-criteria decision-making methodology was used for assessing the relative importance of quality determinants that affect student satisfaction. More specifically, the analytical hierarchical process (AHP) was used in order to measure the relative weight of each quality factor. Findings – The relative weights of the factors that contribute to the quality of educational services as it is perceived by students was measured. Research limitations/implications – The research is based on the questionnaire of the Hellenic Quality Assurance Agency for Higher Education. This implies that the measured weights are related mainly to questions posed in this questionnaire. However, the applied method (AHP) can be used to assess different quality determinants. Practical implications – The outcome of this study can be used in order to quantify internal quality assessment of HEIs. More specifically, the outcome can be directly used by HEIs for assessing quality as perceived by students. Originality/value – The paper attempts to develop insights into comparative evaluations of quality determinants as they are perceived by students.",
"title": ""
},
{
"docid": "f8b24b0e8b440643a5fb49166cbbd96b",
"text": "A Proportional-Integral (PI) based Maximum Power Point Tracking (MPPT) control algorithm is proposed in this study where it is applied to a Buck-Boost converter. It is aimed to combine regular PI control and MPPT technique to enhance the generated power from photovoltaic PV) panels. The perturb and observe (P&O) technique is used as the MPPT control algorithm. The study proposes to reduce converter output oscillation owing to implemented MPPT control technique with additional PI observer. Furthermore aims to optimize output power using PI voltage mode closed-loop structure.",
"title": ""
},
{
"docid": "47b4b22cee9d5693c16be296afe61982",
"text": "In this work we introduce a fully end-to-end approach for action detection in videos that learns to directly predict the temporal bounds of actions. Our intuition is that the process of detecting actions is naturally one of observation and refinement: observing moments in video, and refining hypotheses about when an action is occurring. Based on this insight, we formulate our model as a recurrent neural network-based agent that interacts with a video over time. The agent observes video frames and decides both where to look next and when to emit a prediction. Since backpropagation is not adequate in this non-differentiable setting, we use REINFORCE to learn the agent's decision policy. Our model achieves state-of-the-art results on the THUMOS'14 and ActivityNet datasets while observing only a fraction (2% or less) of the video frames.",
"title": ""
},
{
"docid": "e33b3ebfc46c371253cf7f68adbbe074",
"text": "Although backward folding of the epiglottis is one of the signal events of the mammalian adult swallow, the epiglottis does not fold during the infant swallow. How this functional change occurs is unknown, but we hypothesize that a change in swallow mechanism occurs with maturation, prior to weaning. Using videofluoroscopy, we found three characteristic patterns of swallowing movement at different ages in the pig: an infant swallow, a transitional swallow and a post-weaning (juvenile or adult) swallow. In animals of all ages, the dorsal region of the epiglottis and larynx was held in an intranarial position by a muscular sphincter formed by the palatopharyngeal arch. In the infant swallow, increasing pressure in the oropharynx forced a liquid bolus through the piriform recesses on either side of a relatively stationary epiglottis into the esophagus. As the infant matured, the palatopharyngeal arch and the soft palate elevated at the beginning of the swallow, so exposing a larger area of the epiglottis to bolus pressure. In transitional swallows, the epiglottis was tilted backward relatively slowly by a combination of bolus pressure and squeezing of the epiglottis by closure of the palatopharyngeal sphincter. The bolus, however, traveled alongside but never over the tip of the epiglottis. In the juvenile swallow, the bolus always passed over the tip of the epiglottis. The tilting of the epiglottis resulted from several factors, including the action of the palatopharyngeal sphincter, higher bolus pressure exerted on the epiglottis and the allometry of increased size. In both transitional and juvenile swallows, the subsequent relaxation of the palatopharyngeal sphincter released the epiglottis, which sprang back to its original intranarial position.",
"title": ""
},
{
"docid": "d1f771fd1b0f8e5d91bbf65bc19aeb54",
"text": "Web-based systems are often a composition of infrastructure components, such as web servers and databases, and of applicationspecific code, such as HTML-embedded scripts and server-side applications. While the infrastructure components are usually developed by experienced programmers with solid security skills, the application-specific code is often developed under strict time constraints by programmers with little security training. As a result, vulnerable web-applications are deployed and made available to the Internet at large, creating easilyexploitable entry points for the compromise of entire networks. Web-based applications often rely on back-end database servers to manage application-specific persistent state. The data is usually extracted by performing queries that are assembled using input provided by the users of the applications. If user input is not sanitized correctly, it is possible to mount a variety of attacks that leverage web-based applications to compromise the security of back-end databases. Unfortunately, it is not always possible to identify these attacks using signature-based intrusion detection systems, because of the ad hoc nature of many web-based applications. Signatures are rarely written for this class of applications due to the substantial investment of time and expertise this would require. We have developed an anomaly-based system that learns the profiles of the normal database access performed by web-based applications using a number of different models. These models allow for the detection of unknown attacks with reduced false positives and limited overhead. In addition, our solution represents an improvement with respect to previous approaches because it reduces the possibility of executing SQL-based mimicry attacks.",
"title": ""
},
{
"docid": "505a9b6139e8cbf759652dc81f989de9",
"text": "SQL injection attacks, a class of injection flaw in which specially crafted input strings leads to illegal queries to databases, are one of the topmost threats to web applications. A Number of research prototypes and commercial products that maintain the queries structure in web applications have been developed. But these techniques either fail to address the full scope of the problem or have limitations. Based on our observation that the injected string in a SQL injection attack is interpreted differently on different databases. A characteristic diagnostic feature of SQL injection attacks is that they change the intended structure of queries issued. Pattern matching is a technique that can be used to identify or detect any anomaly packet from a sequential action. Injection attack is a method that can inject any kind of malicious string or anomaly string on the original string. Most of the pattern based techniques are used static analysis and patterns are generated from the attacked statements. In this paper, we proposed a detection and prevention technique for preventing SQL Injection Attack (SQLIA) using Aho–Corasick pattern matching algorithm. In this paper, we proposed an overview of the architecture. In the initial stage evaluation, we consider some sample of standard attack patterns and it shows that the proposed algorithm is works well against the SQL Injection Attack. Keywords—SQL Injection Attack; Pattern matching; Static Pattern; Dynamic Pattern",
"title": ""
},
{
"docid": "e1d635202eb482e49ff736fd37d161ac",
"text": "Can people feel worse off as the options they face increase? The present studies suggest that some people--maximizers--can. Study 1 reported a Maximization Scale, which measures individual differences in desire to maximize. Seven samples revealed negative correlations between maximization and happiness, optimism, self-esteem, and life satisfaction, and positive correlations between maximization and depression, perfectionism, and regret. Study 2 found maximizers less satisfied than nonmaximizers (satisficers) with consumer decisions, and more likely to engage in social comparison. Study 3 found maximizers more adversely affected by upward social comparison. Study 4 found maximizers more sensitive to regret and less satisfied in an ultimatum bargaining game. The interaction between maximizing and choice is discussed in terms of regret, adaptation, and self-blame.",
"title": ""
},
{
"docid": "48036770f56e84df8b05c198e8a89018",
"text": "Advances in low power VLSI design, along with the potentially low duty cycle of wireless sensor nodes open up the possibility of powering small wireless computing devices from scavenged ambient power. A broad review of potential power scavenging technologies and conventional energy sources is first presented. Low-level vibrations occurring in common household and office environments as a potential power source are studied in depth. The goal of this paper is not to suggest that the conversion of vibrations is the best or most versatile method to scavenge ambient power, but to study its potential as a viable power source for applications where vibrations are present. Different conversion mechanisms are investigated and evaluated leading to specific optimized designs for both capacitive MicroElectroMechancial Systems (MEMS) and piezoelectric converters. Simulations show that the potential power density from piezoelectric conversion is significantly higher. Experiments using an off-the-shelf PZT piezoelectric bimorph verify the accuracy of the models for piezoelectric converters. A power density of 70 mW/cm has been demonstrated with the PZT bimorph. Simulations show that an optimized design would be capable of 250 mW/cm from a vibration source with an acceleration amplitude of 2.5 m/s at 120 Hz. q 2002 Elsevier Science B.V.. All rights reserved.",
"title": ""
},
{
"docid": "4acfb49be406de472af9080d3cdc6fa4",
"text": "Evolution provides a creative fount of complex and subtle adaptations that often surprise the scientists who discover them. However, the creativity of evolution is not limited to the natural world: artificial organisms evolving in computational environments have also elicited surprise and wonder from the researchers studying them. The process of evolution is an algorithmic process that transcends the substrate in which it occurs. Indeed, many researchers in the field of digital evolution can provide examples of how their evolving algorithms and organisms have creatively subverted their expectations or intentions, exposed unrecognized bugs in their code, produced unexpectedly adaptations, or engaged in behaviors and outcomes uncannily convergent with ones found in nature. Such stories routinely reveal surprise and creativity by evolution in these digital worlds, but they rarely fit into the standard scientific narrative. Instead they are often treated as mere obstacles to be overcome, rather than results that warrant study in their own right. Bugs are fixed, experiments are refocused, and one-off surprises are collapsed into a single data point. The stories themselves are traded among researchers through oral tradition, but that mode of information transmission is inefficient and prone to error and outright loss. Moreover, the fact that these stories tend to be shared only among practitioners means that many natural scientists do not realize how interesting and lifelike digital organisms are and how natural their evolution can be. To our knowledge, no collection of such anecdotes has been published before. This paper is the crowd-sourced product of researchers in the fields of artificial life and evolutionary computation who have provided first-hand accounts of such cases. It thus serves as a written, fact-checked collection of scientifically important and even entertaining stories. In doing so we also present here substantial evidence that the existence and importance of evolutionary surprises extends beyond the natural world, and may indeed be a universal property of all complex evolving systems.",
"title": ""
},
{
"docid": "059b8861a00bb0246a07fa339b565079",
"text": "Recognizing facial action units (AUs) from spontaneous facial expressions is still a challenging problem. Most recently, CNNs have shown promise on facial AU recognition. However, the learned CNNs are often overfitted and do not generalize well to unseen subjects due to limited AU-coded training images. We proposed a novel Incremental Boosting CNN (IB-CNN) to integrate boosting into the CNN via an incremental boosting layer that selects discriminative neurons from the lower layer and is incrementally updated on successive mini-batches. In addition, a novel loss function that accounts for errors from both the incremental boosted classifier and individual weak classifiers was proposed to fine-tune the IB-CNN. Experimental results on four benchmark AU databases have demonstrated that the IB-CNN yields significant improvement over the traditional CNN and the boosting CNN without incremental learning, as well as outperforming the state-of-the-art CNN-based methods in AU recognition. The improvement is more impressive for the AUs that have the lowest frequencies in the databases.",
"title": ""
},
{
"docid": "17321e451d7441c8a434c637237370a2",
"text": "In recent years, there are increasing interests in using path identifiers (<inline-formula> <tex-math notation=\"LaTeX\">$\\it PIDs$ </tex-math></inline-formula>) as inter-domain routing objects. However, the <inline-formula> <tex-math notation=\"LaTeX\">$\\it PIDs$ </tex-math></inline-formula> used in existing approaches are static, which makes it easy for attackers to launch the distributed denial-of-service (DDoS) flooding attacks. To address this issue, in this paper, we present the design, implementation, and evaluation of dynamic PID (D-PID), a framework that uses <inline-formula> <tex-math notation=\"LaTeX\">$\\it PIDs$ </tex-math></inline-formula> negotiated between the neighboring domains as inter-domain routing objects. In D-PID, the <inline-formula> <tex-math notation=\"LaTeX\">$\\it PID$ </tex-math></inline-formula> of an inter-domain path connecting the two domains is kept secret and changes dynamically. We describe in detail how neighboring domains negotiate <inline-formula> <tex-math notation=\"LaTeX\">$\\it PIDs$ </tex-math></inline-formula> and how to maintain ongoing communications when <inline-formula> <tex-math notation=\"LaTeX\">$\\it PIDs$ </tex-math></inline-formula> change. We build a 42-node prototype comprised of six domains to verify D-PID’s feasibility and conduct extensive simulations to evaluate its effectiveness and cost. The results from both simulations and experiments show that D-PID can effectively prevent DDoS attacks.",
"title": ""
},
{
"docid": "0ba15705fcd12cb3efa17a6878c43606",
"text": "Voice has become an increasingly popular User Interaction (UI) channel, mainly contributing to the current trend of wearables, smart vehicles, and home automation systems. Voice assistants such as Alexa, Siri, and Google Now, have become our everyday fixtures, especially when/where touch interfaces are inconvenient or even dangerous to use, such as driving or exercising. The open nature of the voice channel makes voice assistants difficult to secure, and hence exposed to various threats as demonstrated by security researchers. To defend against these threats, we present VAuth, the first system that provides continuous authentication for voice assistants. VAuth is designed to fit in widely-adopted wearable devices, such as eyeglasses, earphones/buds and necklaces, where it collects the body-surface vibrations of the user and matches it with the speech signal received by the voice assistant's microphone. VAuth guarantees the voice assistant to execute only the commands that originate from the voice of the owner. We have evaluated VAuth with 18 users and 30 voice commands and find it to achieve 97% detection accuracy and less than 0.1% false positive rate, regardless of VAuth's position on the body and the user's language, accent or mobility. VAuth successfully thwarts various practical attacks, such as replay attacks, mangled voice attacks, or impersonation attacks. It also incurs low energy and latency overheads and is compatible with most voice assistants.",
"title": ""
},
{
"docid": "38715a7ba5efc87b47491d9ced8c8a31",
"text": "We propose a new method for fusing a LIDAR point cloud and camera-captured images in the deep convolutional neural network (CNN). The proposed method constructs a new layer called non-homogeneous pooling layer to transform features between bird view map and front view map. The sparse LIDAR point cloud is used to construct the mapping between the two maps. The pooling layer allows efficient fusion of the bird view and front view features at any stage of the network. This is favorable for the 3D-object detection using camera-LIDAR fusion in autonomous driving scenarios. A corresponding deep CNN is designed and tested on the KITTI[1] bird view object detection dataset, which produces 3D bounding boxes from the bird view map. The fusion method shows particular benefit for detection of pedestrians in the bird view compared to other fusion-based object detection networks.",
"title": ""
},
{
"docid": "2caf8a90640a98f3690785b6dd641e08",
"text": "This paper presents a simple, novel, yet very powerful approach for robust rotation-invariant texture classification based on random projection. The proposed sorted random projection maintains the strengths of random projection, in being computationally efficient and low-dimensional, with the addition of a straightforward sorting step to introduce rotation invariance. At the feature extraction stage, a small set of random measurements is extracted from sorted pixels or sorted pixel differences in local image patches. The rotation invariant random features are embedded into a bag-of-words model to perform texture classification, allowing us to achieve global rotation invariance. The proposed unconventional and novel random features are very robust, yet by leveraging the sparse nature of texture images, our approach outperforms traditional feature extraction methods which involve careful design and complex steps. We report extensive experiments comparing the proposed method to six state-of-the-art methods, RP, Patch, LBP, WMFS and the methods of Lazebnik et al. and Zhang et al., in texture classification on five databases: CUReT, Brodatz, UIUC, UMD and KTH-TIPS. Our approach leads to significant improvements in classification accuracy, producing consistently good results on each database, including what we believe to be the best reported results for Brodatz, UMD and KTH-TIPS. & 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "07153810148e93a0bc0b62a6de77594c",
"text": "Six healthy young male volunteers at a contract research organization were enrolled in the first phase 1 clinical trial of TGN1412, a novel superagonist anti-CD28 monoclonal antibody that directly stimulates T cells. Within 90 minutes after receiving a single intravenous dose of the drug, all six volunteers had a systemic inflammatory response characterized by a rapid induction of proinflammatory cytokines and accompanied by headache, myalgias, nausea, diarrhea, erythema, vasodilatation, and hypotension. Within 12 to 16 hours after infusion, they became critically ill, with pulmonary infiltrates and lung injury, renal failure, and disseminated intravascular coagulation. Severe and unexpected depletion of lymphocytes and monocytes occurred within 24 hours after infusion. All six patients were transferred to the care of the authors at an intensive care unit at a public hospital, where they received intensive cardiopulmonary support (including dialysis), high-dose methylprednisolone, and an anti-interleukin-2 receptor antagonist antibody. Prolonged cardiovascular shock and acute respiratory distress syndrome developed in two patients, who required intensive organ support for 8 and 16 days. Despite evidence of the multiple cytokine-release syndrome, all six patients survived. Documentation of the clinical course occurring over the 30 days after infusion offers insight into the systemic inflammatory response syndrome in the absence of contaminating pathogens, endotoxin, or underlying disease.",
"title": ""
},
{
"docid": "af691c2ca5d9fd1ca5109c8b2e7e7b6d",
"text": "As social robots become more widely used as educational tutoring agents, it is important to study how children interact with these systems, and how effective they are as assessed by learning gains, sustained engagement, and perceptions of the robot tutoring system as a whole. In this paper, we summarize our prior work involving a long-term child-robot interaction study and outline important lessons learned regarding individual differences in children. We then discuss how these lessons inform future research in child-robot interaction.",
"title": ""
},
{
"docid": "41c5dbb3e903c007ba4b8f37d40b06ef",
"text": "BACKGROUND\nMyocardial infarction (MI) can directly cause ischemic mitral regurgitation (IMR), which has been touted as an indicator of poor prognosis in acute and early phases after MI. However, in the chronic post-MI phase, prognostic implications of IMR presence and degree are poorly defined.\n\n\nMETHODS AND RESULTS\nWe analyzed 303 patients with previous (>16 days) Q-wave MI by ECG who underwent transthoracic echocardiography: 194 with IMR quantitatively assessed in routine practice and 109 without IMR matched for baseline age (71+/-11 versus 70+/-9 years, P=0.20), sex, and ejection fraction (EF, 33+/-14% versus 34+/-11%, P=0.14). In IMR patients, regurgitant volume (RVol) and effective regurgitant orifice (ERO) area were 36+/-24 mL/beat and 21+/-12 mm(2), respectively. After 5 years, total mortality and cardiac mortality for patients with IMR (62+/-5% and 50+/-6%, respectively) were higher than for those without IMR (39+/-6% and 30+/-5%, respectively) (both P<0.001). In multivariate analysis, independently of all baseline characteristics, particularly age and EF, the adjusted relative risks of total and cardiac mortality associated with the presence of IMR (1.88, P=0.003 and 1.83, P=0.014, respectively) and quantified degree of IMR defined by RVol >/=30 mL (2.05, P=0.002 and 2.01, P=0.009) and by ERO >/=20 mm(2) (2.23, P=0.003 and 2.38, P=0.004) were high.\n\n\nCONCLUSIONS\nIn the chronic phase after MI, IMR presence is associated with excess mortality independently of baseline characteristics and degree of ventricular dysfunction. The mortality risk is related directly to the degree of IMR as defined by ERO and RVol. Therefore, IMR detection and quantification provide major information for risk stratification and clinical decision making in the chronic post-MI phase.",
"title": ""
},
{
"docid": "4e5d46d9bb7b9edbc4fc6a42b6314703",
"text": "Positive body image among adults is related to numerous indicators of well-being. However, no research has explored body appreciation among children. To facilitate our understanding of children’s positive body image, the current study adapts and validates the Body Appreciation Scale-2 (BAS-2; Tylka & WoodBarcalow, 2015a) for use with children. Three hundred and forty-four children (54.4% girls) aged 9–11 completed the adapted Body Appreciation Scale-2 for Children (BAS-2C) alongside measures of body esteem, media influence, body surveillance, mood, and dieting. A sub-sample of 154 participants (62.3% girls) completed the questionnaire 6-weeks later to examine stability (test-retest) reliability. The BAS-2C",
"title": ""
},
{
"docid": "35f8b54ee1fbf153cb483fc4639102a5",
"text": "This research studies the risk prediction of hospital readmissions using metaheuristic and data mining approaches. This is a critical issue in the U.S. healthcare system because a large percentage of preventable hospital readmissions derive from a low quality of care during patients’ stays in the hospital as well as poor arrangement of the discharge process. To reduce the number of hospital readmissions, the Centers for Medicare and Medicaid Services has launched a readmission penalty program in which hospitals receive reduced reimbursement for high readmission rates for Medicare beneficiaries. In the current practice, patient readmission risk is widely assessed by evaluating a LACE score including length of stay (L), acuity level of admission (A), comorbidity condition (C), and use of emergency rooms (E). However, the LACE threshold classifying highand low-risk readmitted patients is set up by clinic practitioners based on specific circumstances and experiences. This research proposed various data mining approaches to identify the risk group of a particular patient, including neural network model, random forest (RF) algorithm, and the hybrid model of swarm intelligence heuristic and support vector machine (SVM). The proposed neural network algorithm, the RF and the SVM classifiers are used to model patients’ characteristics, such as their ages, insurance payers, medication risks, etc. Experiments are conducted to compare the performance of the proposed models with previous research. Experimental results indicate that the proposed prediction SVM model with particle swarm parameter tuning outperforms other algorithms and achieves 78.4% on overall prediction accuracy, 97.3% on sensitivity. The high sensitivity shows its strength in correctly identifying readmitted patients. The outcome of this research will help reduce overall hospital readmission rates and allow hospitals to utilize their resources more efficiently to enhance interventions for high-risk patients. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "86e0c7b70de40fcd5179bf3ab67bc3a4",
"text": "The development of a scale to assess drug and other treatment effects on severely mentally retarded individuals was described. In the first stage of the project, an initial scale encompassing a large number of behavior problems was used to rate 418 residents. The scale was then reduced to an intermediate version, and in the second stage, 509 moderately to profoundly retarded individuals were rated. Separate factor analyses of the data from the two samples resulted in a five-factor scale comprising 58 items. The factors of the Aberrant Behavior Checklist have been labeled as follows: (I) Irritability, Agitation, Crying; (II) Lethargy, Social Withdrawal; (III) Stereotypic Behavior; (IV) Hyperactivity, Noncompliance; and (V) Inappropriate Speech. Average subscale scores were presented for the instrument, and the results were compared with empirically derived rating scales of childhood psychopathology and with factor analytic work in the field of mental retardation.",
"title": ""
}
] | scidocsrr |
a16f0041754899e1f6101f7b8a5d82a6 | Agile Software Development Methodologies and Practices | [
{
"docid": "2e9b2eccefe56b9cbf8d5793cc3f1cbb",
"text": "This paper summarizes several classes of software cost estimation models and techniques: parametric models, expertise-based techniques, learning-oriented techniques, dynamics-based models, regression-based models, and composite-Bayesian techniques for integrating expertisebased and regression-based models. Experience to date indicates that neural-net and dynamics-based techniques are less mature than the other classes of techniques, but that all classes of techniques are challenged by the rapid pace of change in software technology. The primary conclusion is that no single technique is best for all situations, and that a careful comparison of the results of several approaches is most likely to produce realistic estimates.",
"title": ""
}
] | [
{
"docid": "19f4100f2e1d5655edca03a269adf79a",
"text": "OBJECTIVES\nTo assess the influence of conventional glass ionomer cement (GIC) vs resin-modified GIC (RMGIC) as a base material for novel, super-closed sandwich restorations (SCSR) and its effect on shrinkage-induced crack propensity and in vitro accelerated fatigue resistance.\n\n\nMETHODS\nA standardized MOD slottype tooth preparation was applied to 30 extracted maxillary molars (5 mm depth/5 mm buccolingual width). A modified sandwich restoration was used, in which the enamel/dentin bonding agent was applied first (Optibond FL, Kerr), followed by a Ketac Molar (3M ESPE)(group KM, n = 15) or Fuji II LC (GC) (group FJ, n = 15) base, leaving 2 mm for composite resin material (Miris 2, Coltène-Whaledent). Shrinkageinduced enamel cracks were tracked with photography and transillumination. Samples were loaded until fracture or to a maximum of 185,000 cycles under isometric chewing (5 H z), starting with a load of 200 N (5,000 X), followed by stages of 400, 600, 800, 1,000, 1,200, and 1,400 N at a maximum of 30,000 X each. Groups were compared using the life table survival analysis (α = .008, Bonferroni method).\n\n\nRESULTS\nGroup FJ showed the highest survival rate (40% intact specimens) but did not differ from group KM (20%) or traditional direct restorations (13%, previous data). SCSR generated less shrinkage-induced cracks. Most failures were re-restorable (above the cementoenamel junction [CEJ]).\n\n\nCONCLUSIONS\nInclusion of GIC/RMGIC bases under large direct SCSRs does not affect their fatigue strength but tends to decrease the shrinkage-induced crack propensity.\n\n\nCLINICAL SIGNIFICANCE\nThe use of GIC/ RMGIC bases and the SCSR is an easy way to minimize polymerization shrinkage stress in large MOD defects without weakening the restoration.",
"title": ""
},
{
"docid": "4cb25adf48328e1e9d871940a97fdff2",
"text": "This article is concerned with parameters identification problems and computer modeling of thrust generation subsystem for small unmanned aerial vehicles (UAV) quadrotor type. In this paper approach for computer model generation of dynamic process of thrust generation subsystem that consists of fixed pitch propeller, EC motor and power amplifier, is considered. Due to the fact that obtainment of aerodynamic characteristics of propeller via analytical approach is quite time-consuming, and taking into account that subsystem consists of as well as propeller, motor and power converter with microcontroller control system, which operating algorithm is not always available from manufacturer, receiving trusted computer model of thrust generation subsystem via analytical approach is impossible. Identification of the system under investigation is performed from the perspective of “black box” with the known qualitative description of proceeded there dynamic processes. For parameters identification of subsystem special laboratory rig that described in this paper was designed.",
"title": ""
},
{
"docid": "88804c0fb16e507007983108811950dc",
"text": "We propose a neural probabilistic structured-prediction method for transition-based natural language processing, which integrates beam search and contrastive learning. The method uses a global optimization model, which can leverage arbitrary features over nonlocal context. Beam search is used for efficient heuristic decoding, and contrastive learning is performed for adjusting the model according to search errors. When evaluated on both chunking and dependency parsing tasks, the proposed method achieves significant accuracy improvements over the locally normalized greedy baseline on the two tasks, respectively.",
"title": ""
},
{
"docid": "0513ce3971cb0e438598ea6766be19ff",
"text": "This paper proposes two interference mitigation strategies that adjust the maximum transmit power of femtocell users to suppress the cross-tier interference at a macrocell base station (BS). The open-loop and the closed-loop control suppress the cross-tier interference less than a fixed threshold and an adaptive threshold based on the noise and interference (NI) level at the macrocell BS, respectively. Simulation results show that both schemes effectively compensate the uplink throughput degradation of the macrocell BS due to the cross-tier interference and that the closed-loop control provides better femtocell throughput than the open-loop control at a minimal cost of macrocell throughput.",
"title": ""
},
{
"docid": "5e5e2d038ae29b4c79c79abe3d20ae40",
"text": "Article history: Received 28 February 2013 Accepted 26 July 2013 Available online 11 October 2013 Fault diagnosis of Discrete Event Systems has become an active research area in recent years. The research activity in this area is driven by the needs of many different application domains such as manufacturing, process control, control systems, transportation, communication networks, software engineering, and others. The aim of this paper is to review the state-of the art of methods and techniques for fault diagnosis of Discrete Event Systems based on models that include faulty behaviour. Theoretical and practical issues related to model description tools, diagnosis processing structure, sensor selection, fault representation and inference are discussed. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d3f43eef5e36eb7b078b010482bdb115",
"text": "This study is aimed at constructing a correlative model between Internet addiction and mobile phone addiction; the aim is to analyse the correlation (if any) between the two traits and to discuss the influence confirming that the gender has difference on this fascinating topic; taking gender into account opens a new world of scientific study to us. The study collected 448 college students on an island as study subjects, with 61.2% males and 38.8% females. Moreover, this study issued Mobile Phone Addiction Scale and Internet Addiction Scale to conduct surveys on the participants and adopts the structural equation model (SEM) to process the collected data. According to the study result, (1) mobile phone addiction and Internet addiction are positively related; (2) female college students score higher than male ones in the aspect of mobile addiction. Lastly, this study proposes relevant suggestions to serve as a reference for schools, college students, and future studies based on the study results.",
"title": ""
},
{
"docid": "a66b5b6dea68e5460b227af4caa14ef3",
"text": "This paper will discuss and compare event representations across a variety of types of event annotation: Rich Entities, Relations, and Events (Rich ERE), Light Entities, Relations, and Events (Light ERE), Event Nugget (EN), Event Argument Extraction (EAE), Richer Event Descriptions (RED), and Event-Event Relations (EER). Comparisons of event representations are presented, along with a comparison of data annotated according to each event representation. An event annotation experiment is also discussed, including annotation for all of these representations on the same set of sample data, with the purpose of being able to compare actual annotation across all of these approaches as directly as possible. We walk through a brief example to illustrate the various annotation approaches, and to show the intersections among the various annotated data sets.",
"title": ""
},
{
"docid": "37d3bf208ee4e513a809fa94f93a2654",
"text": "Unplanned use of fertilizers leads to inferior quality of crops. Excess of one nutrient can make it difficult for the plant to absorb the other nutrients. To deal with this problem, the quality of soil is tested using a PH sensor that indicates the percentage of macronutrients present in the soil. Conventional methods used to test soil quality, involve the use of Ion Selective Field Effect Transistors (ISFET), Ion Selective Electrode (ISE) and Optical Sensors as the sensing units which were found to be very expensive. The prototype design will allow sprinkling of fertilizers to take place in zones which are deficient in these macronutrients (Nitrogen, Phosphorous and Potassium), proving it to be a cost efficient and farmer-friendly automated fertilization unit. Cost of the proposed unit is found to be one-seventh of that of the present methods, making it affordable for farmers and also saves the manual labor. Initial analysis and intensive case studies conducted in farmland situated near Ambedkar Nagar, Sarjapur also revealed the use of above mechanism to be more prominent and verified through practical implementation and experimentation as it takes lesser time to analyze the nutrient content than the other methods which require soil testing. Sprinklers cover discrete zones in the field that automate fertilization and reduce the effort of farmers in the rural areas. This novel technique also has a fast response time as it enables real time, in-situ soil nutrient analysis, thereby maintaining proper soil pH level required for a particular crop, reducing potentially negative environmental impacts.",
"title": ""
},
{
"docid": "20cbfe9c1d20bfd67bbcbf39641aa69a",
"text": "The CIPS-SIGHAN CLP 2010 Chinese Word Segmentation Bakeoff was held in the summer of 2010 to evaluate the current state of the art in word segmentation. It focused on the crossdomain performance of Chinese word segmentation algorithms. Eighteen groups submitted 128 results over two tracks (open training and closed training), four domains (literature, computer science, medicine and finance) and two subtasks (simplified Chinese and traditional Chinese). We found that compared with the previous Chinese word segmentation bakeoffs, the performance of cross-domain Chinese word segmentation is not much lower, and the out-of-vocabulary recall is improved.",
"title": ""
},
{
"docid": "080032ded41edee2a26320e3b2afb123",
"text": "The aim of this study was to evaluate the effects of calisthenic exercises on psychological status in patients with ankylosing spondylitis (AS) and multiple sclerosis (MS). This study comprised 40 patients diagnosed with AS randomized into two exercise groups (group 1 = hospital-based, group 2 = home-based) and 40 patients diagnosed with MS randomized into two exercise groups (group 1 = hospital-based, group 2 = home-based). The exercise programme was completed by 73 participants (hospital-based = 34, home-based = 39). Mean age was 33.75 ± 5.77 years. After the 8-week exercise programme in the AS group, the home-based exercise group showed significant improvements in erythrocyte sedimentation rates (ESR). The hospital-based exercise group showed significant improvements in terms of the Bath AS Metrology Index (BASMI) and Hospital Anxiety and Depression Scale-Anxiety (HADS-A) scores. After the 8-week exercise programme in the MS group, the home-based and hospital-based exercise groups showed significant improvements in terms of the 10-m walking test, Berg Balance Scale (BBS), HADS-A, and MS international Quality of Life (MusiQoL) scores. There was a significant improvement in the hospital-based and a significant deterioration in the home-based MS patients according to HADS-Depression (HADS-D) score. The positive effects of exercises on neurologic and rheumatic chronic inflammatory processes associated with disability should not be underestimated. Ziel der vorliegenden Studie war die Untersuchung der Wirkungen von gymnastischen Übungen auf die psychische Verfassung von Patienten mit Spondylitis ankylosans (AS) und multipler Sklerose (MS). Die Studie umfasste 40 Patienten mit der Diagnose AS, die randomisiert in 2 Übungsgruppen aufgeteilt wurden (Gruppe 1: stationär, Gruppe 2: ambulant), und 40 Patienten mit der Diagnose MS, die ebenfalls randomisiert in 2 Übungsgruppen aufgeteilt wurden (Gruppe 1: stationär, Gruppe 2: ambulant). Vollständig absolviert wurde das Übungsprogramm von 73 Patienten (stationär: 34, ambulant: 39). Das Durchschnittsalter betrug 33,75 ± 5,77 Jahre. Nach dem 8-wöchigen Übungsprogramm in der AS-Gruppe zeigten sich bei der ambulanten Übungsgruppe signifikante Verbesserungen bei der Blutsenkungsgeschwindigkeit (BSG). Die stationäre Übungsgruppe wies signifikante Verbesserungen in Bezug auf den BASMI-Score (Bath AS Metrology Index) und den HADS-A-Score (Hospital Anxiety and Depression Scale-Anxiety) auf. Nach dem 8-wöchigen Übungsprogramm in der MS-Gruppe zeigten sich sowohl in der ambulanten als auch in der stationären Übungsgruppe signifikante Verbesserungen hinsichtlich des 10-m-Gehtests, des BBS-Ergebnisses (Berg Balance Scale), des HADS-A- sowie des MusiQoL-Scores (MS international Quality of Life). Beim HADS-D-Score (HADS-Depression) bestand eine signifikante Verbesserung bei den stationären und eine signifikante Verschlechterung bei den ambulanten MS-Patienten. Die positiven Wirkungen von gymnastischen Übungen auf neurologische und rheumatische chronisch entzündliche Prozesse mit Behinderung sollten nicht unterschätzt werden.",
"title": ""
},
{
"docid": "af11d259a031d22f7ee595ee2a250136",
"text": "Cellular networks today are designed for and operate in dedicated licensed spectrum. At the same time there are other spectrum usage authorization models for wireless communication, such as unlicensed spectrum or, as widely discussed currently but not yet implemented in practice, various forms of licensed shared spectrum. Hence, cellular technology as of today can only operate in a subset of the spectrum that is in principle available. Hence, a future wireless system may benefit from the ability to access also spectrum opportunities other than dedicated licensed spectrum. It is therefore important to identify which additional ways of authorizing spectrum usage are deemed to become relevant in the future and to analyze the resulting technical requirements. The implications of sharing spectrum between different technologies are analyzed in this paper, both from efficiency and technology neutrality perspective. Different known sharing techniques are outlined and their applicability to the relevant range of future spectrum regulatory regimes is discussed. Based on an assumed range of relevant (according to the views of the authors) future spectrum sharing scenarios, a toolbox of certain spectrum sharing techniques is proposed as the basis for the design of spectrum sharing related functionality in future mobile broadband systems.",
"title": ""
},
{
"docid": "10d41334c88039e9d85ce6eb93cb9abf",
"text": "nonlinear functional analysis and its applications iii variational methods and optimization PDF remote sensing second edition models and methods for image processing PDF remote sensing third edition models and methods for image processing PDF guide to signals and patterns in image processing foundations methods and applications PDF introduction to image processing and analysis PDF principles of digital image processing advanced methods undergraduate topics in computer science PDF image processing analysis and machine vision PDF image acquisition and processing with labview image processing series PDF wavelet transform techniques for image resolution PDF sparse image and signal processing wavelets and related geometric multiscale analysis PDF nonstandard methods in stochastic analysis and mathematical physics dover books on mathematics PDF solution manual wavelet tour of signal processing PDF remote sensing image fusion signal and image processing of earth observations PDF image understanding using sparse representations synthesis lectures on image video and multimedia processing PDF",
"title": ""
},
{
"docid": "d763947e969ade3c54c18f0b792a0f7b",
"text": "Recent results in compressive sampling have shown that sparse signals can be recovered from a small number of random measurements. This property raises the question of whether random measurements can provide an efficient representation of sparse signals in an information-theoretic sense. Through both theoretical and experimental results, we show that encoding a sparse signal through simple scalar quantization of random measurements incurs a significant penalty relative to direct or adaptive encoding of the sparse signal. Information theory provides alternative quantization strategies, but they come at the cost of much greater estimation complexity.",
"title": ""
},
{
"docid": "bc6cbf7da118c01d74914d58a71157ac",
"text": "Currently, there are increasing interests in text-to-speech (TTS) synthesis to use sequence-to-sequence models with attention. These models are end-to-end meaning that they learn both co-articulation and duration properties directly from text and speech. Since these models are entirely data-driven, they need large amounts of data to generate synthetic speech with good quality. However, in challenging speaking styles, such as Lombard speech, it is difficult to record sufficiently large speech corpora. Therefore, in this study we propose a transfer learning method to adapt a sequence-to-sequence based TTS system of normal speaking style to Lombard style. Moreover, we experiment with a WaveNet vocoder in synthesis of Lombard speech. We conducted subjective evaluations to assess the performance of the adapted TTS systems. The subjective evaluation results indicated that an adaptation system with the WaveNet vocoder clearly outperformed the conventional deep neural network based TTS system in synthesis of Lombard speech.",
"title": ""
},
{
"docid": "3a2729b235884bddc05dbdcb6a1c8fc9",
"text": "The people of Tumaco-La Tolita culture inhabited the borders of present-day Colombia and Ecuador. Already extinct by the time of the Spaniards arrival, they left a huge collection of pottery artifacts depicting everyday life; among these, disease representations were frequently crafted. In this article, we present the results of the personal examination of the largest collections of Tumaco-La Tolita pottery in Colombia and Ecuador; cases of Down syndrome, achondroplasia, mucopolysaccharidosis I H, mucopolysaccharidosis IV, a tumor of the face and a benign tumor in an old woman were found. We believe these to be among the earliest artistic representations of disease.",
"title": ""
},
{
"docid": "950a6a611f1ceceeec49534c939b4e0f",
"text": "Often signals and system parameters are most conveniently represented as complex-valued vectors. This occurs, for example, in array processing [1], as well as in communication systems [7] when processing narrowband signals using the equivalent complex baseband representation [2]. Furthermore, in many important applications one attempts to optimize a scalar real-valued measure of performance over the complex parameters defining the signal or system of interest. This is the case, for example, in LMS adaptive filtering where complex filter coefficients are adapted on line. To effect this adaption one attempts to optimize the performance measure by adjustments of the coefficients along its gradient direction [16, 23].",
"title": ""
},
{
"docid": "a3ac978e59bdedc18c45d460dd8fc154",
"text": "Searching for information in distributed ledgers is currently not an easy task, as information relating to an entity may be scattered throughout the ledger with no index. As distributed ledger technologies become more established, they will increasingly be used to represent real world transactions involving many parties and the search requirements will grow. An index providing the ability to search using domain specific terms across multiple ledgers will greatly enhance to power, usability and scope of these systems. We have implemented a semantic index to the Ethereum blockchain platform, to expose distributed ledger data as Linked Data. As well as indexing blockand transactionlevel data according to the BLONDiE ontology, we have mapped smart contracts to the Minimal Service Model ontology, to take the first steps towards connecting smart contracts with Semantic Web Services.",
"title": ""
},
{
"docid": "0feae39f7e557a65699f686d14f4cf0f",
"text": "This paper describes the design of a multi-gigabit fiber-optic receiver with integrated large-area photo detectors for plastic optical fiber applications. An integrated 250 μm diameter non-SML NW/P-sub photo detector is adopted to allow efficient light coupling. The theory of applying a fully-differential pre-amplifier with a single-ended photo current is also examined and a super-Gm transimpedance amplifier has been proposed to drive a C PD of 14 pF to multi-gigahertz frequency. Both differential and common-mode operations of the proposed super-Gm transimpedance amplifier have been analyzed and a differential noise analysis is performed. A digitally-controlled linear equalizer is proposed to produce a slow-rising-slope frequency response to compensate for the photo detector up to 3 GHz. The proposed POF receiver consists of an illuminated signal photo detector, a shielded dummy photo detector, a super-Gm transimpedance amplifier, a variable-gain amplifier, a linear equalizer, a post amplifier, and an output driver. A test chip is fabricated in TSMC's 65 nm low-power CMOS process, and it consumes 50 mW of DC power (excluding the output driver) from a single 1.2 V supply. A bit-error rate of less than 10-12 has been measured at a data rate of 3.125 Gbps with a 670 nm VCSEL-based electro-optical transmitter.",
"title": ""
},
{
"docid": "5b6d68984b4f9a6e0f94e0a68768dc8c",
"text": "In this paper, we focus on a major internet problem which is a huge amount of uncategorized text. We review existing techniques used for feature selection and categorization. After reviewing the existing literature, it was found that there exist some gaps in existing algorithms, one of which is a requirement of the labeled dataset for the training of the classifier. Keywords— Bayesian; KNN; PCA; SVM; TF-IDF",
"title": ""
},
{
"docid": "6459493643eb7ff011fa0d8873382911",
"text": "This paper is about the effectiveness of qualitative easing; a government policy that is designed to mitigate risk through central bank purchases of privately held risky assets and their replacement by government debt, with a return that is guaranteed by the taxpayer. Policies of this kind have recently been carried out by national central banks, backed by implicit guarantees from national treasuries. I construct a general equilibrium model where agents have rational expectations and there is a complete set of financial securities, but where agents are unable to participate in financial markets that open before they are born. I show that a change in the asset composition of the central bank’s balance sheet will change equilibrium asset prices. Further, I prove that a policy in which the central bank stabilizes fluctuations in the stock market is Pareto improving and is costless to implement.",
"title": ""
}
] | scidocsrr |
5696d4593a6c514e4916dab560dc94f5 | Chapter LVIII The Design , Play , and Experience Framework | [
{
"docid": "ecddd4f80f417dcec49021065394c89a",
"text": "Research in the area of educational technology has often been critiqued for a lack of theoretical grounding. In this article we propose a conceptual framework for educational technology by building on Shulman’s formulation of ‘‘pedagogical content knowledge’’ and extend it to the phenomenon of teachers integrating technology into their pedagogy. This framework is the result of 5 years of work on a program of research focused on teacher professional development and faculty development in higher education. It attempts to capture some of the essential qualities of teacher knowledge required for technology integration in teaching, while addressing the complex, multifaceted, and situated nature of this knowledge. We argue, briefly, that thoughtful pedagogical uses of technology require the development of a complex, situated form of knowledge that we call Technological Pedagogical Content Knowledge (TPCK). In doing so, we posit the complex roles of, and interplay among, three main components of learning environments: content, pedagogy, and technology. We argue that this model has much to offer to discussions of technology integration at multiple levels: theoretical, pedagogical, and methodological. In this article, we describe the theory behind our framework, provide examples of our teaching approach based upon the framework, and illustrate the methodological contributions that have resulted from this work.",
"title": ""
},
{
"docid": "e5a3119470420024b99df2d6eb14b966",
"text": "Why should wait for some days to get or receive the rules of play game design fundamentals book that you order? Why should you take it if you can get the faster one? You can find the same book that you order right here. This is it the book that you can receive directly after purchasing. This rules of play game design fundamentals is well known book in the world, of course many people will try to own it. Why don't you become the first? Still confused with the way?",
"title": ""
}
] | [
{
"docid": "e737c117cd6e7083cd50069b70d236cb",
"text": "In this article we discuss a data structure, which combines advantages of two different ways for representing graphs: adjacency matrix and collection of adjacency lists. This data structure can fast add and search edges (advantages of adjacency matrix), use linear amount of memory, let to obtain adjacency list for certain vertex (advantages of collection of adjacency lists). Basic knowledge of linked lists and hash tables is required to understand this article. The article contains examples of implementation on Java.",
"title": ""
},
{
"docid": "9dcee1244dd71174b15df9cfaba2ebdf",
"text": "In this paper, we investigate the dynamical behaviors of a Morris–Lecar neuron model. By using bifurcation methods and numerical simulations, we examine the global structure of bifurcations of the model. Results are summarized in various two-parameter bifurcation diagrams with the stimulating current as the abscissa and the other parameter as the ordinate. We also give the one-parameter bifurcation diagrams and pay much attention to the emergence of periodic solutions and bistability. Different membrane excitability is obtained by bifurcation analysis and frequency-current curves. The alteration of the membrane properties of the Morris–Lecar neurons is discussed.",
"title": ""
},
{
"docid": "39861e2759b709883f3d37a65d13834b",
"text": "BACKGROUND\nDeveloping countries account for 99 percent of maternal deaths annually. While increasing service availability and maintaining acceptable quality standards, it is important to assess maternal satisfaction with care in order to make it more responsive and culturally acceptable, ultimately leading to enhanced utilization and improved outcomes. At a time when global efforts to reduce maternal mortality have been stepped up, maternal satisfaction and its determinants also need to be addressed by developing country governments. This review seeks to identify determinants of women's satisfaction with maternity care in developing countries.\n\n\nMETHODS\nThe review followed the methodology of systematic reviews. Public health and social science databases were searched. English articles covering antenatal, intrapartum or postpartum care, for either home or institutional deliveries, reporting maternal satisfaction from developing countries (World Bank list) were included, with no year limit. Out of 154 shortlisted abstracts, 54 were included and 100 excluded. Studies were extracted onto structured formats and analyzed using the narrative synthesis approach.\n\n\nRESULTS\nDeterminants of maternal satisfaction covered all dimensions of care across structure, process and outcome. Structural elements included good physical environment, cleanliness, and availability of adequate human resources, medicines and supplies. Process determinants included interpersonal behavior, privacy, promptness, cognitive care, perceived provider competency and emotional support. Outcome related determinants were health status of the mother and newborn. Access, cost, socio-economic status and reproductive history also influenced perceived maternal satisfaction. Process of care dominated the determinants of maternal satisfaction in developing countries. Interpersonal behavior was the most widely reported determinant, with the largest body of evidence generated around provider behavior in terms of courtesy and non-abuse. Other aspects of interpersonal behavior included therapeutic communication, staff confidence and competence and encouragement to laboring women.\n\n\nCONCLUSIONS\nQuality improvement efforts in developing countries could focus on strengthening the process of care. Special attention is needed to improve interpersonal behavior, as evidence from the review points to the importance women attach to being treated respectfully, irrespective of socio-cultural or economic context. Further research on maternal satisfaction is required on home deliveries and relative strength of various determinants in influencing maternal satisfaction.",
"title": ""
},
{
"docid": "1fe0a9895bca5646908efc86e019f5d3",
"text": "The purpose of this study was to examine how violence from patients and visitors is related to emergency department (ED) nurses' work productivity and symptoms of post-traumatic stress disorder (PTSD). Researchers have found ED nurses experience a high prevalence of physical assaults from patients and visitors. Yet, there is little research which examines the effect violent events have on nurses' productivity, particularly their ability to provide safe and compassionate patient care. A cross-sectional design was used to gather data from ED nurses who are members of the Emergency Nurses Association in the United States. Participants were asked to complete the Impact of Events Scale-Revised and Healthcare Productivity Survey in relation to a stressful violent event. Ninety-four percent of nurses experienced at least one posttraumatic stress disorder symptom after a violent event, with 17% having scores high enough to be considered probable for PTSD. In addition, there were significant indirect relationships between stress symptoms and work productivity. Workplace violence is a significant stressor for ED nurses. Results also indicate violence has an impact on the care ED nurses provide. Interventions are needed to prevent the violence and to provide care to the ED nurse after an event.",
"title": ""
},
{
"docid": "3e6e72747036ca7255b449f4c93e15f7",
"text": "In this paper a planar antenna is studied for ultrawide-band (UWB) applications. This antenna consists of a wide-band tapered-slot feeding structure, curved radiators and a parasitic element. It is a modification of the conventional dual exponential tapered slot antenna and can be viewed as a printed dipole antenna with tapered slot feed. The design guideline is introduced, and the antenna parameters including return loss, radiation patterns and gain are investigated. To demonstrate the applicability of the proposed antenna to UWB applications, the transfer functions of a transmitting-receiving system with a pair of identical antennas are measured. Transient waveforms as the transmitting-receiving system being excited by a simulated pulse are discussed at the end of this paper.",
"title": ""
},
{
"docid": "7cb6582bf81aea75818eef2637c95c79",
"text": "Although multi-frame super resolution has been extensively studied in past decades, super resolving real-world video sequences still remains challenging. In existing systems, either the motion models are oversimplified, or important factors such as blur kernel and noise level are assumed to be known. Such models cannot deal with the scene and imaging conditions that vary from one sequence to another. In this paper, we propose a Bayesian approach to adaptive video super resolution via simultaneously estimating underlying motion, blur kernel and noise level while reconstructing the original high-res frames. As a result, our system not only produces very promising super resolution results that outperform the state of the art, but also adapts to a variety of noise levels and blur kernels. Theoretical analysis of the relationship between blur kernel, noise level and frequency-wise reconstruction rate is also provided, consistent with our experimental results.",
"title": ""
},
{
"docid": "e4183c85a9f6771fa06316b002e13188",
"text": "This paper provides an analysis of some argumentation in a biomedical genetics research article as a step towards developing a corpus of articles annotated to support research on argumentation. We present a specification of several argumentation schemes and inter-argument relationships to be annotated.",
"title": ""
},
{
"docid": "b515eb759984047f46f9a0c27b106f47",
"text": "Visual motion estimation is challenging, due to high data rates, fast camera motions, featureless or repetitive environments, uneven lighting, and many other issues. In this work, we propose a twolayer approach for visual odometry with stereo cameras, which runs in real-time and combines feature-based matching with semi-dense direct image alignment. Our method initializes semi-dense depth estimation, which is computationally expensive, from motion that is tracked by a fast but robust feature point-based method. By that, we are not only able to efficiently estimate the pose of the camera with a high frame rate, but also to reconstruct the 3D structure of the environment at image gradients, which is useful, e.g., for mapping and obstacle avoidance. Experiments on datasets captured by a micro aerial vehicle (MAV) show that our approach is faster than state-of-the-art methods without losing accuracy. Moreover, our combined approach achieves promising results on the KITTI dataset, which is very challenging for direct methods, because of the low frame rate in conjunction with fast motion.",
"title": ""
},
{
"docid": "a743ac1f5b37c35bb78cf7efc3d3a3c8",
"text": "Concepts concerning mediation in the causal inference literature are reviewed. Notions of direct and indirect effects from a counterfactual approach to mediation are compared with those arising from the standard regression approach to mediation of Baron and Kenny (1986), commonly utilized in the social science literature. It is shown that concepts of direct and indirect effect from causal inference generalize those described by Baron and Kenny and that under appropriate identification assumptions these more general direct and indirect effects from causal inference can be estimated using regression even when there are interactions between the primary exposure of interest and the mediator. A number of conceptual issues are discussed concerning the interpretation of identification conditions for mediation, the notion of counterfactuals based on hypothetical interventions and the so called consistency and composition assumptions.",
"title": ""
},
{
"docid": "55610ac91c3abb52e3bbd95c289b9b95",
"text": "A robot finger is developed for five-fingered robot hand having equal number of DOF to human hand. The robot hand is driven by a new method proposed by authors using ultrasonic motors and elastic elements. The method utilizes restoring force of elastic element as driving power for grasping an object, so that the hand can perform the soft and stable grasping motion with no power supply. In addition, all the components are placed inside the hand thanks to the ultrasonic motors with compact size and high torque at low speed. Applying the driving method to multi-DOF mechanism, a robot index finger is designed and implemented. It has equal number of joints and DOF to human index finger, and it is also equal in size to the finger of average adult male. The performance of the robot finger is confirmed by fundamental driving test.",
"title": ""
},
{
"docid": "413c4d1115e8042cce44308583649279",
"text": "With the growing popularity of microblogging services such as Twitter in recent years, an increasing number of users are using these services in their daily lives. The huge volume of information generated by users raises new opportunities in various applications and areas. Inferring user interests plays a significant role in providing personalized recommendations on microblogging services, and also on third-party applications providing social logins via these services, especially in cold-start situations. In this survey, we review user modeling strategies with respect to inferring user interests from previous studies. To this end, we focus on four dimensions of inferring user interest profiles: (1) data collection, (2) representation of user interest profiles, (3) construction and enhancement of user interest profiles, and (4) the evaluation of the constructed profiles. Through this survey, we aim to provide an overview of state-of-the-art user modeling strategies for inferring user interest profiles on microblogging social networks with respect to the four dimensions. For each dimension, we review and summarize previous studies based on specified criteria. Finally, we discuss some challenges and opportunities for future work in this research domain.",
"title": ""
},
{
"docid": "9ffb34f554e9d31938b77a33be187014",
"text": "Job recommendation systems mainly use different sources of data in order to give the better content for the end user. Developing the well-performing system requires complex hybrid approaches of representing similarity based on the content of job postings and resumes as well as interactions between them. We develop an efficient hybrid networkbased job recommendation system which uses Personalized PageRank algorithm in order to rank vacancies for the users based on the similarity between resumes and job posts as textual documents, along with previous interactions of users with vacancies. Our approach achieved the recall of 50% and generated more applies for the jobs during the online A/B test than previous algorithms.",
"title": ""
},
{
"docid": "a9b620269c6448facfe0ae8e034f41fa",
"text": "The aim of this project is to make progress towards building a machine learning agent that understands natural language and can perform basic reasoning. Towards this nebulous goal, we focus on question answering: Can an agent answer a query based on a given set of natural language facts? We combine LSTM sentence embedding models with an attention mechanism and obtain good results on the Facebook bAbI dataset [1], outperforming [2] on 1 task and achieving similar performance on several others.",
"title": ""
},
{
"docid": "507a60e62e9d2086481e7a306d012e52",
"text": "Health monitoring systems have rapidly evolved recently, and smart systems have been proposed to monitor patient current health conditions, in our proposed and implemented system, we focus on monitoring the patient's blood pressure, and his body temperature. Based on last decade statistics of medical records, death rates due to hypertensive heart disease, shows that the blood pressure is a crucial risk factor for atherosclerosis and ischemic heart diseases; thus, preventive measures should be taken against high blood pressure which provide the ability to track, trace and save patient's life at appropriate time is an essential need for mankind. Nowadays, Globalization demands Smart cities, which involves many attributes and services, such as government services, Intelligent Transportation Systems (ITS), energy, health care, water and waste. This paper proposes a system architecture for smart healthcare based on GSM and GPS technologies. The objective of this work is providing an effective application for Real Time Health Monitoring and Tracking. The system will track, trace, monitor patients and facilitate taking care of their health; so efficient medical services could be provided at appropriate time. By Using specific sensors, the data will be captured and compared with a configurable threshold via microcontroller which is defined by a specialized doctor who follows the patient; in any case of emergency a short message service (SMS) will be sent to the Doctor's mobile number along with the measured values through GSM module. furthermore, the GPS provides the position information of the monitored person who is under surveillance all the time. Moreover, the paper demonstrates the feasibility of realizing a complete end-to-end smart health system responding to the real health system design requirements by taking in consideration wider vital human health parameters such as respiration rate, nerves signs ... etc. The system will be able to bridge the gap between patients - in dramatic health change occasions- and health entities who response and take actions in real time fashion.",
"title": ""
},
{
"docid": "e1d9ff28da38fcf8ea3a428e7990af25",
"text": "The Autonomous car is a complex topic, different technical fields like: Automotive engineering, Control engineering, Informatics, Artificial Intelligence etc. are involved in solving the human driver replacement with an artificial (agent) driver. The problem is even more complicated because usually, nowadays, having and driving a car defines our lifestyle. This means that the mentioned (major) transformation is also a cultural issue. The paper will start with the mentioned cultural aspects related to a self-driving car and will continue with the big picture of the system.",
"title": ""
},
{
"docid": "7ae332505306f94f8f2b4e3903188126",
"text": "Clustering Web services would greatly boost the ability of Web service search engine to retrieve relevant services. The performance of traditional Web service description language (WSDL)-based Web service clustering is not satisfied, due to the singleness of data source. Recently, Web service search engines such as Seekda! allow users to manually annotate Web services using tags, which describe functions of Web services or provide additional contextual and semantical information. In this paper, we cluster Web services by utilizing both WSDL documents and tags. To handle the clustering performance limitation caused by uneven tag distribution and noisy tags, we propose a hybrid Web service tag recommendation strategy, named WSTRec, which employs tag co-occurrence, tag mining, and semantic relevance measurement for tag recommendation. Extensive experiments are conducted based on our real-world dataset, which consists of 15,968 Web services. The experimental results demonstrate the effectiveness of our proposed service clustering and tag recommendation strategies. Specifically, compared with traditional WSDL-based Web service clustering approaches, the proposed approach produces gains in both precision and recall for up to 14 % in most cases.",
"title": ""
},
{
"docid": "acb0f1e123cb686b4aeab418f380bd79",
"text": "Surface parameterization is necessary for many graphics tasks: texture-preserving simplification, remeshing, surface painting, and precomputation of solid textures. The stretch caused by a given parameterization determines the sampling rate on the surface. In this article, we present an automatic parameterization method for segmenting a surface into patches that are then flattened with little stretch.\n Many objects consist of regions of relatively simple shapes, each of which has a natural parameterization. Based on this observation, we describe a three-stage feature-based patch creation method for manifold surfaces. The first two stages, genus reduction and feature identification, are performed with the help of distance-based surface functions. In the last stage, we create one or two patches for each feature region based on a covariance matrix of the feature's surface points.\n To reduce stretch during patch unfolding, we notice that stretch is a 2 × 2 tensor, which in ideal situations is the identity. Therefore, we use the <i>Green-Lagrange tensor</i> to measure and to guide the optimization process. Furthermore, we allow the boundary vertices of a patch to be optimized by adding <i>scaffold triangles</i>. We demonstrate our feature-based patch creation and patch unfolding methods for several textured models.\n Finally, to evaluate the quality of a given parameterization, we describe an image-based error measure that takes into account stretch, seams, smoothness, packing efficiency, and surface visibility.",
"title": ""
},
{
"docid": "9eabe9a867edbceee72bd20d483ad886",
"text": "Inspired by recent advances of deep learning in instance segmentation and object tracking, we introduce the concept of convnet-based guidance applied to video object segmentation. Our model proceeds on a per-frame basis, guided by the output of the previous frame towards the object of interest in the next frame. We demonstrate that highly accurate object segmentation in videos can be enabled by using a convolutional neural network (convnet) trained with static images only. The key component of our approach is a combination of offline and online learning strategies, where the former produces a refined mask from the previous frame estimate and the latter allows to capture the appearance of the specific object instance. Our method can handle different types of input annotations such as bounding boxes and segments while leveraging an arbitrary amount of annotated frames. Therefore our system is suitable for diverse applications with different requirements in terms of accuracy and efficiency. In our extensive evaluation, we obtain competitive results on three different datasets, independently from the type of input annotation.",
"title": ""
},
{
"docid": "a0a13e7e5ce06e5cc28a2b23ea64c8f5",
"text": "The efficacy study was performed to prove the equivalent efficacy of dexibuprofen compared to the double dose of racemic ibuprofen and to show a clinical dose-response relationship of dexibuprofen. The 1-year tolerability study was carried out to investigate the tolerability of dexibuprofen. In the efficacy study 178 inpatients with osteoarthritis of the hip were assigned to 600 or 1200 mg of dexibuprofen or 2400 mg of racemic ibuprofen daily. The primary end-point was the improvement of the WOMAC OA index. A 1-year open tolerability study included 223 outpatients pooled from six studies. The main parameter was the incidence of clinical adverse events. In the efficacy study the evaluation of the improvement of the WOMAC OA index showed equivalence of dexibuprofen 400 mg t.i.d. compared to racemic ibuprofen 800 mg t.i.d., with dexibuprofen being borderline superior (P = 0.055). The comparison between the 400 mg t.i.d. and 200 mg t.i.d. doses confirmed a significant superior efficacy of dexibuprofen 400 mg (P = 0.023). In the tolerability study the overall incidence of clinical adverse events was 15.2% (GI tract 11.7%, CNS 1.3%, skin 1.3%, others 0.9%). The active enantiomer dexibuprofen proved to be an effective NSAID with a significant dose-response relationship. Compared to the double dose of racemic ibuprofen, dexibuprofen was at least equally efficient, with borderline superiority over dexibuprofen (P = 0.055). The tolerability study in 223 patients on dexibuprofen showed an incidence of clinical adverse events of 15.2% after 12 months. The results of the studies suggest that dexibuprofen is an effective NSAID with good tolerability.",
"title": ""
},
{
"docid": "ab662b1dd07a7ae868f70784408e1ce1",
"text": "We use autoencoders to create low-dimensional embeddings of underlying patient phenotypes that we hypothesize are a governing factor in determining how different patients will react to different interventions. We compare the performance of autoencoders that take fixed length sequences of concatenated timesteps as input with a recurrent sequence-to-sequence autoencoder. We evaluate our methods on around 35,500 patients from the latest MIMIC III dataset from Beth Israel Deaconess Hospital.",
"title": ""
}
] | scidocsrr |
39617ab96f7fadab45c84dec7c02a77e | A Self-Powered Insole for Human Motion Recognition | [
{
"docid": "8e02a76799f72d86e7240384bea563fd",
"text": "We have developed the suspended-load backpack, which converts mechanical energy from the vertical movement of carried loads (weighing 20 to 38 kilograms) to electricity during normal walking [generating up to 7.4 watts, or a 300-fold increase over previous shoe devices (20 milliwatts)]. Unexpectedly, little extra metabolic energy (as compared to that expended carrying a rigid backpack) is required during electricity generation. This is probably due to a compensatory change in gait or loading regime, which reduces the metabolic power required for walking. This electricity generation can help give field scientists, explorers, and disaster-relief workers freedom from the heavy weight of replacement batteries and thereby extend their ability to operate in remote areas.",
"title": ""
}
] | [
{
"docid": "f4401e483c519e1f2d33ee18ea23b8d7",
"text": "Cultivation of mindfulness, the nonjudgmental awareness of experiences in the present moment, produces beneficial effects on well-being and ameliorates psychiatric and stress-related symptoms. Mindfulness meditation has therefore increasingly been incorporated into psychotherapeutic interventions. Although the number of publications in the field has sharply increased over the last two decades, there is a paucity of theoretical reviews that integrate the existing literature into a comprehensive theoretical framework. In this article, we explore several components through which mindfulness meditation exerts its effects: (a) attention regulation, (b) body awareness, (c) emotion regulation (including reappraisal and exposure, extinction, and reconsolidation), and (d) change in perspective on the self. Recent empirical research, including practitioners' self-reports and experimental data, provides evidence supporting these mechanisms. Functional and structural neuroimaging studies have begun to explore the neuroscientific processes underlying these components. Evidence suggests that mindfulness practice is associated with neuroplastic changes in the anterior cingulate cortex, insula, temporo-parietal junction, fronto-limbic network, and default mode network structures. The authors suggest that the mechanisms described here work synergistically, establishing a process of enhanced self-regulation. Differentiating between these components seems useful to guide future basic research and to specifically target areas of development in the treatment of psychological disorders.",
"title": ""
},
{
"docid": "f052fae696370910cc59f48552ddd889",
"text": "Decisions involve many intangibles that need to be traded off. To do that, they have to be measured along side tangibles whose measurements must also be evaluated as to, how well, they serve the objectives of the decision maker. The Analytic Hierarchy Process (AHP) is a theory of measurement through pairwise comparisons and relies on the judgements of experts to derive priority scales. It is these scales that measure intangibles in relative terms. The comparisons are made using a scale of absolute judgements that represents, how much more, one element dominates another with respect to a given attribute. The judgements may be inconsistent, and how to measure inconsistency and improve the judgements, when possible to obtain better consistency is a concern of the AHP. The derived priority scales are synthesised by multiplying them by the priority of their parent nodes and adding for all such nodes. An illustration is included.",
"title": ""
},
{
"docid": "cce2d168e49620ead88953617cce52b0",
"text": "We analyze state-of-the-art deep learning models for three tasks: question answering on (1) images, (2) tables, and (3) passages of text. Using the notion of attribution (word importance), we find that these deep networks often ignore important question terms. Leveraging such behavior, we perturb questions to craft a variety of adversarial examples. Our strongest attacks drop the accuracy of a visual question answering model from 61.1% to 19%, and that of a tabular question answering model from 33.5% to 3.3%. Additionally, we show how attributions can strengthen attacks proposed by Jia and Liang (2017) on paragraph comprehension models. Our results demonstrate that attributions can augment standard measures of accuracy and empower investigation of model performance. When a model is accurate but for the wrong reasons, attributions can surface erroneous logic in the model that indicates inadequacies in the test data.",
"title": ""
},
{
"docid": "85f2e049dc90bf08ecb0d34899d8b3c5",
"text": "Here is little doubt that the Internet represents the spearhead of the industrial revolution. I love new technologies and gadgets that promise new and better ways of doing things. I have many such gadgets myself and I even manage to use a few of them (though not without some pain).A new piece of technology is like a new relationship, fun and exciting at first, but eventually it requires some hard work to maintain, usually in the form of time and energy. I doubt technology’s promise to improve the quality of life and I am still surprised how time-distorting and dissociating the computer and the Internet can be for me, along with the thousands of people I’ve interviewed, studied and treated in my clinical practice. It seems clear that the Internet can be used and abused in a compulsive fashion, and that there are numerous psychological factors that contribute to the Internet’s power and appeal. It appears that the very same features that drive the potency of the Net are potentially habit-forming. This study examined the self-reported Internet behavior of nearly 18,000 people who answered a survey on the ABCNEWS.com web site. Results clearly support the psychoactive nature of the Internet, and the potential for compulsive use and abuse of the Internet for certain individuals. Introduction Technology, and most especially, computers and the Internet, seem to be at best easily overused/abused, and at worst, addictive. The combination of available stimulating content, ease of access, convenience, low cost, visual stimulation, autonomy, and anonymity—all contribute to a highly psychoactive experience. By psychoactive, that is Running Head: Virtual Addiction to say mood altering, and potentially behaviorally impacting. In other words these technologies affect the manner in which we live and love. It is my contention that some of these effects are indeed less than positive, and may contribute to various negative psychological effects. The Internet and other digital technologies are only the latest in a series of “improvements” to our world which may have unintended negative effects. The experience of problems with new and unknown technologies is far from new; we have seen countless examples of newer and better things that have had unintended and unexpected deleterious effects. Remember Thalidomide, PVC/PCB’s, Atomic power, fossil fuels, even television, along with other seemingly innocuous conveniences which have been shown to be conveniently helpful, but on other levels harmful. Some of these harmful effects are obvious and tragic, while others are more subtle and insidious. Even seemingly innocuous advances such as the elevator, remote controls, credit card gas pumps, dishwashers, and drive-through everything, have all had unintended negative effects. They all save time and energy, but the energy they save may dissuade us from using our physical bodies as they were designed to be used. In short we have convenience ourselves to a sedentary lifestyle. Technology is amoral; it is not inherently good or evil, but it is impact on the manner in which we live our lives. American’s love technology and for some of us this trust and blind faith almost parallels a religious fanaticism. Perhaps most of all, we love it Running Head: Virtual Addiction because of the hope for the future it promises; it is this promise of a better today and a longer tomorrow which captivates us to attend to the call for new better things to come. 
We live in the age were computer and digital technology are always on the cusp of great things-Newer, better ways of doing things (which in some ways is true). The old becomes obsolete within a year or two. Newer is always better. Computers and the Internet purport to make our lives easier, simpler, and therefore more fulfilling, but it may not be that simple. People have become physically and psychologically dependent on many behaviors and substances for centuries. This compulsive pattern does not reflect a casual interest, but rather consists of a driven pattern of use that can frequently escalate to negatively impact our lives. The key life-areas that seem to be impacted are marriages and relationships, employment, health, and legal/financial status. The fact that substances, such as alcohol and other mood-altering drugs can create a physical and/or psychological dependence is well known and accepted. And certain behaviors such as gambling, eating, work, exercise, shopping, and sex have gained more recent acceptance with regard to their addictive potential. More recently however, there has been an acknowledgement that the compulsive performance of these behaviors may mimic the compulsive process found with drugs, alcohol and other substances. This same process appears to also be found with certain aspects of the Internet. Running Head: Virtual Addiction The Internet can and does produce clear alterations in mood; nearly 30 percent of Internet users admit to using the Net to alter their mood so as to relieve a negative mood state. In other words, they use the Internet like a drug (Greenfield, 1999). In addressing the phenomenon of Internet behavior, initial behavioral research (Young, 1996, 1998) focused on conceptual definitions of Internet use and abuse, and demonstrated similar patterns of abuse as found in compulsive gambling. There have been further recent studies on the nature and effects of the Internet. Cooper, Scherer, Boies, and Gordon (1998) examined sexuality on the Internet utilizing an extensive online survey of 9,177 Web users, and Greenfield (1999) surveyed nearly 18,000 Web users on ABCNEWS.com to examine Internet use and abuse behavior. The later study did yield some interesting trends and patterns, but also raised further areas that require clarification. There has been very little research that actually examined and measured specific behavior related to Internet use. The Carnegie Mellon University study (Kraut, Patterson, Lundmark, Kiesler, Mukopadhyay, and Scherlis, 1998) did attempt to examine and verify actual Internet use among 173 people in 73 households. This initial study did seem to demonstrate that there may be some deleterious effects from heavy Internet use, which appeared to increase some measures of social isolation and depression. What seems to be abundantly clear from the limited research to date is that we know very little about the human/Internet interface. Theoretical suppositions abound, but we are only just beginning to understand the nature and implications of Internet use and Running Head: Virtual Addiction abuse. There is an abundance of clinical, legal, and anecdotal evidence to suggest that there is something unique about being online that seems to produce a powerful impact on people. 
It is my belief that as we expand our analysis of this new and exciting area we will likely discover that there are many subcategories of Internet abuse, some of which will undoubtedly exist as concomitant disorders alongside of other addictions including sex, gambling, and compulsive shopping/spending. There are probably two types of Internet based problems: the first is defined as a primary problem where the Internet itself becomes the focus on the compulsive pattern, and secondary, where a preexisting problem (or compulsive behavior) is exacerbated via the use of the Internet. In a secondary problem, necessity is no longer the mother of invention, but rather convenience is. The Internet simply makes everything easier to acquire, and therefore that much more easily abused. The ease of access, availability, low cost, anonymity, timelessness, disinhibition, and loss of boundaries all appear to contribute to the total Internet experience. This has particular relevance when it comes to well-established forms of compulsive consumer behavior such as gambling, shopping, stock trading, and compulsive sexual behavior where traditional modalities of engaging in these behaviors pale in comparison to the speed and efficiency of the Internet. There has been considerable debate regarding the terms and definitions in describing pathological Internet behavior. Many terms have been used, including Internet abuse, Internet addiction, and compulsive Internet use. The concern over terminology Running Head: Virtual Addiction seems spurious to me, as it seems irrelevant as to what the addictive process is labeled. The underlying neurochemical changes (probably Dopamine) that occur during any pleasurable act have proven themselves to be potentially habit-forming on a brainbehavior level. The net effect is ultimately the same with regard to potential life impact, which in the case of compulsive behavior can be quite large. Any time there is a highly pleasurable human behavior that can be acquired without human interface (as can be accomplished on the Net) there seems to be greater potential for abuse. The ease of purchasing a stock, gambling, or shopping online allows for a boundless and disinhibited experience. Without the normal human interaction there is a far greater likelihood of abusive and/or compulsive behavior in these areas. Research in the field of Internet behavior is in its relative infancy. This is in part due to the fact that the depth and breadth of the Internet and World Wide Web are changing at exponential rates. With thousands of new subscribers a day and approaching (perhaps exceeding) 200 million worldwide users, the Internet represents a communications, social, and economic revolution. The Net now serves at the pinnacle of the digital industrial revolution, and with any revolution come new problems and difficulties.",
"title": ""
},
{
"docid": "a2314ce56557135146e43f0d4a02782d",
"text": "This paper proposes a carrier-based pulse width modulation (CB-PWM) method with synchronous switching technique for a Vienna rectifier. In this paper, a Vienna rectifier is one of the 3-level converter topologies. It is similar to a 3-level T-type topology the used back-to-back switches. When CB-PWM switching method is used, a Vienna rectifier is operated with six PWM signals. On the other hand, when the back-to-back switches are synchronized, PWM signals can be reduced to three from six. However, the synchronous switching method has a problem that the current distortion around zero-crossing point is worse than one of the conventional CB-PWM switching method. To improve current distortions, this paper proposes a reactive current injection technique. The performance and effectiveness of the proposed synchronous switching method are verified by simulation with a 5-kW Vienna rectifier.",
"title": ""
},
{
"docid": "faaa921bce23eeca714926acb1901447",
"text": "This paper provides an overview along with our findings of the Chinese Spelling Check shared task at NLPTEA 2017. The goal of this task is to develop a computerassisted system to automatically diagnose typing errors in traditional Chinese sentences written by students. We defined six types of errors which belong to two categories. Given a sentence, the system should detect where the errors are, and for each detected error determine its type and provide correction suggestions. We designed, constructed, and released a benchmark dataset for this task.",
"title": ""
},
{
"docid": "b6f0c5a136de9b85899814a436e7a497",
"text": "The 'ferrule effect' is a long standing, accepted concept in dentistry that is a foundation principle for the restoration of teeth that have suffered advanced structure loss. A review of the literature based on a search in PubMed was performed looking at the various components of the ferrule effect, with particular attention to some of the less explored dimensions that influence the effectiveness of the ferrule when restoring severely broken down teeth. These include the width of the ferrule, the effect of a partial ferrule, the influence of both, the type of the restored tooth and the lateral loads present as well as the well established 2 mm ferrule height rule. The literature was collaborated and a classification based on risk assessment was derived from the available evidence. The system categorises teeth according to the effectiveness of ferrule effect that can be achieved based on the remaining amount of sound tooth structure. Furthermore, risk assessment for failure can be performed so that the practitioner and patient can better understand the prognosis of restoring a particular tooth. Clinical recommendations were extrapolated and presented as guidelines so as to improve the predictability and outcome of treatment when restoring structurally compromised teeth. The evidence relating to restoring the endodontic treated tooth with extensive destruction is deficient. This article aims to rethink ferrule by looking at other aspects of this accepted concept, and proposes a paradigm shift in the way it is thought of and utilised.",
"title": ""
},
{
"docid": "0a1f6c27cd13735858e7a6686fc5c2c9",
"text": "We address the problem of learning hierarchical deep neural network policies for reinforcement learning. In contrast to methods that explicitly restrict or cripple lower layers of a hierarchy to force them to use higher-level modulating signals, each layer in our framework is trained to directly solve the task, but acquires a range of diverse strategies via a maximum entropy reinforcement learning objective. Each layer is also augmented with latent random variables, which are sampled from a prior distribution during the training of that layer. The maximum entropy objective causes these latent variables to be incorporated into the layer’s policy, and the higher level layer can directly control the behavior of the lower layer through this latent space. Furthermore, by constraining the mapping from latent variables to actions to be invertible, higher layers retain full expressivity: neither the higher layers nor the lower layers are constrained in their behavior. Our experimental evaluation demonstrates that we can improve on the performance of single-layer policies on standard benchmark tasks simply by adding additional layers, and that our method can solve more complex sparse-reward tasks by learning higher-level policies on top of high-entropy skills optimized for simple low-level objectives.",
"title": ""
},
{
"docid": "856a6fa093e0cf6e0512d83e1382d3c9",
"text": "00Month2017 CORRIGENDUM: ACMG recommendations for reporting of incidental findings in clinical exome and genome sequencing Robert C. Green MD, MPH, Jonathan S. Berg MD, PhD, Wayne W. Grody MD, PhD, Sarah S. Kalia ScM, CGC, Bruce R. Korf MD, PhD, Christa L. Martin PhD, FACMG, Amy L. McGuire JD, PhD, Robert L. Nussbaum MD, Julianne M. O’Daniel MS, CGC, Kelly E. Ormond MS, CGC, Heidi L. Rehm PhD, FACMG, Michael S. Watson PhD, FACMG, Marc S. Williams MD, FACMG & Leslie G. Biesecker MD Genet Med (2013) 15, 565–574 doi:10.1038/gim.2013.73 In the published version of this paper, on page 567, on the 16th line in the last paragraph of the left column, the abbreviation of Expected Pathogenic is incorrect. The correct sentence should read, “For the purposes of these recommendations, variants fitting these descriptions were labeled as Known Pathogenic (KP) and Expected Pathogenic (EP), respectively.”",
"title": ""
},
{
"docid": "665f109e8263b687764de476befcbab9",
"text": "In this work we analyze the behavior on a company-internal social network site to determine which interaction patterns signal closeness between colleagues. Regression analysis suggests that employee behavior on social network sites (SNSs) reveals information about both professional and personal closeness. While some factors are predictive of general closeness (e.g. content recommendations), other factors signal that employees feel personal closeness towards their colleagues, but not professional closeness (e.g. mutual profile commenting). This analysis contributes to our understanding of how SNS behavior reflects relationship multiplexity: the multiple facets of our relationships with SNS connections.",
"title": ""
},
{
"docid": "91b924c8dbb22ca4593150c5fadfd38b",
"text": "This paper investigates the power allocation problem of full-duplex cooperative non-orthogonal multiple access (FD-CNOMA) systems, in which the strong users relay data for the weak users via a full duplex relaying mode. For the purpose of fairness, our goal is to maximize the minimum achievable user rate in a NOMA user pair. More specifically, we consider the power optimization problem for two different relaying schemes, i.e., the fixed relaying power scheme and the adaptive relaying power scheme. For the fixed relaying scheme, we demonstrate that the power allocation problem is quasi-concave and a closed-form optimal solution is obtained. Then, based on the derived results of the fixed relaying scheme, the optimal power allocation policy for the adaptive relaying scheme is also obtained by transforming the optimization objective function as a univariate function of the relay transmit power $P_R$. Simulation results show that the proposed FD- CNOMA scheme with adaptive relaying can always achieve better or at least the same performance as the conventional NOMA scheme. In addition, there exists a switching point between FD-CNOMA and half- duplex cooperative NOMA.",
"title": ""
},
{
"docid": "159e040b0e74ad1b6124907c28e53daf",
"text": "People (pedestrians, drivers, passengers in public transport) use different services on small mobile gadgets on a daily basis. So far, mobile applications don't react to context changes. Running services should adapt to the changing environment and new services should be installed and deployed automatically. We propose a classification of context elements that influence the behavior of the mobile services, focusing on the challenges of the transportation domain. Malware Detection on Mobile Devices Asaf Shabtai*, Ben-Gurion University, Israel Abstract: We present various approaches for mitigating malware on mobile devices which we have implemented and evaluated on Google Android. Our work is divided into the following three segments: a host-based intrusion detection framework; an implementation of SELinux in Android; and static analysis of Android application files. We present various approaches for mitigating malware on mobile devices which we have implemented and evaluated on Google Android. Our work is divided into the following three segments: a host-based intrusion detection framework; an implementation of SELinux in Android; and static analysis of Android application files. Dynamic Approximative Data Caching in Wireless Sensor Networks Nils Hoeller*, IFIS, University of Luebeck Abstract: Communication in Wireless Sensor Networks generally is the most energy consuming task. Retrieving query results from deep within the sensor network therefore consumes a lot of energy and hence shortens the network's lifetime. In this work optimizations Communication in Wireless Sensor Networks generally is the most energy consuming task. Retrieving query results from deep within the sensor network therefore consumes a lot of energy and hence shortens the network's lifetime. In this work optimizations for processing queries by using adaptive caching structures are discussed. Results can be retrieved from caches that are placed nearer to the query source. As a result the communication demand is reduced and hence energy is saved by using the cached results. To verify cache coherence in networks with non-reliable communication channels, an approximate update policy is presented. A degree of result quality can be defined for a query to find the adequate cache adaptively. Gossip-based Data Fusion Framework for Radio Resource Map Jin Yang*, Ilmenau University of Technology Abstract: In disaster scenarios, sensor networks are used to detect changes and estimate resource availability to further support the system recovery and rescue process. In this PhD thesis, sensor networks are used to detect available radio resources in order to form a global view of the radio resource map, based on locally sensed and measured data. Data fusion and harvesting techniques are employed for the generation and maintenance of this “radio resource map.” In order to guarantee the flexibility and fault tolerance goals of disaster scenarios, a gossip protocol is used to exchange information. The radio propagation field knowledge is closely coupled to harvesting and fusion protocols in order to achieve efficient fusing of radio measurement data. For the evaluation, simulations will be used to measure the efficiency and robustness in relation to time critical applications and various deployment densities. Actual radio data measurements within the Ilmenau area are being collected for further analysis of the map quality and in order to verify simulation results. 
In disaster scenarios, sensor networks are used to detect changes and estimate resource availability to further support the system recovery and rescue process. In this PhD thesis, sensor networks are used to detect available radio resources in order to form a global view of the radio resource map, based on locally sensed and measured data. Data fusion and harvesting techniques are employed for the generation and maintenance of this “radio resource map.” In order to guarantee the flexibility and fault tolerance goals of disaster scenarios, a gossip protocol is used to exchange information. The radio propagation field knowledge is closely coupled to harvesting and fusion protocols in order to achieve efficient fusing of radio measurement data. For the evaluation, simulations will be used to measure the efficiency and robustness in relation to time critical applications and various deployment densities. Actual radio data measurements within the Ilmenau area are being collected for further analysis of the map quality and in order to verify simulation results. Dynamic Social Grouping Based Routing in a Mobile Ad-Hoc Network Roy Cabaniss*, Missouri S&T Abstract: Trotta, University of Missouri, Kansas City, Srinivasa Vulli, Missouri University S&T The patterns of movement used by Mobile Ad-Hoc networks are application specific, in the sense that networks use nodes which travel in different paths. When these nodes are used in experiments involving social patterns, such as wildlife tracking, algorithms which detect and use these patterns can be used to improve routing efficiency. The intent of this paper is to introduce a routing algorithm which forms a series of social groups which accurately indicate a node’s regular contact patterns while dynamically shifting to represent changes to the social environment. With the social groups formed, a probabilistic routing schema is used to effectively identify which social groups have consistent contact with the base station, and route accordingly. The algorithm can be implemented dynamically, in the sense that the nodes initially have no awareness of their environment, and works to reduce overhead and message traffic while maintaining high delivery ratio. Trotta, University of Missouri, Kansas City, Srinivasa Vulli, Missouri University S&T The patterns of movement used by Mobile Ad-Hoc networks are application specific, in the sense that networks use nodes which travel in different paths. When these nodes are used in experiments involving social patterns, such as wildlife tracking, algorithms which detect and use these patterns can be used to improve routing efficiency. The intent of this paper is to introduce a routing algorithm which forms a series of social groups which accurately indicate a node’s regular contact patterns while dynamically shifting to represent changes to the social environment. With the social groups formed, a probabilistic routing schema is used to effectively identify which social groups have consistent contact with the base station, and route accordingly. The algorithm can be implemented dynamically, in the sense that the nodes initially have no awareness of their environment, and works to reduce overhead and message traffic while maintaining high delivery ratio. MobileSOA framework for Context-Aware Mobile Applications Aaratee Shrestha*, University of Leipzig Abstract: Mobile application development is more challenging when context-awareness is taken into account. 
This research introduces the benefit of implementing a Mobile Service Oriented Architecture (SOA). A robust mobile SOA framework is designed for building and operating lightweight and flexible Context-Aware Mobile Application (CAMA). We develop a lightweight and flexible CAMA to show dynamic integration of the systems, where all operations run smoothly in response to the rapidly changing environment using local and remote services. Keywords-service oriented architecture (SOA); mobile service; context-awareness; contextaware mobile application (CAMA). Mobile application development is more challenging when context-awareness is taken into account. This research introduces the benefit of implementing a Mobile Service Oriented Architecture (SOA). A robust mobile SOA framework is designed for building and operating lightweight and flexible Context-Aware Mobile Application (CAMA). We develop a lightweight and flexible CAMA to show dynamic integration of the systems, where all operations run smoothly in response to the rapidly changing environment using local and remote services. Keywords-service oriented architecture (SOA); mobile service; context-awareness; contextaware mobile application (CAMA). Performance Analysis of Secure Hierarchical Data Aggregation in Wireless Sensor Networks Vimal Kumar*, Missouri S&T Abstract: Data aggregation is a technique used to conserve battery power in wireless sensor networks (WSN). While providing security in such a scenario it is also important that we minimize the number of security operations as they are computationally expensive, without compromising on the security. In this paper we evaluate the performance of such an end to end security algorithm. We provide our results from the implementation of the algorithm on mica2 motes and conclude how it is better than traditional hop by hop security. Data aggregation is a technique used to conserve battery power in wireless sensor networks (WSN). While providing security in such a scenario it is also important that we minimize the number of security operations as they are computationally expensive, without compromising on the security. In this paper we evaluate the performance of such an end to end security algorithm. We provide our results from the implementation of the algorithm on mica2 motes and conclude how it is better than traditional hop by hop security. A Communication Efficient Framework for Finding Outliers in Wireless Sensor Networks Dylan McDonald*, MS&T Abstract: Outlier detection is a well studied problem in various fields. The unique challenges of wireless sensor networks make this problem especially challenging. Sensors can detect outliers for a plethora of reasons and these reasons need to be inferred in real time. Here, we present a new communication technique to find outliers in a wireless sensor network. Communication is minimized through controlling sensor when sensors are allowed to communicate. At the same time, minimal assumptions are made about the nature of the data set as to ",
"title": ""
},
{
"docid": "15486c4dc2dfc0f2f5ccfc0cf6197af4",
"text": "Nostalgia is a frequently experienced complex emotion, understood by laypersons in the United Kingdom and United States of America to (a) refer prototypically to fond, self-relevant, social memories and (b) be more pleasant (e.g., happy, warm) than unpleasant (e.g., sad, regretful). This research examined whether people across cultures conceive of nostalgia in the same way. Students in 18 countries across 5 continents (N = 1,704) rated the prototypicality of 35 features of nostalgia. The samples showed high levels of agreement on the rank-order of features. In all countries, participants rated previously identified central (vs. peripheral) features as more prototypical of nostalgia, and showed greater interindividual agreement regarding central (vs. peripheral) features. Cluster analyses revealed subtle variation among groups of countries with respect to the strength of these pancultural patterns. All except African countries manifested the same factor structure of nostalgia features. Additional exemplars generated by participants in an open-ended format did not entail elaboration of the existing set of 35 features. Findings identified key points of cross-cultural agreement regarding conceptions of nostalgia, supporting the notion that nostalgia is a pancultural emotion.",
"title": ""
},
{
"docid": "dc2d5f9bfe41246ae9883aa6c0537c40",
"text": "Phosphatidylinositol 3-kinases (PI3Ks) are crucial coordinators of intracellular signalling in response to extracellular stimuli. Hyperactivation of PI3K signalling cascades is one of the most common events in human cancers. In this Review, we discuss recent advances in our knowledge of the roles of specific PI3K isoforms in normal and oncogenic signalling, the different ways in which PI3K can be upregulated, and the current state and future potential of targeting this pathway in the clinic.",
"title": ""
},
{
"docid": "5b763dbb9f06ff67e44b5d38920e92bf",
"text": "With the growing popularity of the internet, everything is available at our doorstep and convenience. The rapid increase in e-commerce applications has resulted in the increased usage of the credit card for offline and online payments. Though there are various benefits of using credit cards such as convenience, instant cash, but when it comes to security credit card holders, banks, and the merchants are affected when the card is being stolen, lost or misused without the knowledge of the cardholder (Fraud activity). Streaming analytics is a time-based processing of data and it is used to enable near real-time decision making by inspecting, correlating and analyzing the data even as it is streaming into applications and database from myriad different sources. We are making use of streaming analytics to detect and prevent the credit card fraud. Rather than singling out specific transactions, our solution analyses the historical transaction data to model a system that can detect fraudulent patterns. This model is then used to analyze transactions in real-time.",
"title": ""
},
{
"docid": "fd5b9187c6720c3408b5c2324b03905d",
"text": "Recent anchor-based deep face detectors have achieved promising performance, but they are still struggling to detect hard faces, such as small, blurred and partially occluded faces. A reason is that they treat all images and faces equally, without putting more effort on hard ones; however, many training images only contain easy faces, which are less helpful to achieve better performance on hard images. In this paper, we propose that the robustness of a face detector against hard faces can be improved by learning small faces on hard images. Our intuitions are (1) hard images are the images which contain at least one hard face, thus they facilitate training robust face detectors; (2) most hard faces are small faces and other types of hard faces can be easily converted to small faces by shrinking. We build an anchor-based deep face detector, which only output a single feature map with small anchors, to specifically learn small faces and train it by a novel hard image mining strategy. Extensive experiments have been conducted on WIDER FACE, FDDB, Pascal Faces, and AFW datasets to show the effectiveness of our method. Our method achieves APs of 95.7, 94.9 and 89.7 on easy, medium and hard WIDER FACE val dataset respectively, which surpass the previous state-of-the-arts, especially on the hard subset. Code and model are available at https://github.com/bairdzhang/smallhardface.",
"title": ""
},
{
"docid": "221b5ba25bff2522ab3ca65ffc94723f",
"text": "This paper describes the design and implementation of HERD, a key-value system designed to make the best use of an RDMA network. Unlike prior RDMA-based key-value systems, HERD focuses its design on reducing network round trips while using efficient RDMA primitives; the result is substantially lower latency, and throughput that saturates modern, commodity RDMA hardware.\n HERD has two unconventional decisions: First, it does not use RDMA reads, despite the allure of operations that bypass the remote CPU entirely. Second, it uses a mix of RDMA and messaging verbs, despite the conventional wisdom that the messaging primitives are slow. A HERD client writes its request into the server's memory; the server computes the reply. This design uses a single round trip for all requests and supports up to 26 million key-value operations per second with 5μs average latency. Notably, for small key-value items, our full system throughput is similar to native RDMA read throughput and is over 2X higher than recent RDMA-based key-value systems. We believe that HERD further serves as an effective template for the construction of RDMA-based datacenter services.",
"title": ""
},
{
"docid": "a6bc752bd6a4fc070fa01a5322fb30a1",
"text": "The formulation of a generalized area-based confusion matrix for exploring the accuracy of area estimates is presented. The generalized confusion matrix is appropriate for both traditional classi cation algorithms and sub-pixel area estimation models. An error matrix, derived from the generalized confusion matrix, allows the accuracy of maps generated using area estimation models to be assessed quantitatively and compared to the accuracies obtained from traditional classi cation techniques. The application of this approach is demonstrated for an area estimation model applied to Landsat data of an urban area of the United Kingdom.",
"title": ""
},
{
"docid": "449f984469b40fe10f7a2e0e3a359d1d",
"text": "The correlation of phenotypic outcomes with genetic variation and environmental factors is a core pursuit in biology and biomedicine. Numerous challenges impede our progress: patient phenotypes may not match known diseases, candidate variants may be in genes that have not been characterized, model organisms may not recapitulate human or veterinary diseases, filling evolutionary gaps is difficult, and many resources must be queried to find potentially significant genotype-phenotype associations. Non-human organisms have proven instrumental in revealing biological mechanisms. Advanced informatics tools can identify phenotypically relevant disease models in research and diagnostic contexts. Large-scale integration of model organism and clinical research data can provide a breadth of knowledge not available from individual sources and can provide contextualization of data back to these sources. The Monarch Initiative (monarchinitiative.org) is a collaborative, open science effort that aims to semantically integrate genotype-phenotype data from many species and sources in order to support precision medicine, disease modeling, and mechanistic exploration. Our integrated knowledge graph, analytic tools, and web services enable diverse users to explore relationships between phenotypes and genotypes across species.",
"title": ""
},
{
"docid": "e0ec608baa5af1c35672efbccbd618df",
"text": "The “similarity-attraction” effect stands as one of the most well-known findings in social psychology. However, some research contends that perceived but not actual similarity influences attraction. The current study is the first to examine the effects of actual and perceived similarity simultaneously during a face-to-face initial romantic encounter. Participants attending a speed-dating event interacted with ∼12 members of the opposite sex for 4 min each. Actual and perceived similarity for each pair were calculated from questionnaire responses assessed before the event and after each date. Data revealed that perceived, but not actual, similarity significantly predicted romantic liking in this speed-dating context. Furthermore, perceived similarity was a far weaker predictor of attraction when assessed using specific traits rather than generally. Over the past 60 years, researchers have examined thoroughly the role that similarity between partners plays in predicting interpersonal attraction. Until recently, the general consensus has been that participants report stronger attraction to objectively similar others (i.e., actual similarity) than to those with whom they share fewer traits, beliefs, and/or attitudes. The similarity-attraction effect, commonly dubbed “Byrne’s law of attraction” or “Byrne’s law of similarity,” is a central Natasha D. Tidwell, Department of Psychology, Texas A&M University; Paul W. Eastwick, Department of Psychology, Texas A&M University; Eli J. Finkel, Department of Psychology, Northwestern University. We thank Jacob Matthews for his masterful programming of the Northwestern Speed-dating Study and the Northwestern Speed-Dating Team for conducting the studies themselves. We also thank David Kenny for his assistance with the social relations model analyses. Correspondence should be addressed to Natasha D. Tidwell, Texas A&M University, Department of Psychology, 4235 TAMU, College Station, TX 778434235, e-mail: [email protected] or Paul W. Eastwick, Texas A&M University, Department of Psychology, 4235 TAMU, College Station, TX 77843-4235, e-mail: [email protected]. feature of textbook reviews of attraction and relationship initiation.1 Research on the actual similarity-attraction effect has most frequently examined similarity of attitudes, finding that participants are more likely to become attracted to a stranger with whom they share many common attitudes than to one with whom they share few (Byrne, 1961; Byrne, Ervin, & Lamberth, 1970). Scholars have also found that actual similarity of personality traits predicts initial attraction, but the results are not as robust as those for attitude similarity (Klohnen & Luo, 2003). Furthermore, some research has suggested that actual similarity in external qualities (e.g., age, hairstyle) is more predictive of 1. Researchers have also found that actual similarity predicts satisfaction and stability in existing relationships (e.g., Gaunt, 2006; Luo et al., 2008; Luo & Klohnen, 2005), suggesting that Byrne’s law of attraction may extend well beyond initial attraction per se. Although we review prior work on similarity in both initial attraction and established relationship contexts below, the present data specifically examine the association between similarity and attraction in an initial face-toface encounter.",
"title": ""
}
] | scidocsrr |
811485a5cf46d72e029480ba51b2cbbe | Determining the Chemical Compositions of Garlic Plant and its Existing Active Element | [
{
"docid": "85e63b1689e6fd77cdfc1db191ba78ee",
"text": "Singh VK, Singh DK. Pharmacological Effects of Garlic (Allium sativum L.). ARBS Annu Rev Biomed Sci 2008;10:6-26. Garlic (Allium sativum L.) is a bulbous herb used as a food item, spice and medicine in different parts of the world. Its medicinal use is based on traditional experience passed from generation to generation. Researchers from various disciplines are now directing their efforts towards discovering the effects of garlic on human health. Interest in garlic among researchers, particularly those in medical profession, has stemmed from the search for a drug that has a broad-spectrum therapeutic effect with minimal toxicity. Recent studies indicate that garlic extract has antimicrobial activity against many genera of bacteria, fungi and viruses. The role of garlic in preventing cardiovascular disease has been acclaimed by several authors. Chemical constituents of garlic have been investigated for treatment of hyperlipidemia, hypertension, platelet aggregation and blood fibrinolytic activity. Experimental data indicate that garlic may have anticarcinogenic effect. Recent researches in the area of pest control show that garlic has strong insecticidal, nematicidal, rodenticidal and molluscicidal activity. Despite field trials and laboratory experiments on the pesticidal activity of garlic have been conducted, more studies on the way of delivery in environment and mode of action are still recommended for effective control of pest. Adverse effects of oral ingestion and topical exposure of garlic include body odor, allergic reactions, acceleration in the effects of anticoagulants and reduction in the efficacy of anti-AIDS drug Saquinavir. ©by São Paulo State University ISSN 1806-8774",
"title": ""
}
] | [
{
"docid": "2b3335d6fb1469c4848a201115a78e2c",
"text": "Laser grooving is used for the singulation of advanced CMOS wafers since it is believed that it exerts lower mechanical stress than traditional blade dicing. The very local heating of wafers, however, might result in high thermal stress around the heat affected zone. In this work we present a model to predict the temperature distribution, material removal, and the resulting stress, in a sandwiched structure of metals and dielectric materials that are commonly found in the back-end of line of semiconductor wafers. Simulation results on realistic three dimensional back-end structures reveal that the presence of metals clearly affects both the ablation depth, and the stress in the material. Experiments showed a similar observation for the ablation depth. The shape of the crater, however, was found to be more uniform than predicted by simulations, which is probably due to the redistribution of molten metal.",
"title": ""
},
{
"docid": "e273298153872073e463662b5d6d8931",
"text": "The lack of readily-available large corpora of aligned monolingual sentence pairs is a major obstacle to the development of Statistical Machine Translation-based paraphrase models. In this paper, we describe the use of annotated datasets and Support Vector Machines to induce larger monolingual paraphrase corpora from a comparable corpus of news clusters found on the World Wide Web. Features include: morphological variants; WordNet synonyms and hypernyms; loglikelihood-based word pairings dynamically obtained from baseline sentence alignments; and formal string features such as word-based edit distance. Use of this technique dramatically reduces the Alignment Error Rate of the extracted corpora over heuristic methods based on position of the sentences in the text.",
"title": ""
},
{
"docid": "52c9ee7e057ff9ade5daf44ea713e889",
"text": "In this work, we present a novel peak-piloted deep network (PPDN) that uses a sample with peak expression (easy sample) to supervise the intermediate feature responses for a sample of non-peak expression (hard sample) of the same type and from the same subject. The expression evolving process from nonpeak expression to peak expression can thus be implicitly embedded in the network to achieve the invariance to expression intensities.",
"title": ""
},
{
"docid": "2a827e858bf93cd5edba7feb3c0448f9",
"text": "Kinetic analyses (joint moments, powers and work) of the lower limbs were performed during normal walking to determine what further information can be gained from a three-dimensional model over planar models. It was to be determined whether characteristic moment and power profiles exist in the frontal and transverse planes across subjects and how much work was performed in these planes. Kinetic profiles from nine subjects were derived using a three-dimensional inverse dynamics model of the lower limbs and power profiles were then calculated by a dot product of the angular velocities and joint moments resolved in a global reference system. Characteristic joint moment profiles across subjects were found for the hip, knee and ankle joints in all planes except for the ankle frontal moment. As expected, the major portion of work was performed in the plane of progression since the goal of locomotion is to support the body against gravity while generating movements which propel the body forward. However, the results also showed that substantial work was done in the frontal plane by the hip during walking (23% of the total work at that joint). The characteristic joint profiles suggest defined motor patterns and functional roles in the frontal and transverse planes. Kinetic analysis in three dimensions is necessary particularly if the hip joint is being examined as a substantial amount of work was done in the frontal plane of the hip to control the pelvis and trunk against gravitational forces.",
"title": ""
},
{
"docid": "34bd41f7384d6ee4d882a39aec167b3e",
"text": "This paper presents a robust feedback controller for ball and beam system (BBS). The BBS is a nonlinear system in which a ball has to be balanced on a particular beam position. The proposed nonlinear controller designed for the BBS is based upon Backstepping control technique which guarantees the boundedness of tracking error. To tackle the unknown disturbances, an external disturbance estimator (EDE) has been employed. The stability analysis of the overall closed loop robust control system has been worked out in the sense of Lyapunov theory. Finally, the simulation studies have been done to demonstrate the suitability of proposed scheme.",
"title": ""
},
{
"docid": "4a837ccd9e392f8c7682446d9a3a3743",
"text": "This paper investigates the applicability of Genetic Programming type systems to dynamic game environments. Grammatical Evolution was used to evolve Behaviour Trees, in order to create controllers for the Mario AI Benchmark. The results obtained reinforce the applicability of evolutionary programming systems to the development of artificial intelligence in games, and in dynamic systems in general, illustrating their viability as an alternative to more standard AI techniques.",
"title": ""
},
{
"docid": "d563b025b084b53c30afba4211870f2d",
"text": "Collaborative filtering (CF) techniques recommend items to users based on their historical ratings. In real-world scenarios, user interests may drift over time since they are affected by moods, contexts, and pop culture trends. This leads to the fact that a user’s historical ratings comprise many aspects of user interests spanning a long time period. However, at a certain time slice, one user’s interest may only focus on one or a couple of aspects. Thus, CF techniques based on the entire historical ratings may recommend inappropriate items. In this paper, we consider modeling user-interest drift over time based on the assumption that each user has multiple counterparts over temporal domains and successive counterparts are closely related. We adopt the cross-domain CF framework to share the static group-level rating matrix across temporal domains, and let user-interest distribution over item groups drift slightly between successive temporal domains. The derived method is based on a Bayesian latent factor model which can be inferred using Gibbs sampling. Our experimental results show that our method can achieve state-of-the-art recommendation performance as well as explicitly track and visualize user-interest drift over time.",
"title": ""
},
{
"docid": "5399b924cdf1d034a76811360b6c018d",
"text": "Psychological construction models of emotion state that emotions are variable concepts constructed by fundamental psychological processes, whereas according to basic emotion theory, emotions cannot be divided into more fundamental units and each basic emotion is represented by a unique and innate neural circuitry. In a previous study, we found evidence for the psychological construction account by showing that several brain regions were commonly activated when perceiving different emotions (i.e. a general emotion network). Moreover, this set of brain regions included areas associated with core affect, conceptualization and executive control, as predicted by psychological construction models. Here we investigate directed functional brain connectivity in the same dataset to address two questions: 1) is there a common pathway within the general emotion network for the perception of different emotions and 2) if so, does this common pathway contain information to distinguish between different emotions? We used generalized psychophysiological interactions and information flow indices to examine the connectivity within the general emotion network. The results revealed a general emotion pathway that connects neural nodes involved in core affect, conceptualization, language and executive control. Perception of different emotions could not be accurately classified based on the connectivity patterns from the nodes of the general emotion pathway. Successful classification was achieved when connections outside the general emotion pathway were included. We propose that the general emotion pathway functions as a common pathway within the general emotion network and is involved in shared basic psychological processes across emotions. However, additional connections within the general emotion network are required to classify different emotions, consistent with a constructionist account.",
"title": ""
},
{
"docid": "485b48bb7b489d2be73de84994a16e42",
"text": "This paper presents Conflux, a fast, scalable and decentralized blockchain system that optimistically process concurrent blocks without discarding any as forks. The Conflux consensus protocol represents relationships between blocks as a direct acyclic graph and achieves consensus on a total order of the blocks. Conflux then, from the block order, deterministically derives a transaction total order as the blockchain ledger. We evaluated Conflux on Amazon EC2 clusters with up to 20k full nodes. Conflux achieves a transaction throughput of 5.76GB/h while confirming transactions in 4.5-7.4 minutes. The throughput is equivalent to 6400 transactions per second for typical Bitcoin transactions. Our results also indicate that when running Conflux, the consensus protocol is no longer the throughput bottleneck. The bottleneck is instead at the processing capability of individual nodes.",
"title": ""
},
{
"docid": "73e398a5ae434dbd2a10ddccd2cfb813",
"text": "Face alignment aims to estimate the locations of a set of landmarks for a given image. This problem has received much attention as evidenced by the recent advancement in both the methodology and performance. However, most of the existing works neither explicitly handle face images with arbitrary poses, nor perform large-scale experiments on non-frontal and profile face images. In order to address these limitations, this paper proposes a novel face alignment algorithm that estimates both 2D and 3D landmarks and their 2D visibilities for a face image with an arbitrary pose. By integrating a 3D point distribution model, a cascaded coupled-regressor approach is designed to estimate both the camera projection matrix and the 3D landmarks. Furthermore, the 3D model also allows us to automatically estimate the 2D landmark visibilities via surface normal. We use a substantially larger collection of all-pose face images to evaluate our algorithm and demonstrate superior performances than the state-of-the-art methods.",
"title": ""
},
{
"docid": "e7b7c37a340b4a22dddff59fc6651218",
"text": "Different types of printing methods have recently attracted interest as emerging technologies for fabrication of drug delivery systems. If printing is combined with different oral film manufacturing technologies such as solvent casting and other techniques, multifunctional structures can be created to enable further complexity and high level of sophistication. This review paper intends to provide profound understanding and future perspectives for the potential use of printing technologies in the preparation of oral film formulations as novel drug delivery systems. The described concepts include advanced multi-layer coatings, stacked systems, and integrated bioactive multi-compartments, which comprise of integrated combinations of diverse materials to form sophisticated bio-functional constructs. The advanced systems enable tailored dosing for individual drug therapy, easy and safe manufacturing of high-potent drugs, development and manufacturing of fixed-dose combinations and product tracking for anti-counterfeiting strategies.",
"title": ""
},
{
"docid": "6082c0252dffe7903512e36f13da94eb",
"text": "Thousands of storage tanks in oil refineries have to be inspected manually to prevent leakage and/or any other potential catastrophe. A wall climbing robot with permanent magnet adhesion mechanism equipped with nondestructive sensor has been designed. The robot can be operated autonomously or manually. In autonomous mode the robot uses an ingenious coverage algorithm based on distance transform function to navigate itself over the tank surface in a back and forth motion to scan the external wall for the possible faults using sensors without any human intervention. In manual mode the robot can be navigated wirelessly from the ground station to any location of interest. Preliminary experiment has been carried out to test the prototype.",
"title": ""
},
{
"docid": "45a15455945fdd03ee726b285b8dd75a",
"text": "The nonequispaced Fourier transform arises in a variety of application areas, from medical imaging to radio astronomy to the numerical solution of partial differential equations. In a typical problem, one is given an irregular sampling of N data in the frequency domain and one is interested in reconstructing the corresponding function in the physical domain. When the sampling is uniform, the fast Fourier transform (FFT) allows this calculation to be computed in O(N logN) operations rather than O(N2) operations. Unfortunately, when the sampling is nonuniform, the FFT does not apply. Over the last few years, a number of algorithms have been developed to overcome this limitation and are often referred to as nonuniform FFTs (NUFFTs). These rely on a mixture of interpolation and the judicious use of the FFT on an oversampled grid [A. Dutt and V. Rokhlin, SIAM J. Sci. Comput., 14 (1993), pp. 1368–1383]. In this paper, we observe that one of the standard interpolation or “gridding” schemes, based on Gaussians, can be accelerated by a significant factor without precomputation and storage of the interpolation weights. This is of particular value in twoand threedimensional settings, saving either 10dN in storage in d dimensions or a factor of about 5–10 in CPU time (independent of dimension).",
"title": ""
},
{
"docid": "2f23d51ffd54a6502eea07883709d016",
"text": "Named entity recognition (NER) is a popular domain of natural language processing. For this reason, many tools exist to perform this task. Amongst other points, they differ in the processing method they rely upon, the entity types they can detect, the nature of the text they can handle, and their input/output formats. This makes it difficult for a user to select an appropriate NER tool for a specific situation. In this article, we try to answer this question in the context of biographic texts. For this matter, we first constitute a new corpus by annotating 247 Wikipedia articles. We then select 4 publicly available, well known and free for research NER tools for comparison: Stanford NER, Illinois NET, OpenCalais NER WS and Alias-i LingPipe. We apply them to our corpus, assess their performances and compare them. When considering overall performances, a clear hierarchy emerges: Stanford has the best results, followed by LingPipe, Illionois and OpenCalais. However, a more detailed evaluation performed relatively to entity types and article categories highlights the fact their performances are diversely influenced by those factors. This complementarity opens an interesting perspective regarding the combination of these individual tools in order to improve performance.",
"title": ""
},
{
"docid": "ce72785681a085be7f947ab6fa787b79",
"text": "A computationally implemented model of the transmission of linguistic behavior over time is presented. In this model [the iterated learning model (ILM)], there is no biological evolution, natural selection, nor any measurement of the success of the agents at communicating (except for results-gathering purposes). Nevertheless, counter to intuition, significant evolution of linguistic behavior is observed. From an initially unstructured communication system (a protolanguage), a fully compositional syntactic meaning-string mapping emerges. Furthermore, given a nonuniform frequency distribution over a meaning space and a production mechanism that prefers short strings, a realistic distribution of string lengths and patterns of stable irregularity emerges, suggesting that the ILM is a good model for the evolution of some of the fundamental features of human language.",
"title": ""
},
{
"docid": "7ba37f2dcf95f36727e1cd0f06e31cc0",
"text": "The neonate receiving parenteral nutrition (PN) therapy requires a physiologically appropriate solution in quantity and quality given according to a timely, cost-effective strategy. Maintaining tissue integrity, metabolism, and growth in a neonate is challenging. To support infant growth and influence subsequent development requires critical timing for nutrition assessment and intervention. Providing amino acids to neonates has been shown to improve nitrogen balance, glucose metabolism, and amino acid profiles. In contrast, supplying the lipid emulsions (currently available in the United States) to provide essential fatty acids is not the optimal composition to help attenuate inflammation. Recent investigations with an omega-3 fish oil IV emulsion are promising, but there is need for further research and development. Complications from PN, however, remain problematic and include infection, hepatic dysfunction, and cholestasis. These complications in the neonate can affect morbidity and mortality, thus emphasizing the preference to provide early enteral feedings, as well as medication therapy to improve liver health and outcome. Potential strategies aimed at enhancing PN therapy in the neonate are highlighted in this review, and a summary of guidelines for practical management is included.",
"title": ""
},
{
"docid": "343115505ad21c973475c12c3657d82c",
"text": "New transportation fuels are badly needed to reduce our heavy dependence on imported oil and to reduce the release of greenhouse gases that cause global climate change; cellulosic biomass is the only inexpensive resource that can be used for sustainable production of the large volumes of liquid fuels that our transportation sector has historically favored. Furthermore, biological conversion of cellulosic biomass can take advantage of the power of biotechnology to take huge strides toward making biofuels cost competitive. Ethanol production is particularly well suited to marrying this combination of need, resource, and technology. In fact, major advances have already been realized to competitively position cellulosic ethanol with corn ethanol. However, although biotechno logy presents important opportunities to achieve very low costs, pretreatment of naturally resistant cellulosic mate rials is essential if we are to achieve high yields from biological operations; this operation is projected to be the single, most expensive processing step, representing about 20% of the total cost. In addition, pretreatment has pervasive impacts on all other major operations in the overall conversion scheme from choice of feedstock through to size reduction, hydrolysis, and fermentation, and on to product recovery, residue processing, and co-product potential. A number of different pretreatments involving biological, chemical, physical, and thermal approaches have been investigated over the years, but only those that employ chemicals currently offer the high yields and low costs vital to economic success. Among the most promising are pretreatments using dilute acid, sulfur dioxide, near-neutral pH control, ammonia expansion, aqueous ammonia, and lime, with signifi cant differences among the sugar-release patterns. Although projected costs for these options are similar when applied to corn stover, a key need now is to dramatically improve our knowledge of these systems with the goal of advancing pretreatment to substantially reduce costs and to accelerate commercial applications. © 2007 Society of Chemical Industry and John Wiley & Sons, Ltd",
"title": ""
},
{
"docid": "0cccb226bb72be281ead8c614bd46293",
"text": "We introduce a model for incorporating contextual information (such as geography) in learning vector-space representations of situated language. In contrast to approaches to multimodal representation learning that have used properties of the object being described (such as its color), our model includes information about the subject (i.e., the speaker), allowing us to learn the contours of a word’s meaning that are shaped by the context in which it is uttered. In a quantitative evaluation on the task of judging geographically informed semantic similarity between representations learned from 1.1 billion words of geo-located tweets, our joint model outperforms comparable independent models that learn meaning in isolation.",
"title": ""
},
{
"docid": "33c5ddb4633cc09c87b8ee26d7c54e51",
"text": "INTRODUCTION\nAdvances in technology have revolutionized the medical field and changed the way healthcare is delivered. Unmanned aerial vehicles (UAVs) are the next wave of technological advancements that have the potential to make a huge splash in clinical medicine. UAVs, originally developed for military use, are making their way into the public and private sector. Because they can be flown autonomously and can reach almost any geographical location, the significance of UAVs are becoming increasingly apparent in the medical field.\n\n\nMATERIALS AND METHODS\nWe conducted a comprehensive review of the English language literature via the PubMed and Google Scholar databases using search terms \"unmanned aerial vehicles,\" \"UAVs,\" and \"drone.\" Preference was given to clinical trials and review articles that addressed the keywords and clinical medicine.\n\n\nRESULTS\nPotential applications of UAVs in medicine are broad. Based on articles identified, we grouped UAV application in medicine into three categories: (1) Prehospital Emergency Care; (2) Expediting Laboratory Diagnostic Testing; and (3) Surveillance. Currently, UAVs have been shown to deliver vaccines, automated external defibrillators, and hematological products. In addition, they are also being studied in the identification of mosquito habitats as well as drowning victims at beaches as a public health surveillance modality.\n\n\nCONCLUSIONS\nThese preliminary studies shine light on the possibility that UAVs may help to increase access to healthcare for patients who may be otherwise restricted from proper care due to cost, distance, or infrastructure. As with any emerging technology and due to the highly regulated healthcare environment, the safety and effectiveness of this technology need to be thoroughly discussed. Despite the many questions that need to be answered, the application of drones in medicine appears to be promising and can both increase the quality and accessibility of healthcare.",
"title": ""
}
] | scidocsrr |
4e25e3351ec840be9252a4cfb9808083 | The Intentional Unintentional Agent: Learning to Solve Many Continuous Control Tasks Simultaneously | [
{
"docid": "9bcba1b3d4e63c026d1bd16bfd2c8d7b",
"text": "Developmental robotics is an emerging field located at the intersection of robotics, cognitive science and developmental sciences. This paper elucidates the main reasons and key motivations behind the convergence of fields with seemingly disparate interests, and shows why developmental robotics might prove to be beneficial for all fields involved. The methodology advocated is synthetic and two-pronged: on the one hand, it employs robots to instantiate models originating from developmental sciences; on the other hand, it aims to develop better robotic systems by exploiting insights gained from studies on ontogenetic development. This paper gives a survey of the relevant research issues and points to some future research directions. 1. Introduction Developmental robotics is an emergent area of research at the intersection of robotics and developmental sciences—in particular developmental psychology and developmental neuroscience. It constitutes an interdisciplinary and two-pronged approach to robotics, which on one side employs robots to instantiate and investigate models originating from developmental sciences, and on the other side seeks to design better robotic systems by applying insights gained from studies on ontogenetic development.",
"title": ""
},
{
"docid": "37e82a54df827ddcfdb71fef7c12a47b",
"text": "We tackle a task where an agent learns to navigate in a 2D maze-like environment called XWORLD. In each session, the agent perceives a sequence of raw-pixel frames, a natural language command issued by a teacher, and a set of rewards. The agent learns the teacher’s language from scratch in a grounded and compositional manner, such that after training it is able to correctly execute zero-shot commands: 1) the combination of words in the command never appeared before, and/or 2) the command contains new object concepts that are learned from another task but never learned from navigation. Our deep framework for the agent is trained end to end: it learns simultaneously the visual representations of the environment, the syntax and semantics of the language, and the action module that outputs actions. The zero-shot learning capability of our framework results from its compositionality and modularity with parameter tying. We visualize the intermediate outputs of the framework, demonstrating that the agent truly understands how to solve the problem. We believe that our results provide some preliminary insights on how to train an agent with similar abilities in a 3D environment.",
"title": ""
},
{
"docid": "21abc097d58698c5eae1cddab9bf884e",
"text": "Advances in deep reinforcement learning have allowed autonomous agents to perform well on Atari games, often outperforming humans, using only raw pixels to make their decisions. However, most of these games take place in 2D environments that are fully observable to the agent. In this paper, we present the first architecture to tackle 3D environments in first-person shooter games, that involve partially observable states. Typically, deep reinforcement learning methods only utilize visual input for training. We present a method to augment these models to exploit game feature information such as the presence of enemies or items, during the training phase. Our model is trained to simultaneously learn these features along with minimizing a Q-learning objective, which is shown to dramatically improve the training speed and performance of our agent. Our architecture is also modularized to allow different models to be independently trained for different phases of the game. We show that the proposed architecture substantially outperforms built-in AI agents of the game as well as average humans in deathmatch scenarios.",
"title": ""
},
{
"docid": "955ae6e1dffbe580217b812f943b4339",
"text": "Successful applications of reinforcement learning in realworld problems often require dealing with partially observable states. It is in general very challenging to construct and infer hidden states as they often depend on the agent’s entire interaction history and may require substantial domain knowledge. In this work, we investigate a deep-learning approach to learning the representation of states in partially observable tasks, with minimal prior knowledge of the domain. In particular, we study reinforcement learning with deep neural networks, including RNN and LSTM, which are equipped with the desired property of being able to capture long-term dependency on history, and thus providing an effective way of learning the representation of hidden states. We further develop a hybrid approach that combines the strength of both supervised learning (for representing hidden states) and reinforcement learning (for optimizing control) with joint training. Extensive experiments based on a KDD Cup 1998 direct mailing campaign problem demonstrate the effectiveness and advantages of the proposed approach, which performs the best across the board.",
"title": ""
},
{
"docid": "9ec7b122117acf691f3bee6105deeb81",
"text": "We describe a new physics engine tailored to model-based control. Multi-joint dynamics are represented in generalized coordinates and computed via recursive algorithms. Contact responses are computed via efficient new algorithms we have developed, based on the modern velocity-stepping approach which avoids the difficulties with spring-dampers. Models are specified using either a high-level C++ API or an intuitive XML file format. A built-in compiler transforms the user model into an optimized data structure used for runtime computation. The engine can compute both forward and inverse dynamics. The latter are well-defined even in the presence of contacts and equality constraints. The model can include tendon wrapping as well as actuator activation states (e.g. pneumatic cylinders or muscles). To facilitate optimal control applications and in particular sampling and finite differencing, the dynamics can be evaluated for different states and controls in parallel. Around 400,000 dynamics evaluations per second are possible on a 12-core machine, for a 3D homanoid with 18 dofs and 6 active contacts. We have already used the engine in a number of control applications. It will soon be made publicly available.",
"title": ""
},
{
"docid": "033ee0637607fec8ae1b5834efe355dc",
"text": "We propose a new task-specification language for Markov decision processes that is designed to be an improvement over reward functions by being environment independent. The language is a variant of Linear Temporal Logic (LTL) that is extended to probabilistic specifications in a way that permits approximations to be learned in finite time. We provide several small environments that demonstrate the advantages of our geometric LTL (GLTL) language and illustrate how it can be used to specify standard reinforcementlearning tasks straightforwardly.",
"title": ""
}
] | [
{
"docid": "d422afa99137d5e09bd47edeb770e872",
"text": "OBJECTIVE\nFood Insecurity (FI) occurs in 21% of families with children and adolescents in the United States, but the potential developmental and behavioral implications of this prevalent social determinant of health have not been comprehensively elucidated. This systematic review aims to examine the association between FI and childhood developmental and behavioral outcomes in western industrialized countries.\n\n\nMETHOD\nThis review provides a critical summary of 23 peer reviewed articles from developed countries on the associations between FI and adverse childhood developmental behavioral outcomes including early cognitive development, academic performance, inattention, externalizing behaviors, and depression in 4 groups-infants and toddlers, preschoolers, school age, and adolescents. Various approaches to measuring food insecurity are delineated. Potential confounding and mediating variables of this association are compared across studies. Alternate explanatory mechanisms of observed effects and need for further research are discussed.\n\n\nRESULTS\nThis review demonstrates that household FI, even at marginal levels, is associated with children's behavioral, academic, and emotional problems from infancy to adolescence across western industrialized countries - even after controlling for confounders.\n\n\nCONCLUSIONS\nWhile the American Academy of Pediatrics already recommends routine screening for food insecurity during health maintenance visits, the evidence summarized here should encourage developmental behavioral health providers to screen for food insecurity in their practices and intervene when possible. Conversely, children whose families are identified as food insecure in primary care settings warrant enhanced developmental behavioral assessment and possible intervention.",
"title": ""
},
{
"docid": "31e3fddcaeb7e4984ba140cb30ff49bf",
"text": "We show that a maximum-weight triangle in an undirected graph with n vertices and real weights assigned to vertices can be found in time O(nω + n2+o(1)), where ω is the exponent of the fastest matrix multiplication algorithm. By the currently best bound on ω, the running time of our algorithm is O(n2.376). Our algorithm substantially improves the previous time-bounds for this problem, and its asymptotic time complexity matches that of the fastest known algorithm for finding any triangle (not necessarily a maximum-weight one) in a graph. We can extend our algorithm to improve the upper bounds on finding a maximum-weight triangle in a sparse graph and on finding a maximum-weight subgraph isomorphic to a fixed graph. We can find a maximum-weight triangle in a vertex-weighted graph with m edges in asymptotic time required by the fastest algorithm for finding any triangle in a graph with m edges, i.e., in time O(m1.41). Our algorithms for a maximum-weight fixed subgraph (in particular any clique of constant size) are asymptotically as fast as the fastest known algorithms for a fixed subgraph.",
"title": ""
},
{
"docid": "2e964b14ff4e45e3f1c339d7247a50d0",
"text": "We report a method to additively build threedimensional (3-D) microelectromechanical systems (MEMS) and electrical circuitry by ink-jet printing nanoparticle metal colloids. Fabricating metallic structures from nanoparticles avoids the extreme processing conditions required for standard lithographic fabrication and molten-metal-droplet deposition. Nanoparticles typically measure 1 to 100 nm in diameter and can be sintered at plastic-compatible temperatures as low as 300 C to form material nearly indistinguishable from the bulk material. Multiple ink-jet print heads mounted to a computer-controlled 3-axis gantry deposit the 10% by weight metal colloid ink layer-by-layer onto a heated substrate to make two-dimensional (2-D) and 3-D structures. We report a high-Q resonant inductive coil, linear and rotary electrostatic-drive motors, and in-plane and vertical electrothermal actuators. The devices, printed in minutes with a 100 m feature size, were made out of silver and gold material with high conductivity,and feature as many as 400 layers, insulators, 10 : 1 vertical aspect ratios, and etch-released mechanical structure. These results suggest a route to a desktop or large-area MEMS fabrication system characterized by many layers, low cost, and data-driven fabrication for rapid turn-around time, and represent the first use of ink-jet printing to build active MEMS. [657]",
"title": ""
},
{
"docid": "31da7acfb9d98421bbf7e70a508ba5df",
"text": "Habronema muscae (Spirurida: Habronematidae) occurs in the stomach of equids, is transmitted by adult muscid dipterans and causes gastric habronemiasis. Scanning electron microscopy (SEM) was used to study the morphological aspects of adult worms of this nematode in detail. The worms possess two trilobed lateral lips. The buccal cavity was cylindrical, with thick walls and without teeth. Around the mouth, four submedian cephalic papillae and two amphids were seen. A pair of lateral cervical papillae was present. There was a single lateral ala and in the female the vulva was situated in the middle of the body. In the male, there were wide caudal alae, and the spicules were unequal and dissimilar. At the posterior end of the male, four pairs of stalked precloacal papillae, unpaired post-cloacal papillae and a cluster of small papillae were present. In one case, the anterior end showed abnormal features.",
"title": ""
},
{
"docid": "f9fd7fc57dfdfbfa6f21dc074c9e9daf",
"text": "Recently, Lin and Tsai proposed an image secret sharing scheme with steganography and authentication to prevent participants from the incidental or intentional provision of a false stego-image (an image containing the hidden secret image). However, dishonest participants can easily manipulate the stego-image for successful authentication but cannot recover the secret image, i.e., compromise the steganography. In this paper, we present a scheme to improve authentication ability that prevents dishonest participants from cheating. The proposed scheme also defines the arrangement of embedded bits to improve the quality of stego-image. Furthermore, by means of the Galois Field GF(2), we improve the scheme to a lossless version without additional pixels. 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "c6ef33607a015c4187ac77b18d903a8a",
"text": "OBJECTIVE\nA systematic review was conducted to identify effective intervention strategies for communication in individuals with Down syndrome.\n\n\nMETHODS\nWe updated and extended previous reviews by examining: (1) participant characteristics; (2) study characteristics; (3) characteristics of effective interventions (e.g., strategies and intensity); (4) whether interventions are tailored to the Down syndrome behavior phenotype; and (5) the effectiveness (i.e., percentage nonoverlapping data and Cohen's d) of interventions.\n\n\nRESULTS\nThirty-seven studies met inclusion criteria. The majority of studies used behavior analytic strategies and produced moderate gains in communication targets. Few interventions were tailored to the needs of the Down syndrome behavior phenotype.\n\n\nCONCLUSION\nThe results suggest that behavior analytic strategies are a promising approach, and future research should focus on replicating the effects of these interventions with greater methodological rigor.",
"title": ""
},
{
"docid": "d64c30da6f8d94ca4effd83075b15901",
"text": "The task of natural question generation is to generate a corresponding question given the input passage (fact) and answer. It is useful for enlarging the training set of QA systems. Previous work has adopted sequence-to-sequence models that take a passage with an additional bit to indicate answer position as input. However, they do not explicitly model the information between answer and other context within the passage. We propose a model that matches the answer with the passage before generating the question. Experiments show that our model outperforms the existing state of the art using rich features.",
"title": ""
},
{
"docid": "71b6f02598ac24efbc4625ca060f1bae",
"text": "Estimates of the worldwide incidence and mortality from 27 cancers in 2008 have been prepared for 182 countries as part of the GLOBOCAN series published by the International Agency for Research on Cancer. In this article, we present the results for 20 world regions, summarizing the global patterns for the eight most common cancers. Overall, an estimated 12.7 million new cancer cases and 7.6 million cancer deaths occur in 2008, with 56% of new cancer cases and 63% of the cancer deaths occurring in the less developed regions of the world. The most commonly diagnosed cancers worldwide are lung (1.61 million, 12.7% of the total), breast (1.38 million, 10.9%) and colorectal cancers (1.23 million, 9.7%). The most common causes of cancer death are lung cancer (1.38 million, 18.2% of the total), stomach cancer (738,000 deaths, 9.7%) and liver cancer (696,000 deaths, 9.2%). Cancer is neither rare anywhere in the world, nor mainly confined to high-resource countries. Striking differences in the patterns of cancer from region to region are observed.",
"title": ""
},
{
"docid": "c04db0f2e638d0f5aab528776895fdc3",
"text": "OBJECTIVE\nThis study is a detailed examination of the association between parental alcohol abuse (mother only, father only, or both parents) and multiple forms of childhood abuse, neglect, and other household dysfunction, known as adverse childhood experiences (ACEs).\n\n\nMETHOD\nA questionnaire about ACEs including child abuse, neglect, household dysfunction, and exposure to parental alcohol abuse was completed by 8629 adult HMO members to retrospectively assess the relationship of growing up with parental alcohol abuse to 10 ACEs and multiple ACEs (ACE score).\n\n\nRESULTS\nCompared to persons who grew up with no parental alcohol abuse, the adjusted odds ratio for each category of ACE was approximately 2 to 13 times higher if either the mother, father, or both parents abused alcohol (p < 0.05). For example, the likelihood of having a battered mother was increased 13-fold for men who grew up with both parents who abused alcohol (OR, 12.7; 95% CI: 8.4-19.1). For almost every ACE, those who grew up with both an alcohol-abusing mother and father had the highest likelihood of ACEs. The mean number of ACEs for persons with no parental alcohol abuse, father only, mother only, or both parents was 1.4, 2.6, 3.2, and 3.8, respectively (p < .001).\n\n\nCONCLUSION\nAlthough the retrospective reporting of these experiences cannot establish a causal association with certainty, exposure to parental alcohol abuse is highly associated with experiencing adverse childhood experiences. Improved coordination of adult and pediatric health care along with related social and substance abuse services may lead to earlier recognition, treatment, and prevention of both adult alcohol abuse and adverse childhood experiences, reducing the negative sequelae of ACEs in adolescents and adults.",
"title": ""
},
{
"docid": "7df3fe3ffffaac2fb6137fdc440eb9f4",
"text": "The amount of information in medical publications continues to increase at a tremendous rate. Systematic reviews help to process this growing body of information. They are fundamental tools for evidence-based medicine. In this paper, we show that automatic text classification can be useful in building systematic reviews for medical topics to speed up the reviewing process. We propose a per-question classification method that uses an ensemble of classifiers that exploit the particular protocol of a systematic review. We also show that when integrating the classifier in the human workflow of building a review the per-question method is superior to the global method. We test several evaluation measures on a real dataset.",
"title": ""
},
{
"docid": "ec3b78f594042c2ed9be2e7b987f8d3d",
"text": "In mammals, species with more frontally oriented orbits have broader binocular visual fields and relatively larger visual regions in the brain. Here, we test whether a similar pattern of correlated evolution is present in birds. Using both conventional statistics and modern comparative methods, we tested whether the relative size of the Wulst and optic tectum (TeO) were significantly correlated with orbit orientation, binocular visual field width and eye size in birds using a large, multi-species data set. In addition, we tested whether relative Wulst and TeO volumes were correlated with axial length of the eye. The relative size of the Wulst was significantly correlated with orbit orientation and the width of the binocular field such that species with more frontal orbits and broader binocular fields have relatively large Wulst volumes. Relative TeO volume, however, was not significant correlated with either variable. In addition, both relative Wulst and TeO volume were weakly correlated with relative axial length of the eye, but these were not corroborated by independent contrasts. Overall, our results indicate that relative Wulst volume reflects orbit orientation and possibly binocular visual field, but not eye size.",
"title": ""
},
{
"docid": "cf131167592f02790a1b4e38ed3b5375",
"text": "Monocular 3D facial shape reconstruction from a single 2D facial image has been an active research area due to its wide applications. Inspired by the success of deep neural networks (DNN), we propose a DNN-based approach for End-to-End 3D FAce Reconstruction (UH-E2FAR) from a single 2D image. Different from recent works that reconstruct and refine the 3D face in an iterative manner using both an RGB image and an initial 3D facial shape rendering, our DNN model is end-to-end, and thus the complicated 3D rendering process can be avoided. Moreover, we integrate in the DNN architecture two components, namely a multi-task loss function and a fusion convolutional neural network (CNN) to improve facial expression reconstruction. With the multi-task loss function, 3D face reconstruction is divided into neutral 3D facial shape reconstruction and expressive 3D facial shape reconstruction. The neutral 3D facial shape is class-specific. Therefore, higher layer features are useful. In comparison, the expressive 3D facial shape favors lower or intermediate layer features. With the fusion-CNN, features from different intermediate layers are fused and transformed for predicting the 3D expressive facial shape. Through extensive experiments, we demonstrate the superiority of our end-to-end framework in improving the accuracy of 3D face reconstruction.",
"title": ""
},
{
"docid": "5d63b20254e8732807a0c029cd86014f",
"text": "Various perceptual domains have underlying compositional semantics that are rarely captured in current models. We suspect this is because directly learning the compositional structure has evaded these models. Yet, the compositional structure of a given domain can be grounded in a separate domain thereby simplifying its learning. To that end, we propose a new approach to modeling bimodal percepts that explicitly relates distinct projections across each modality and then jointly learns a bimodal sparse representation. The resulting model enables compositionality across these distinct projections and hence can generalize to unobserved percepts spanned by this compositional basis. For example, our model can be trained on red triangles and blue squares; yet, implicitly will also have learned red squares and blue triangles. The structure of the projections and hence the compositional basis is learned automatically for a given language model. To test our model, we have acquired a new bimodal dataset comprising images and spoken utterances of colored shapes in a tabletop setup. Our experiments demonstrate the benefits of explicitly leveraging compositionality in both quantitative and human evaluation studies.",
"title": ""
},
{
"docid": "dec78cff9fa87a3b51fc32681ba39a08",
"text": "Alkaline saponification is often used to remove interfering chlorophylls and lipids during carotenoids analysis. However, saponification also hydrolyses esterified carotenoids and is known to induce artifacts. To avoid carotenoid artifact formation during saponification, Larsen and Christensen (2005) developed a gentler and simpler analytical clean-up procedure involving the use of a strong basic resin (Ambersep 900 OH). They hypothesised a saponification mechanism based on their Liquid Chromatography-Photodiode Array (LC-PDA) data. In the present study, we show with LC-PDA-accurate mass-Mass Spectrometry that the main chlorophyll removal mechanism is not based on saponification, apolar adsorption or anion exchange, but most probably an adsorption mechanism caused by H-bonds and dipole-dipole interactions. We showed experimentally that esterified carotenoids and glycerolipids were not removed, indicating a much more selective mechanism than initially hypothesised. This opens new research opportunities towards a much wider scope of applications (e.g. the refinement of oils rich in phytochemical content).",
"title": ""
},
{
"docid": "1297f85b22be207611dc7d944f6a378a",
"text": "Several factors make empirical research in software engineering particularly challenging as it requires studying not only technology but its stakeholders’ activities while drawing concepts and theories from social science. Researchers, in general, agree that selecting a research design in empirical software engineering research is challenging, because the implications of using individual research methods are not well recorded. The main objective of this article is to make researchers aware and support them in their research design, by providing a foundation of knowledge about empirical software engineering research decisions, in order to ensure that researchers make well-founded and informed decisions about their research designs. This article provides a decision-making structure containing a number of decision points, each one of them representing a specific aspect on empirical software engineering research. The article provides an introduction to each decision point and its constituents, as well as to the relationships between the different parts in the decision-making structure. The intention is the structure should act as a starting point for the research design before going into the details of the research design chosen. The article provides an in-depth discussion of decision points in relation to the research design when conducting empirical research.",
"title": ""
},
{
"docid": "e89acdeb493d156390851a2a57231baf",
"text": "Several approaches have recently been proposed for learning decentralized deep multiagent policies that coordinate via a differentiable communication channel. While these policies are effective for many tasks, interpretation of their induced communication strategies has remained a challenge. Here we propose to interpret agents’ messages by translating them. Unlike in typical machine translation problems, we have no parallel data to learn from. Instead we develop a translation model based on the insight that agent messages and natural language strings mean the same thing if they induce the same belief about the world in a listener. We present theoretical guarantees and empirical evidence that our approach preserves both the semantics and pragmatics of messages by ensuring that players communicating through a translation layer do not suffer a substantial loss in reward relative to players with a common language.1",
"title": ""
},
{
"docid": "7f6f39e46010238dca3da94f78a21add",
"text": "Labeling text data is quite time-consuming but essential for automatic text classification. Especially, manually creating multiple labels for each document may become impractical when a very large amount of data is needed for training multi-label text classifiers. To minimize the human-labeling efforts, we propose a novel multi-label active learning approach which can reduce the required labeled data without sacrificing the classification accuracy. Traditional active learning algorithms can only handle single-label problems, that is, each data is restricted to have one label. Our approach takes into account the multi-label information, and select the unlabeled data which can lead to the largest reduction of the expected model loss. Specifically, the model loss is approximated by the size of version space, and the reduction rate of the size of version space is optimized with Support Vector Machines (SVM). An effective label prediction method is designed to predict possible labels for each unlabeled data point, and the expected loss for multi-label data is approximated by summing up losses on all labels according to the most confident result of label prediction. Experiments on several real-world data sets (all are publicly available) demonstrate that our approach can obtain promising classification result with much fewer labeled data than state-of-the-art methods.",
"title": ""
},
{
"docid": "e2e99eca77da211cac64ab69931ed1f4",
"text": "Cross-site scripting (XSS) and SQL injection errors are two prominent examples of taint-based vulnerabilities that have been responsible for a large number of security breaches in recent years. This paper presents QED, a goal-directed model-checking system that automatically generates attacks exploiting taint-based vulnerabilities in large Java web applications. This is the first time where model checking has been used successfully on real-life Java programs to create attack sequences that consist of multiple HTTP requests. QED accepts any Java web application that is written to the standard servlet specification. The analyst specifies the vulnerability of interest in a specification that looks like a Java code fragment, along with a range of values for form parameters. QED then generates a goal-directed analysis from the specification to perform session-aware tests, optimizes to eliminate inputs that are not of interest, and feeds the remainder to a model checker. The checker will systematically explore the remaining state space and report example attacks if the vulnerability specification is matched. QED provides better results than traditional analyses because it does not generate any false positive warnings. It proves the existence of errors by providing an example attack and a program trace showing how the code is compromised. Past experience suggests this is important because it makes it easy for the application maintainer to recognize the errors and to make the necessary fixes. In addition, for a class of applications, QED can guarantee that it has found all the potential bugs in the program. We have run QED over 3 Java web applications totaling 130,000 lines of code. We found 10 SQL injections and 13 cross-site scripting errors.",
"title": ""
},
{
"docid": "7084e2455ea696eec4a0f93b3140d71b",
"text": "Reinforcement learning is a simple, and yet, comprehensive theory of learning that simultaneously models the adaptive behavior of artificial agents, such as robots and autonomous software programs, as well as attempts to explain the emergent behavior of biological systems. It also gives rise to computational ideas that provide a powerful tool to solve problems involving sequential prediction and decision making. Temporal difference learning is the most widely used method to solve reinforcement learning problems, with a rich history dating back more than three decades. For these and many other reasons, devel1 This article is currently not under review for the journal Foundations and Trends in ML, but will be submitted for formal peer review at some point in the future, once the draft reaches a stable “equilibrium” state. ar X iv :1 40 5. 67 57 v1 [ cs .L G ] 2 6 M ay 2 01 4 oping a complete theory of reinforcement learning, one that is both rigorous and useful has been an ongoing research investigation for several decades. In this paper, we set forth a new vision of reinforcement learning developed by us over the past few years, one that yields mathematically rigorous solutions to longstanding important questions that have remained unresolved: (i) how to design reliable, convergent, and robust reinforcement learning algorithms (ii) how to guarantee that reinforcement learning satisfies pre-specified “safely” guarantees, and remains in a stable region of the parameter space (iii) how to design “off-policy” temporal difference learning algorithms in a reliable and stable manner, and finally (iv) how to integrate the study of reinforcement learning into the rich theory of stochastic optimization. In this paper, we provide detailed answers to all these questions using the powerful framework of proximal operators. The most important idea that emerges is the use of primal dual spaces connected through the use of a Legendre transform. This allows temporal difference updates to occur in dual spaces, allowing a variety of important technical advantages. The Legendre transform, as we show, elegantly generalizes past algorithms for solving reinforcement learning problems, such as natural gradient methods, which we show relate closely to the previously unconnected framework of mirror descent methods. Equally importantly, proximal operator theory enables the systematic development of operator splitting methods that show how to safely and reliably decompose complex products of gradients that occur in recent variants of gradient-based temporal difference learning. This key technical innovation makes it possible to finally design “true” stochastic gradient methods for reinforcement learning. Finally, Legendre transforms enable a variety of other benefits, including modeling sparsity and domain geometry. Our work builds extensively on recent work on the convergence of saddle-point algorithms, and on the theory of monotone operators in Hilbert spaces, both in optimization and for variational inequalities. The latter framework, the subject of another ongoing investigation by our group, holds the promise of an even more elegant framework for reinforcement learning. Its explication is currently the topic of a further monograph that will appear in due course. Dedicated to Andrew Barto and Richard Sutton for inspiring a generation of researchers to the study of reinforcement learning. 
Algorithm 1 TD (1984): (1) δ_t = r_t + γ φ'_t^T θ_t − φ_t^T θ_t; (2) θ_{t+1} = θ_t + β_t δ_t. Algorithm 2 GTD2-MP (2014): (1) w_{t+1/2} = w_t + β_t (δ_t − φ_t^T w_t) φ_t, θ_{t+1/2} = prox_{α_t h}( θ_t + α_t (φ_t − γ φ'_t)(φ_t^T w_t) ); (2) δ_{t+1/2} = r_t + γ φ'_t^T θ_{t+1/2} − φ_t^T θ_{t+1/2}; (3) w_{t+1} = w_t + β_t (δ_{t+1/2} − φ_t^T w_{t+1/2}) φ_t, θ_{t+1} = prox_{α_t h}( θ_t + α_t (φ_t − γ φ'_t)(φ_t^T w_{t+1/2}) ).",
"title": ""
},
{
"docid": "1c11c14bcc1e83a3fba3ef5e4c52d69b",
"text": "Ontologies have become the de-facto modeling tool of choice, employed in many applications and prominently in the semantic web. Nevertheless, ontology construction remains a daunting task. Ontological bootstrapping, which aims at automatically generating concepts and their relations in a given domain, is a promising technique for ontology construction. Bootstrapping an ontology based on a set of predefined textual sources, such as web services, must address the problem of multiple, largely unrelated concepts. In this paper, we propose an ontology bootstrapping process for web services. We exploit the advantage that web services usually consist of both WSDL and free text descriptors. The WSDL descriptor is evaluated using two methods, namely Term Frequency/Inverse Document Frequency (TF/IDF) and web context generation. Our proposed ontology bootstrapping process integrates the results of both methods and applies a third method to validate the concepts using the service free text descriptor, thereby offering a more accurate definition of ontologies. We extensively validated our bootstrapping method using a large repository of real-world web services and verified the results against existing ontologies. The experimental results indicate high precision. Furthermore, the recall versus precision comparison of the results when each method is separately implemented presents the advantage of our integrated bootstrapping approach.",
"title": ""
}
] | scidocsrr |
43c305b4a8dc2035524c87b78485678c | Document dissimilarity within and across languages: A benchmarking study | [
{
"docid": "5bc9b4a952465bed83b5e84d6ab2bba8",
"text": "We present a new algorithm for duplicate document detection thatuses collection statistics. We compare our approach with thestate-of-the-art approach using multiple collections. Thesecollections include a 30 MB 18,577 web document collectiondeveloped by Excite@Home and three NIST collections. The first NISTcollection consists of 100 MB 18,232 LA-Times documents, which isroughly similar in the number of documents to theExcite&at;Home collection. The other two collections are both 2GB and are the 247,491-web document collection and the TREC disks 4and 5---528,023 document collection. We show that our approachcalled I-Match, scales in terms of the number of documents andworks well for documents of all sizes. We compared our solution tothe state of the art and found that in addition to improvedaccuracy of detection, our approach executed in roughly one-fifththe time.",
"title": ""
}
] | [
{
"docid": "41b7b8638fa1d3042873ca70f9c338f1",
"text": "The LC50 (78, 85 ppm) and LC90 (88, 135 ppm) of Anagalis arvensis and Calendula micrantha respectively against Biomphalaria alexandrina were higher than those of the non-target snails, Physa acuta, Planorbis planorbis, Helisoma duryi and Melanoides tuberculata. In contrast, the LC50 of Niclosamide (0.11 ppm) and Copper sulphate (CuSO4) (0.42 ppm) against B. alexandrina were lower than those of the non-target snails. The mortalities percentage among non-target snails ranged between 0.0 & 20% when sublethal concentrations of CuSO4 against B. alexandrina mixed with those of C. micrantha and between 0.0 & 40% when mixed with A. arvensis. Mortalities ranged between 0.0 & 50% when Niclosamide was mixed with each of A. arvensis and C. micrantha. A. arvensis induced 100% mortality on Oreochromis niloticus after 48 hrs exposure and after 24 hrs for Gambusia affinis. C. micrantha was non-toxic to the fish. The survival rate of O. niloticus and G. affinis after 48 hrs exposure to 0.11 ppm of Niclosamide were 83.3% & 100% respectively. These rates were 91.7% & 93.3% respectively when each of the two fish species was exposed to 0.42 ppm of CuSO4. Mixture of sub-lethal concentrations of A. arvensis against B. alexandrina and those of Niclosamide or CuSO4 at ratios 10:40 & 25:25 induced 66.6% mortalities on O. niloticus and 83.3% at 40:10. These mixtures caused 100% mortalities on G. affinis at all ratios. A. arvensis CuSO4 mixtures at 10:40 induced 83.3% & 40% mortalities on O. niloticus and G. affinis respectively and 100% mortalities on both fish species at ratios 25:25 & 40:10. A mixture of sub-lethal concentrations of C. micrantha against B. alexandrina and of Niclosamide or CuSO4 caused mortalities of O. niloticus between 0.0 & 33.3% and between 5% & 35% of G. affinis. The residue of Cu in O. niloticus were 4.69, 19.06 & 25.37 mg/1kgm fish after 24, 48 & 72 hrs exposure to LC0 of CuSO4 against B. alexandrina respectively.",
"title": ""
},
{
"docid": "673fea40e5cb12b54cc296b1a2c98ddb",
"text": "Matrix completion is a rank minimization problem to recover a low-rank data matrix from a small subset of its entries. Since the matrix rank is nonconvex and discrete, many existing approaches approximate the matrix rank as the nuclear norm. However, the truncated nuclear norm is known to be a better approximation to the matrix rank than the nuclear norm, exploiting a priori target rank information about the problem in rank minimization. In this paper, we propose a computationally efficient truncated nuclear norm minimization algorithm for matrix completion, which we call TNNM-ALM. We reformulate the original optimization problem by introducing slack variables and considering noise in the observation. The central contribution of this paper is to solve it efficiently via the augmented Lagrange multiplier (ALM) method, where the optimization variables are updated by closed-form solutions. We apply the proposed TNNM-ALM algorithm to ghost-free high dynamic range imaging by exploiting the low-rank structure of irradiance maps from low dynamic range images. Experimental results on both synthetic and real visual data show that the proposed algorithm achieves significantly lower reconstruction errors and superior robustness against noise than the conventional approaches, while providing substantial improvement in speed, thereby applicable to a wide range of imaging applications.",
"title": ""
},
{
"docid": "7aa9a5f9bde62b5aafb30cbd9c79f9e9",
"text": "Congestion in traffic is a serious issue. In existing system signal timings are fixed and they are independent of traffic density. Large red light delays leads to traffic congestion. In this paper, IoT based traffic control system is implemented in which signal timings are updated based on the vehicle counting. This system consists of WI-FI transceiver module it transmits the vehicle count of the current system to the next traffic signal. Based on traffic density of previous signal it controls the signals of the next signal. The system is based on raspberry-pi and Arduino. Image processing of traffic video is done in MATLAB with simulink support. The whole vehicle counting is performed by raspberry pi.",
"title": ""
},
{
"docid": "7d38b4b2d07c24fdfb2306116017cd5e",
"text": "Science Technology Engineering, Art, Mathematics (STEAM) is an integration of art into Science Technology Engineering, Mathematics (STEM). Connecting art to science makes learning more effective and innovative. This study aims to determine the increase in mastery of the concept of high school students after the application of STEAM education in learning with the theme of Water and Us. The research method used is one group Pretestposttest design with students of class VII (n = 37) junior high school. The instrument used in the form of question of mastery of concepts in the form of multiple choices amounted to 20 questions and observation sheet of learning implementation. The results of the study show that there is an increase in conceptualization on the theme of Water and Us which is categorized as medium (<g>=0, 46) after the application of the STEAM approach. The conclusion obtained that by applying STEAM approach in learning can improve the mastery of concept",
"title": ""
},
{
"docid": "d29eba4f796cb642d64e73b76767e59d",
"text": "In this paper, a novel segmentation and recognition approach to automatically extract street lighting poles from mobile LiDAR data is proposed. First, points on or around the ground are extracted and removed through a piecewise elevation histogram segmentation method. Then, a new graph-cut-based segmentation method is introduced to extract the street lighting poles from each cluster obtained through a Euclidean distance clustering algorithm. In addition to the spatial information, the street lighting pole's shape and the point's intensity information are also considered to formulate the energy function. Finally, a Gaussian-mixture-model-based method is introduced to recognize the street lighting poles from the candidate clusters. The proposed approach is tested on several point clouds collected by different mobile LiDAR systems. Experimental results show that the proposed method is robust to noises and achieves an overall performance of 90% in terms of true positive rate.",
"title": ""
},
{
"docid": "1c03c9e9fb2697cbff3ee3063593d33c",
"text": "Hand pose estimation from a monocular RGB image is an important but challenging task. A main factor affecting its performance is the lack of a sufficiently large training dataset with accurate hand-keypoint annotations. In this work, we circumvent this problem by proposing an effective method for generating realistic hand poses, and show that state-of-the-art algorithms for hand pose estimation can be greatly improved by utilizing the generated hand poses as training data. Specifically, we first adopt an augmented reality (AR) simulator to synthesize hand poses with accurate hand-keypoint labels. Although the synthetic hand poses come with precise joint labels, eliminating the need of manual annotations, they look unnatural and are not the ideal training data. To produce more realistic hand poses, we propose to blend a synthetic hand pose with a real background, such as arms and sleeves. To this end, we develop tonality-alignment generative adversarial networks (TAGANs), which align the tonality and color distributions between synthetic hand poses and real backgrounds, and can generate high quality hand poses. We evaluate TAGAN on three benchmarks, including the RHP, STB, and CMUPS hand pose datasets. With the aid of the synthesized poses, our method performs favorably against the state-ofthe-arts in both 2D and 3D hand pose estimations.",
"title": ""
},
{
"docid": "eec15a5d14082d625824452bd070ec38",
"text": "Food waste is a major environmental issue. Expired products are thrown away, implying that too much food is ordered compared to what is sold and that a more accurate prediction model is required within grocery stores. In this study the two prediction models Long Short-Term Memory (LSTM) and Autoregressive Integrated Moving Average (ARIMA) were compared on their prediction accuracy in two scenarios, given sales data for different products, to observe if LSTM is a model that can compete against the ARIMA model in the field of sales forecasting in retail. In the first scenario the models predict sales for one day ahead using given data, while they in the second scenario predict each day for a week ahead. Using the evaluation measures RMSE and MAE together with a t-test the results show that the difference between the LSTM and ARIMA model is not of statistical significance in the scenario of predicting one day ahead. However when predicting seven days ahead, the results show that there is a statistical significance in the difference indicating that the LSTM model has higher accuracy. This study therefore concludes that the LSTM model is promising in the field of sales forecasting in retail and able to compete against the ARIMA model.",
"title": ""
},
{
"docid": "d3b6fcc353382c947cfb0b4a73eda0ef",
"text": "Robust object tracking is a challenging task in computer vision. To better solve the partial occlusion issue, part-based methods are widely used in visual object trackers. However, due to the complicated online training and updating process, most of these part-based trackers cannot run in real-time. Correlation filters have been used in tracking tasks recently because of the high efficiency. However, the conventional correlation filter based trackers cannot deal with occlusion. Furthermore, most correlation filter based trackers fix the scale and rotation of the target which makes the trackers unreliable in long-term tracking tasks. In this paper, we propose a novel tracking method which track objects based on parts with multiple correlation filters. Our method can run in real-time. Additionally, the Bayesian inference framework and a structural constraint mask are adopted to enable our tracker to be robust to various appearance changes. Extensive experiments have been done to prove the effectiveness of our method.",
"title": ""
},
{
"docid": "274373d46b748d92e6913496507353b1",
"text": "This paper introduces a blind watermarking based on a convolutional neural network (CNN). We propose an iterative learning framework to secure robustness of watermarking. One loop of learning process consists of the following three stages: Watermark embedding, attack simulation, and weight update. We have learned a network that can detect a 1-bit message from a image sub-block. Experimental results show that this learned network is an extension of the frequency domain that is widely used in existing watermarking scheme. The proposed scheme achieved robustness against geometric and signal processing attacks with a learning time of one day.",
"title": ""
},
{
"docid": "bca27f6e44d64824a0be41d5f2beea4d",
"text": "In Infrastructure-as-a-Service (IaaS) clouds, intrusion detection systems (IDSes) increase their importance. To securely detect attacks against virtual machines (VMs), IDS offloading with VM introspection (VMI) has been proposed. In semi-trusted clouds, however, it is difficult to securely offload IDSes because there may exist insiders such as malicious system administrators. First, secure VM execution cannot coexist with IDS offloading although it has to be enabled to prevent information leakage to insiders. Second, offloaded IDSes can be easily disabled by insiders. To solve these problems, this paper proposes IDS remote offloading with remote VMI. Since IDSes can run at trusted remote hosts outside semi-trusted clouds, they cannot be disabled by insiders in clouds. Remote VMI enables IDSes at remote hosts to introspect VMs via the trusted hypervisor inside semi-trusted clouds. Secure VM execution can be bypassed by performing VMI in the hypervisor. Remote VMI preserves the integrity and confidentiality of introspected data between the hypervisor and remote hosts. The integrity of the hypervisor can be guaranteed by various existing techniques. We have developed RemoteTrans for remotely offloading legacy IDSes and confirmed that RemoteTrans could achieve surprisingly efficient execution of legacy IDSes at remote hosts.",
"title": ""
},
{
"docid": "b31f5af2510461479d653be1ddadaa22",
"text": "Integrating smart temperature sensors into digital platforms facilitates information to be processed and transmitted, and open up new applications. Furthermore, temperature sensors are crucial components in computing platforms to manage power-efficiency trade-offs reliably under a thermal budget. This paper presents a holistic perspective about smart temperature sensor design from system- to device-level including manufacturing concerns. Through smart sensor design evolutions, we identify some scaling paths and circuit techniques to surmount analog/mixed-signal design challenges in 32-nm and beyond. We close with opportunities to design smarter temperature sensors.",
"title": ""
},
{
"docid": "0b19bd9604fae55455799c39595c8016",
"text": "Our study concerns an important current problem, that of diffusion of information in social networks. This problem has received significant attention from the Internet research community in the recent times, driven by many potential applications such as viral marketing and sales promotions. In this paper, we focus on the target set selection problem, which involves discovering a small subset of influential players in a given social network, to perform a certain task of information diffusion. The target set selection problem manifests in two forms: 1) top-k nodes problem and 2) λ -coverage problem. In the top-k nodes problem, we are required to find a set of k key nodes that would maximize the number of nodes being influenced in the network. The λ-coverage problem is concerned with finding a set of key nodes having minimal size that can influence a given percentage λ of the nodes in the entire network. We propose a new way of solving these problems using the concept of Shapley value which is a well known solution concept in cooperative game theory. Our approach leads to algorithms which we call the ShaPley value-based Influential Nodes (SPINs) algorithms for solving the top-k nodes problem and the λ -coverage problem. We compare the performance of the proposed SPIN algorithms with well known algorithms in the literature. Through extensive experimentation on four synthetically generated random graphs and six real-world data sets (Celegans, Jazz, NIPS coauthorship data set, Netscience data set, High-Energy Physics data set, and Political Books data set), we show that the proposed SPIN approach is more powerful and computationally efficient.",
"title": ""
},
{
"docid": "78a2bf1c2edec7ec9eb48f8b07dc9e04",
"text": "The performance of the most commonly used metal antennas close to the human body is one of the limiting factors of the performance of bio-sensors and wireless body area networks (WBAN). Due to the high dielectric and conductivity contrast with respect to most parts of the human body (blood, skin, …), the range of most of the wireless sensors operating in RF and microwave frequencies is limited to 1–2 cm when attached to the body. In this paper, we introduce the very novel idea of liquid antennas, that is based on engineering the properties of liquids. This approach allows for the improvement of the range by a factor of 5–10 in a very easy-to-realize way, just modifying the salinity of the aqueous solution of the antenna. A similar methodology can be extended to the development of liquid RF electronics for implantable devices and wearable real-time bio-signal monitoring, since it can potentially lead to very flexible antenna and electronic configurations.",
"title": ""
},
{
"docid": "eec819447de1d6482f9ff4a04fb73782",
"text": "Estimating the travel time of any path (denoted by a sequence of connected road segments) in a city is of great importance to traffic monitoring, route planning, ridesharing, taxi/Uber dispatching, etc. However, it is a very challenging problem, affected by diverse complex factors, including spatial correlations, temporal dependencies, external conditions (e.g. weather, traffic lights). Prior work usually focuses on estimating the travel times of individual road segments or sub-paths and then summing up these times, which leads to an inaccurate estimation because such approaches do not consider road intersections/traffic lights, and local errors may accumulate. To address these issues, we propose an end-to-end Deep learning framework for Travel Time Estimation (called DeepTTE) that estimates the travel time of the whole path directly. More specifically, we present a geo-convolution operation by integrating the geographic information into the classical convolution, capable of capturing spatial correlations. By stacking recurrent unit on the geo-convoluton layer, our DeepTTE can capture the temporal dependencies as well. A multi-task learning component is given on the top of DeepTTE, that learns to estimate the travel time of both the entire path and each local path simultaneously during the training phase. Extensive experiments on two trajectory datasets show our DeepTTE significantly outperforms the state-of-the-art methods.",
"title": ""
},
{
"docid": "35830166ddf17086a61ab07ec41be6b0",
"text": "As the need for Human Computer Interaction (HCI) designers increases so does the need for courses that best prepare students for their future work life. Multidisciplinary teamwork is what very frequently meets the graduates in their new work situations. Preparing students for such multidisciplinary work through education is not easy to achieve. In this paper, we investigate ways to engage computer science students, majoring in design, use, and interaction (with technology), in design practices through an advanced graduate course in interaction design. Here, we take a closer look at how prior embodied and explicit knowledge of HCI that all of the students have, combined with understanding of design practice through the course, shape them as human-computer interaction designers. We evaluate the results of the effort in terms of increase in creativity, novelty of ideas, body language when engaged in design activities, and in terms of perceptions of how well this course prepared the students for the work practice outside of the university. Keywords—HCI education; interaction design; studio; design education; multidisciplinary teamwork.",
"title": ""
},
{
"docid": "230d3cdc0bd444bfe5c910f32bd1a109",
"text": "Programming is taught as foundation module at the beginning of undergraduate studies and/or during foundation year. Learning introductory programming languages such as Pascal, Basic / C (procedural) and C++ / Java (object oriented) requires learners to understand the underlying programming paradigm, syntax, logic and the structure. Learning to program is considered hard for novice learners and it is important to understand what makes learning program so difficult and how students learn.\n The prevailing focus on multimedia learning objects provides promising approach to create better knowledge transfer. This project aims to investigate: (a) students' perception in learning to program and the difficulties. (b) effectiveness of multimedia learning objects in learning introductory programming language in a face-to-face learning environment.",
"title": ""
},
{
"docid": "06518637c2b44779da3479854fdbb84d",
"text": "OBJECTIVE\nThe relative short-term efficacy and long-term benefits of pharmacologic versus psychotherapeutic interventions have not been studied for posttraumatic stress disorder (PTSD). This study compared the efficacy of a selective serotonin reup-take inhibitor (SSRI), fluoxetine, with a psychotherapeutic treatment, eye movement desensitization and reprocessing (EMDR), and pill placebo and measured maintenance of treatment gains at 6-month follow-up.\n\n\nMETHOD\nEighty-eight PTSD subjects diagnosed according to DSM-IV criteria were randomly assigned to EMDR, fluoxetine, or pill placebo. They received 8 weeks of treatment and were assessed by blind raters posttreatment and at 6-month follow-up. The primary outcome measure was the Clinician-Administered PTSD Scale, DSM-IV version, and the secondary outcome measure was the Beck Depression Inventory-II. The study ran from July 2000 through July 2003.\n\n\nRESULTS\nThe psychotherapy intervention was more successful than pharmacotherapy in achieving sustained reductions in PTSD and depression symptoms, but this benefit accrued primarily for adult-onset trauma survivors. At 6-month follow-up, 75.0% of adult-onset versus 33.3% of child-onset trauma subjects receiving EMDR achieved asymptomatic end-state functioning compared with none in the fluoxetine group. For most childhood-onset trauma patients, neither treatment produced complete symptom remission.\n\n\nCONCLUSIONS\nThis study supports the efficacy of brief EMDR treatment to produce substantial and sustained reduction of PTSD and depression in most victims of adult-onset trauma. It suggests a role for SSRIs as a reliable first-line intervention to achieve moderate symptom relief for adult victims of childhood-onset trauma. Future research should assess the impact of lengthier intervention, combination treatments, and treatment sequencing on the resolution of PTSD in adults with childhood-onset trauma.",
"title": ""
},
{
"docid": "0a2d2a018348f1740a086977cf19ceb4",
"text": "This paper describes the design of UART (universal asynchronous receiver transmitter) based on VHDL. As UART is consider as a low speed, low cost data exchange between computer and peripherals.[1].To overcome the problem of low speed data transmission , a 16 bit UART is proposed in this paper. It works on 9600bps baud rate. This will result in increased the speed of UART.Whole design is simulated with Xilinx ISE8.2i software and results are completely consistent with UART protocol. Keywords— Baud rate generator, HDL, ISE8.2i, Receiver, Serial communication, Transmitter, Xilinx.",
"title": ""
},
{
"docid": "b84f84961c655ea98920513bf3074241",
"text": "This study took place in Sakarya Anatolian High School, Profession High School and Vocational High School for Industry (SAPHPHVHfI) where a flexible and nonroutine organising style was tried to be realized. The management style was initiated on a study group at first, but then it helped the group to come out as natural team spontaneously. The main purpose of the study is to make an evaluation on five teams within the school where team (based) management has been experienced in accordance with Belbin (1981)’s team roles theory [9]. The study group of the research consists of 28 people. The data was obtained from observations, interviews and the answers given to the questions in Belbin Team Roles Self Perception Inventory (BTRSPI). Some of the findings of the study are; (1) There was no paralellism between the team and functional roles of the members of the mentioned five team, (2) The team roles were distributed equaly balanced but it was also found that most of the roles were played by the members who were less inclined to play it, (3) The there were very few members who played plant role within the teams and there were nearly no one who were inclined to play leader role.",
"title": ""
},
{
"docid": "19fbd4a685e7fc8c299447644f496d5f",
"text": "The creation of the e-learning web services are increasingly growing. Therefore, their discovery is a very important challenge. The choice of the e-learning web services depend, generally, on the pedagogic, the financial and the technological constraints. The Learning Quality ontology extends existing ontology such as OWL-S to provide a semantically rich description of these constraints. However, due to the diversity of web services customers, other parameters must be considered during the discovery process, such as their preferences. For this purpose, the user profile takes into account to increase the degree of relevance of discovery results. We also present a modeling scenario to illustrate how our ontology can be used.",
"title": ""
}
] | scidocsrr |
842674cf8a39f07f2abf32dd670a7ec9 | Anomalous lattice vibrations of single- and few-layer MoS2. | [
{
"docid": "9068ae05b4064a98977f6a19bae6ccf0",
"text": "We present Raman spectroscopy measurements on single- and few-layer graphene flakes. By using a scanning confocal approach, we collect spectral data with spatial resolution, which allows us to directly compare Raman images with scanning force micrographs. Single-layer graphene can be distinguished from double- and few-layer by the width of the D' line: the single peak for single-layer graphene splits into different peaks for the double-layer. These findings are explained using the double-resonant Raman model based on ab initio calculations of the electronic structure and of the phonon dispersion. We investigate the D line intensity and find no defects within the flake. A finite D line response originating from the edges can be attributed either to defects or to the breakdown of translational symmetry.",
"title": ""
}
] | [
{
"docid": "23f9be150ae62c583d34b53b509818a4",
"text": "Online social networks (OSNs) have experienced tremendous growth in recent years and become a de facto portal for hundreds of millions of Internet users. These OSNs offer attractive means for digital social interactions and information sharing, but also raise a number of security and privacy issues. While OSNs allow users to restrict access to shared data, they currently do not provide any mechanism to enforce privacy concerns over data associated with multiple users. To this end, we propose an approach to enable the protection of shared data associated with multiple users in OSNs. We formulate an access control model to capture the essence of multiparty authorization requirements, along with a multiparty policy specification scheme and a policy enforcement mechanism. Besides, we present a logical representation of our access control model that allows us to leverage the features of existing logic solvers to perform various analysis tasks on our model. We also discuss a proof-of-concept prototype of our approach as part of an application in Facebook and provide usability study and system evaluation of our method.",
"title": ""
},
{
"docid": "e409a2a23fb0dbeb0aa57c89a10d61b1",
"text": "Text is still the most prevalent Internet media type. Examples of this include popular social networking applications such as Twitter, Craigslist, Facebook, etc. Other web applications such as e-mail, blog, chat rooms, etc. are also mostly text based. A question we address in this paper that deals with text based Internet forensics is the following: given a short text document, can we identify if the author is a man or a woman? This question is motivated by recent events where people faked their gender on the Internet. Note that this is different from the authorship attribution problem. In this paper we investigate author gender identification for short length, multi-genre, content-free text, such as the ones found in many Internet applications. Fundamental questions we ask are: do men and women inherently use different classes of language styles? If this is true, what are good linguistic features that indicate gender? Based on research in human psychology, we propose 545 psycho-linguistic and gender-preferential cues along with stylometric features to build the feature space for this identification problem. Note that identifying the correct set of features that indicate gender is an open research problem. Three machine learning algorithms (support vector machine, Bayesian logistic regression and AdaBoost decision tree) are then designed for gender identification based on the proposed features. Extensive experiments on large text corpora (Reuters Corpus Volume 1 newsgroup data and Enron e-mail data) indicate an accuracy up to 85.1% in identifying the gender. Experiments also indicate that function words, word-based features and structural features are significant gender discriminators. a 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6fe77035a5101f60968a189d648e2feb",
"text": "In the past few years, Reddit -- a community-driven platform for submitting, commenting and rating links and text posts -- has grown exponentially, from a small community of users into one of the largest online communities on the Web. To the best of our knowledge, this work represents the most comprehensive longitudinal study of Reddit's evolution to date, studying both (i) how user submissions have evolved over time and (ii) how the community's allocation of attention and its perception of submissions have changed over 5 years based on an analysis of almost 60 million submissions. Our work reveals an ever-increasing diversification of topics accompanied by a simultaneous concentration towards a few selected domains both in terms of posted submissions as well as perception and attention. By and large, our investigations suggest that Reddit has transformed itself from a dedicated gateway to the Web to an increasingly self-referential community that focuses on and reinforces its own user-generated image- and textual content over external sources.",
"title": ""
},
{
"docid": "1c0e441afd88f00b690900c42b40841a",
"text": "Convergence problems occur abundantly in all branches of mathematics or in the mathematical treatment of the sciences. Sequence transformations are principal tools to overcome convergence problems of the kind. They accomplish this by converting a slowly converging or diverging input sequence {sn} ∞ n=0 into another sequence {s ′ n }∞ n=0 with hopefully better numerical properties. Padé approximants, which convert the partial sums of a power series to a doubly indexed sequence of rational functions, are the best known sequence transformations, but the emphasis of the review will be on alternative sequence transformations which for some problems provide better results than Padé approximants.",
"title": ""
},
{
"docid": "76e75c4549cbaf89796355b299bedfdc",
"text": "Event cameras offer many advantages over standard frame-based cameras, such as low latency, high temporal resolution, and a high dynamic range. They respond to pixellevel brightness changes and, therefore, provide a sparse output. However, in textured scenes with rapid motion, millions of events are generated per second. Therefore, stateof-the-art event-based algorithms either require massive parallel computation (e.g., a GPU) or depart from the event-based processing paradigm. Inspired by frame-based pre-processing techniques that reduce an image to a set of features, which are typically the input to higher-level algorithms, we propose a method to reduce an event stream to a corner event stream. Our goal is twofold: extract relevant tracking information (corners do not suffer from the aperture problem) and decrease the event rate for later processing stages. Our event-based corner detector is very efficient due to its design principle, which consists of working on the Surface of Active Events (a map with the timestamp of the latest event at each pixel) using only comparison operations. Our method asynchronously processes event by event with very low latency. Our implementation is capable of processing millions of events per second on a single core (less than a micro-second per event) and reduces the event rate by a factor of 10 to 20.",
"title": ""
},
{
"docid": "095c796491edf050dc372799ae82b3d3",
"text": "Networks evolve continuously over time with the addition, deletion, and changing of links and nodes. Although many networks contain this type of temporal information, the majority of research in network representation learning has focused on static snapshots of the graph and has largely ignored the temporal dynamics of the network. In this work, we describe a general framework for incorporating temporal information into network embedding methods. The framework gives rise to methods for learning time-respecting embeddings from continuous-time dynamic networks. Overall, the experiments demonstrate the effectiveness of the proposed framework and dynamic network embedding approach as it achieves an average gain of 11.9% across all methods and graphs. The results indicate that modeling temporal dependencies in graphs is important for learning appropriate and meaningful network representations.",
"title": ""
},
{
"docid": "b139bad3a500fad18c203316fb6fbb55",
"text": "The current environment of web applications demands performance and scalability. Several previous approaches have implemented threading, events, or both, but increasing traffic requires new solutions for improved concurrent service. Node.js is a new web framework that achieves both through server-side JavaScript and eventdriven I/O. Tests will be performed against two comparable frameworks that compare service request times over a number of cores. The results will demonstrate the performance of JavaScript as a server-side language and the efficiency of the non-blocking asynchronous model.",
"title": ""
},
{
"docid": "a027c9dd3b4522cdf09a2238bfa4c37e",
"text": "Distributed word representations, or word vectors, have recently been applied to many tasks in natural language processing, leading to state-of-the-art performance. A key ingredient to the successful application of these representations is to train them on very large corpora, and use these pre-trained models in downstream tasks. In this paper, we describe how we trained such high quality word representations for 157 languages. We used two sources of data to train these models: the free online encyclopedia Wikipedia and data from the common crawl project. We also introduce three new word analogy datasets to evaluate these word vectors, for French, Hindi and Polish. Finally, we evaluate our pre-trained word vectors on 10 languages for which evaluation datasets exists, showing very strong performance compared to previous models.",
"title": ""
},
{
"docid": "c78a4446be38b8fff2a949cba30a8b65",
"text": "This paper will derive the Black-Scholes pricing model of a European option by calculating the expected value of the option. We will assume that the stock price is log-normally distributed and that the universe is riskneutral. Then, using Ito’s Lemma, we will justify the use of the risk-neutral rate in these initial calculations. Finally, we will prove put-call parity in order to price European put options, and extend the concepts of the Black-Scholes formula to value an option with pricing barriers.",
"title": ""
},
{
"docid": "d76e46eec2aa0abcbbd47b8270673efa",
"text": "OBJECTIVE\nTo explore the clinical efficacy and the mechanism of acupoint autohemotherapy in the treatment of allergic rhinitis.\n\n\nMETHODS\nForty-five cases were randomized into an autohemotherapy group (24 cases) and a western medication group (21 cases). In the autohemotherapy group, the acupoint autohemotherapy was applied to the bilateral Dingchuan (EX-B 1), Fengmen (BL 12), Feishu (BL 13), Quchi (LI 11), Zusanli (ST 36) and the others. In the western medication group, loratadine tablets were prescribed. The patients were treated continuously for 3 months in both groups. The clinical symptom score was taken for the assessment of clinical efficacy. The enzyme-linked immunoadsordent assay (ELISA) was adopted to determine the contents of serum interferon-gamma (IFN-gamma) and interleukin-12 (IL-12).\n\n\nRESULTS\nThe total effective rate was 83.3% (20/24) in the autohemotherapy group, which was obviously superior to 66.7% (14/21) in the western medication group (P < 0.05). After treatment, the clinical symptom scores of patients in the two groups were all reduced. The improvements in the scores of sneezing and clear nasal discharge in the autohemotherapy group were much more significant than those in the western medication group (both P < 0.05). After treatment, the serum IL-12 content of patients in the two groups was all increased to different extents as compared with that before treatment (both P < 0.05). In the autohemotherapy group, the serum IFN-gamma was increased after treatment (P < 0.05). In the western medication group, the serum IFN-gamma was not increased obviously after treatment (P > 0.05). The increase of the above index contents in the autohemotherapy group were more apparent than those in the western medication group (both P < 0.05).\n\n\nCONCLUSION\nThe acupoint autohemotherapy relieves significantly the clinical symptoms of allergic rhinitis and the therapeutic effect is better than that with oral administration of loratadine tablets, which is probably relevant with the increase of serum IL-12 content and the promotion of IFN-gamma synthesis.",
"title": ""
},
{
"docid": "0739c95aca9678b3c001c4d2eb92ec57",
"text": "The Image segmentation is referred to as one of the most important processes of image processing. Image segmentation is the technique of dividing or partitioning an image into parts, called segments. It is mostly useful for applications like image compression or object recognition, because for these types of applications, it is inefficient to process the whole image. So, image segmentation is used to segment the parts from image for further processing. There exist several image segmentation techniques, which partition the image into several parts based on certain image features like pixel intensity value, color, texture, etc. These all techniques are categorized based on the segmentation method used. In this paper the various image segmentation techniques are reviewed, discussed and finally a comparison of their advantages and disadvantages is listed.",
"title": ""
},
{
"docid": "92e50fc2351b4a05d573590f3ed05e81",
"text": "OBJECTIVE\nWe examined the effects of sensory-enhanced hatha yoga on symptoms of combat stress in deployed military personnel, compared their anxiety and sensory processing with that of stateside civilians, and identified any correlations between the State-Trait Anxiety Inventory scales and the Adolescent/Adult Sensory Profile quadrants.\n\n\nMETHOD\nSeventy military personnel who were deployed to Iraq participated in a randomized controlled trial. Thirty-five received 3 wk (≥9 sessions) of sensory-enhanced hatha yoga, and 35 did not receive any form of yoga.\n\n\nRESULTS\nSensory-enhanced hatha yoga was effective in reducing state and trait anxiety, despite normal pretest scores. Treatment participants showed significantly greater improvement than control participants on 16 of 18 mental health and quality-of-life factors. We found positive correlations between all test measures except sensory seeking. Sensory seeking was negatively correlated with all measures except low registration, which was insignificant.\n\n\nCONCLUSION\nThe results support using sensory-enhanced hatha yoga for proactive combat stress management.",
"title": ""
},
{
"docid": "0c842ef34f1924e899e408309f306640",
"text": "A single-tube 5' nuclease multiplex PCR assay was developed on the ABI 7700 Sequence Detection System (TaqMan) for the detection of Neisseria meningitidis, Haemophilus influenzae, and Streptococcus pneumoniae from clinical samples of cerebrospinal fluid (CSF), plasma, serum, and whole blood. Capsular transport (ctrA), capsulation (bexA), and pneumolysin (ply) gene targets specific for N. meningitidis, H. influenzae, and S. pneumoniae, respectively, were selected. Using sequence-specific fluorescent-dye-labeled probes and continuous real-time monitoring, accumulation of amplified product was measured. Sensitivity was assessed using clinical samples (CSF, serum, plasma, and whole blood) from culture-confirmed cases for the three organisms. The respective sensitivities (as percentages) for N. meningitidis, H. influenzae, and S. pneumoniae were 88.4, 100, and 91.8. The primer sets were 100% specific for the selected culture isolates. The ctrA primers amplified meningococcal serogroups A, B, C, 29E, W135, X, Y, and Z; the ply primers amplified pneumococcal serotypes 1, 2, 3, 4, 5, 6, 7, 8, 9, 10A, 11A, 12, 14, 15B, 17F, 18C, 19, 20, 22, 23, 24, 31, and 33; and the bexA primers amplified H. influenzae types b and c. Coamplification of two target genes without a loss of sensitivity was demonstrated. The multiplex assay was then used to test a large number (n = 4,113) of culture-negative samples for the three pathogens. Cases of meningococcal, H. influenzae, and pneumococcal disease that had not previously been confirmed by culture were identified with this assay. The ctrA primer set used in the multiplex PCR was found to be more sensitive (P < 0.0001) than the ctrA primers that had been used for meningococcal PCR testing at that time.",
"title": ""
},
{
"docid": "b8702cb8d18ae53664f3dfff95152764",
"text": "Word Sense Disambiguation is a longstanding task in Natural Language Processing, lying at the core of human language understanding. However, the evaluation of automatic systems has been problematic, mainly due to the lack of a reliable evaluation framework. In this paper we develop a unified evaluation framework and analyze the performance of various Word Sense Disambiguation systems in a fair setup. The results show that supervised systems clearly outperform knowledge-based models. Among the supervised systems, a linear classifier trained on conventional local features still proves to be a hard baseline to beat. Nonetheless, recent approaches exploiting neural networks on unlabeled corpora achieve promising results, surpassing this hard baseline in most test sets.",
"title": ""
},
{
"docid": "ae956d5e1182986505ff8b4de8b23777",
"text": "Device classification is important for many applications such as industrial quality controls, through-wall imaging, and network security. A novel approach to detection is proposed using a random noise radar (RNR), coupled with Radio Frequency “Distinct Native Attribute (RF-DNA)” fingerprinting processing algorithms to non-destructively interrogate microwave devices. RF-DNA has previously demonstrated “serial number” discrimination of passive Radio Frequency (RF) emissions such as Orthogonal Frequency Division Multiplexed (OFDM) signals, Worldwide Interoperability for Microwave Access (WiMAX) signals and others with classification accuracies above 80% using a Multiple Discriminant Analysis/Maximum Likelihood (MDAML) classifier. This approach proposes to couple the classification successes of the RF-DNA fingerprint processing with a non-destructive active interrogation waveform. An Ultra Wideband (UWB) noise waveform is uniquely suitable as an active interrogation method since it will not cause damage to sensitive microwave components and multiple RNRs can operate simultaneously in close proximity, allowing for significant parallelization of detection systems.",
"title": ""
},
{
"docid": "6b1c17b9c4462aebbe7f908f4c88381b",
"text": "This study examined neural activity associated with establishing causal relationships across sentences during on-line comprehension. ERPs were measured while participants read and judged the relatedness of three-sentence scenarios in which the final sentence was highly causally related, intermediately related, and causally unrelated to its context. Lexico-semantic co-occurrence was matched across the three conditions using a Latent Semantic Analysis. Critical words in causally unrelated scenarios evoked a larger N400 than words in both highly causally related and intermediately related scenarios, regardless of whether they appeared before or at the sentence-final position. At midline sites, the N400 to intermediately related sentence-final words was attenuated to the same degree as to highly causally related words, but otherwise the N400 to intermediately related words fell in between that evoked by highly causally related and intermediately related words. No modulation of the late positivity/P600 component was observed across conditions. These results indicate that both simple and complex causal inferences can influence the earliest stages of semantically processing an incoming word. Further, they suggest that causal coherence, at the situation level, can influence incremental word-by-word discourse comprehension, even when semantic relationships between individual words are matched.",
"title": ""
},
{
"docid": "7b13637b634b11b3061f7ebe0c64b3a6",
"text": "Analytical calculation methods for all the major components of the synchronous inductance of tooth-coil permanent-magnet synchronous machines are reevaluated in this paper. The inductance estimation is different in the tooth-coil machine compared with the one in the traditional rotating field winding machine. The accuracy of the analytical torque calculation highly depends on the estimated synchronous inductance. Despite powerful finite element method (FEM) tools, an accurate and fast analytical method is required at an early design stage to find an initial machine design structure with the desired performance. The results of the analytical inductance calculation are verified and assessed in terms of accuracy with the FEM simulation results and with the prototype measurement results.",
"title": ""
},
{
"docid": "84ba070a14da00c37de479e62e78f126",
"text": "The EEG (Electroencephalogram) signal indicates the electrical activity of the brain. They are highly random in nature and may contain useful information about the brain state. However, it is very difficult to get useful information from these signals directly in the time domain just by observing them. They are basically non-linear and nonstationary in nature. Hence, important features can be extracted for the diagnosis of different diseases using advanced signal processing techniques. In this paper the effect of different events on the EEG signal, and different signal processing methods used to extract the hidden information from the signal are discussed in detail. Linear, Frequency domain, time - frequency and non-linear techniques like correlation dimension (CD), largest Lyapunov exponent (LLE), Hurst exponent (H), different entropies, fractal dimension(FD), Higher Order Spectra (HOS), phase space plots and recurrence plots are discussed in detail using a typical normal EEG signal.",
"title": ""
},
{
"docid": "b7da2182bbdf69c46ffba20b272fab02",
"text": "Social Media is playing a key role in today's society. Many of the events that are taking place in diverse human activities could be explained by the study of these data. Big Data is a relatively new parading in Computer Science that is gaining increasing interest by the scientific community. Big Data Predictive Analytics is a Big Data discipline that is mostly used to analyze what is in the huge amounts of data and then perform predictions based on such analysis using advanced mathematics and computing techniques. The study of Social Media Data involves disciplines like Natural Language Processing, by the integration of this area to academic studies, useful findings have been achieved. Social Network Rating Systems are online platforms that allow users to know about goods and services, the way in how users review and rate their experience is a field of evolving research. This paper presents a deep investigation in the state of the art of these areas to discover and analyze the current status of the research that has been developed so far by academics of diverse background.",
"title": ""
},
{
"docid": "b467763514576e3f37755fe0e18394c7",
"text": "T study of lactic acid (HLa) and muscular contraction has a long history, beginning perhaps as early as 1807 when Berzelius found HLa in muscular fluid and thought that ‘‘the amount of free lactic acid in a muscle [was] proportional to the extent to which the muscle had previously been exercised’’ (cited in ref. 1). Several subsequent studies in the 19th century established the view that HLa was a byproduct of metabolism under conditions of O2 limitation. For example, in 1891, Araki (cited in ref. 2) reported elevated HLa levels in the blood and urine of a variety of animals subjected to hypoxia. In the early part of the 20th century, Fletcher and Hopkins (3) found an accumulation of HLa in anoxia as well as after prolonged stimulation to fatigue in amphibian muscle in vitro. Subsequently, based on the work of Fletcher and Hopkins (3) as well as his own studies, Hill (and colleagues; ref. 4) postulated that HLa increased during muscular exercise because of a lack of O2 for the energy requirements of the contracting muscles. These studies laid the groundwork for the anaerobic threshold concept, which was introduced and detailed by Wasserman and colleagues in the 1960s and early 1970s (5–7). The basic anaerobic threshold paradigm is that elevated HLa production and concentration during muscular contractions or exercise are the result of cellular hypoxia. Table 1 summarizes the essential components of the anaerobic threshold concept. However, several studies during the past '30 years have presented evidence questioning the idea that O2 limitation is a prerequisite for HLa production and accumulation in muscle and blood. Jöbsis and Stainsby (8) stimulated the canine gastrocnemius in situ at a rate known to elicit peak twitch oxygen uptake (V̇O2) and high net HLa output. They (8) reasoned that if the HLA output was caused by O2-limited oxidative phosphorylation, then there should be an accompanying reduction of members of the respiratory chain, including the NADHyNAD1 pair. Instead, muscle surface fluorometry indicated NADHyNAD1 oxidation in comparison to the resting condition. Later, Connett and colleagues (9–11), by using myoglobin cryomicrospectroscopy in small volumes of dog gracilis muscle, were unable to find loci with a PO2 less than the critical PO2 for maximal mitochondrial ox idative phosphorylation (0.1– 0.5 mmHg) during muscle contractions resulting in HLa output and an increase in muscle HLa concentration. More recently, Richardson and colleagues (12) used proton magnetic resonance spectroscopy to determine myoglobin saturation (and thereby an estimate of intramuscular PO2) during progressive exercise in humans. They found that HLa efflux was unrelated to muscle cytoplasmic PO2 during normoxia. Although there are legitimate criticisms of these studies, they and many others of a related nature have led to alternative explanations for HLa production that do not involve O2 limitation. In the present issue of PNAS, two papers (13, 14) illustrate the dichotomous relationship between lactic acid and oxygen. First, Kemper and colleagues (13) add further evidence against O2 as the key regulator of HLa production. They (13) used a unique model, the rattlesnake tailshaker muscle complex, to study intracellular glycolysis during ischemia in comparison to HLa efflux during free flow conditions; in both protocols, the muscle complex was active and producing rattling. 
In their first experiment, rattling was induced for 29 s during ischemia resulting from blood pressure cuff inflation between the cloaca and tailshaker muscle complex. In a second experiment, measures were taken during 108 s of rattling with normal, spontaneous blood flow. In both experiments, 31P magnetic resonance spectroscopy permitted measurement of changes in muscle levels of PCr, ATP, Pi, and pH before, during, and after rattling. Based on previous methods established in their laboratory, Kemper and colleagues (13) estimated glycolytic flux during the ischemic and aerobic rattling protocols. The result was that total glycolytic flux was the same under both conditions! Kemper and colleagues (13) conclude that HLa generation does not necessarily reflect O2 limitation. To be fair, there are potential limitations to the excellent paper by Kemper and colleagues (13). First, and most importantly, they studied muscle metabolism in the transition from rest to rattling (29 s during ischemia and 108 s during free flow). Some investigators argue that oxidative phosphorylation is limited by O2 delivery to the exercising muscles during this nonsteady-state transition even with spontaneous blood flow (for review, see ref. 15). This remains a matter of debate, and the role of O2 in the transition from rest to contractions may depend on the intensity of contractions (16, 17). Of course, it is possible that the role of O2 in the transition to rattling may be tempered by the high volume density of mitochondria and the high blood supply to this unique muscle complex (13, 18). Second, there could be significant early lactate production within the first seconds of the transition (19). Third, it would have been helpful to have measurements of intramuscular lactate and glycogen concentra-",
"title": ""
}
] | scidocsrr |
d2e63cfca2fea6b2e02ea3e37e6d077a | BLACKLISTED SPEAKER IDENTIFICATION USING TRIPLET NEURAL NETWORKS | [
{
"docid": "c9ecb6ac5417b5fea04e5371e4250361",
"text": "Deep learning has proven itself as a successful set of models for learning useful semantic representations of data. These, however, are mostly implicitly learned as part of a classification task. In this paper we propose the triplet network model, which aims to learn useful representations by distance comparisons. A similar model was defined by Wang et al. (2014), tailor made for learning a ranking for image information retrieval. Here we demonstrate using various datasets that our model learns a better representation than that of its immediate competitor, the Siamese network. We also discuss future possible usage as a framework for unsupervised learning.",
"title": ""
}
] | [
{
"docid": "3d5fb6eff6d0d63c17ef69c8130d7a77",
"text": "A new measure of event-related brain dynamics, the event-related spectral perturbation (ERSP), is introduced to study event-related dynamics of the EEG spectrum induced by, but not phase-locked to, the onset of the auditory stimuli. The ERSP reveals aspects of event-related brain dynamics not contained in the ERP average of the same response epochs. Twenty-eight subjects participated in daily auditory evoked response experiments during a 4 day study of the effects of 24 h free-field exposure to intermittent trains of 89 dB low frequency tones. During evoked response testing, the same tones were presented through headphones in random order at 5 sec intervals. No significant changes in behavioral thresholds occurred during or after free-field exposure. ERSPs induced by target pips presented in some inter-tone intervals were larger than, but shared common features with, ERSPs induced by the tones, most prominently a ridge of augmented EEG amplitude from 11 to 18 Hz, peaking 1-1.5 sec after stimulus onset. Following 3-11 h of free-field exposure, this feature was significantly smaller in tone-induced ERSPs; target-induced ERSPs were not similarly affected. These results, therefore, document systematic effects of exposure to intermittent tones on EEG brain dynamics even in the absence of changes in auditory thresholds.",
"title": ""
},
{
"docid": "bea412d20a95c853fe06e7640acb9158",
"text": "We propose a novel approach to synthesizing images that are effective for training object detectors. Starting from a small set of real images, our algorithm estimates the rendering parameters required to synthesize similar images given a coarse 3D model of the target object. These parameters can then be reused to generate an unlimited number of training images of the object of interest in arbitrary 3D poses, which can then be used to increase classification performances. A key insight of our approach is that the synthetically generated images should be similar to real images, not in terms of image quality, but rather in terms of features used during the detector training. We show in the context of drone, plane, and car detection that using such synthetically generated images yields significantly better performances than simply perturbing real images or even synthesizing images in such way that they look very realistic, as is often done when only limited amounts of training data are available. 2015 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "779d5380c72827043111d00510e32bfd",
"text": "OBJECTIVE\nThe purpose of this review is 2-fold. The first is to provide a review for physiatrists already providing care for women with musculoskeletal pelvic floor pain and a resource for physiatrists who are interested in expanding their practice to include this patient population. The second is to describe how musculoskeletal dysfunctions involving the pelvic floor can be approached by the physiatrist using the same principles used to evaluate and treat others dysfunctions in the musculoskeletal system. This discussion clarifies that evaluation and treatment of pelvic floor pain of musculoskeletal origin is within the scope of practice for physiatrists. The authors review the anatomy of the pelvic floor, including the bony pelvis and joints, muscle and fascia, and the peripheral and autonomic nervous systems. Pertinent history and physical examination findings are described. The review concludes with a discussion of differential diagnosis and treatment of musculoskeletal pelvic floor pain in women. Improved recognition of pelvic floor dysfunction by healthcare providers will reduce impairment and disability for women with pelvic floor pain. A physiatrist is in the unique position to treat the musculoskeletal causes of this condition because it requires an expert grasp of anatomy, function, and the linked relationship between the spine and pelvis. Further research regarding musculoskeletal causes and treatment of pelvic floor pain will help validate these concepts and improve awareness and care for women limited by this condition.",
"title": ""
},
{
"docid": "337b03633afacc96b443880ad996f013",
"text": "Mobile security becomes a hot topic recently, especially in mobile payment and privacy data fields. Traditional solution can't keep a good balance between convenience and security. Against this background, a dual OS security solution named Trusted Execution Environment (TEE) is proposed and implemented by many institutions and companies. However, it raised TEE fragmentation and control problem. Addressing this issue, a mobile security infrastructure named Trusted Execution Environment Integration (TEEI) is presented to integrate multiple different TEEs. By using Trusted Virtual Machine (TVM) tech-nology, TEEI allows multiple TEEs running on one secure world on one mobile device at the same time and isolates them safely. Furthermore, a Virtual Network protocol is proposed to enable communication and cooperation among TEEs which includes TEE on TVM and TEE on SE. At last, a SOA-like Internal Trusted Service (ITS) framework is given to facilitate the development and maintenance of TEEs.",
"title": ""
},
{
"docid": "452f71b953ddffad88cec65a4c7fbece",
"text": "The password based authorization scheme for all available security systems can effortlessly be hacked by the hacker or a malicious user. One might not be able to guarantee that the person who is using the password is authentic or not. Only biometric systems are one which make offered automated authentication. There are very exceptional chances of losing the biometric identity, only if the accident of an individual may persists. Footprint based biometric system has been evaluated so far. In this paper a number of approaches of footprint recognition have been deliberated. General Terms Biometric pattern recognition, Image processing.",
"title": ""
},
{
"docid": "8183fe0c103e2ddcab5b35549ed8629f",
"text": "The performance of Douglas-Rachford splitting and the alternating direction method of multipliers (ADMM) (i.e. Douglas-Rachford splitting on the dual problem) are sensitive to conditioning of the problem data. For a restricted class of problems that enjoy a linear rate of convergence, we show in this paper how to precondition the optimization data to optimize a bound on that rate. We also generalize the preconditioning methods to problems that do not satisfy all assumptions needed to guarantee a linear convergence. The efficiency of the proposed preconditioning is confirmed in a numerical example, where improvements of more than one order of magnitude are observed compared to when no preconditioning is used.",
"title": ""
},
{
"docid": "f4aa06f7782a22eeb5f30d0ad27eaff9",
"text": "Friction effects are particularly critical for industrial robots, since they can induce large positioning errors, stick-slip motions, and limit cycles. This paper offers a reasoned overview of the main friction compensation techniques that have been developed in the last years, regrouping them according to the adopted kind of control strategy. Some experimental results are reported, to show how the control performances can be affected not only by the chosen method, but also by the characteristics of the available robotic architecture and of the executed task.",
"title": ""
},
{
"docid": "6f9ae554513bba3c685f86909e31645f",
"text": "Triboelectric energy harvesting has been applied to various fields, from large-scale power generation to small electronics. Triboelectric energy is generated when certain materials come into frictional contact, e.g., static electricity from rubbing a shoe on a carpet. In particular, textile-based triboelectric energy-harvesting technologies are one of the most promising approaches because they are not only flexible, light, and comfortable but also wearable. Most previous textile-based triboelectric generators (TEGs) generate energy by vertically pressing and rubbing something. However, we propose a corrugated textile-based triboelectric generator (CT-TEG) that can generate energy by stretching. Moreover, the CT-TEG is sewn into a corrugated structure that contains an effective air gap without additional spacers. The resulting CT-TEG can generate considerable energy from various deformations, not only by pressing and rubbing but also by stretching. The maximum output performances of the CT-TEG can reach up to 28.13 V and 2.71 μA with stretching and releasing motions. Additionally, we demonstrate the generation of sufficient energy from various activities of a human body to power about 54 LEDs. These results demonstrate the potential application of CT-TEGs for self-powered systems.",
"title": ""
},
{
"docid": "719c945e9f45371f8422648e0e81178f",
"text": "As technology in the cloud increases, there has been a lot of improvements in the maturity and firmness of cloud storage technologies. Many end-users and IT managers are getting very excited about the potential benefits of cloud storage, such as being able to store and retrieve data in the cloud and capitalizing on the promise of higher-performance, more scalable and cut-price storage. In this thesis, we present a typical Cloud Storage system architecture, a referral Cloud Storage model and Multi-Tenancy Cloud Storage model, value the past and the state-ofthe-art of Cloud Storage, and examine the Edge and problems that must be addressed to implement Cloud Storage. Use cases in diverse Cloud Storage offerings were also abridged. KEYWORDS—Cloud Storage, Cloud Computing, referral model, Multi-Tenancy, survey",
"title": ""
},
{
"docid": "5956e9399cfe817aa1ddec5553883bef",
"text": "Most existing zero-shot learning methods consider the problem as a visual semantic embedding one. Given the demonstrated capability of Generative Adversarial Networks(GANs) to generate images, we instead leverage GANs to imagine unseen categories from text descriptions and hence recognize novel classes with no examples being seen. Specifically, we propose a simple yet effective generative model that takes as input noisy text descriptions about an unseen class (e.g. Wikipedia articles) and generates synthesized visual features for this class. With added pseudo data, zero-shot learning is naturally converted to a traditional classification problem. Additionally, to preserve the inter-class discrimination of the generated features, a visual pivot regularization is proposed as an explicit supervision. Unlike previous methods using complex engineered regularizers, our approach can suppress the noise well without additional regularization. Empirically, we show that our method consistently outperforms the state of the art on the largest available benchmarks on Text-based Zero-shot Learning.",
"title": ""
},
{
"docid": "b72d0d187fe12d1f006c8e17834af60e",
"text": "Pseudoangiomatous stromal hyperplasia (PASH) is a rare benign mesenchymal proliferative lesion of the breast. In this study, we aimed to show a case of PASH with mammographic and sonographic features, which fulfill the criteria for benign lesions and to define its recently discovered elastography findings. A 49-year-old premenopausal female presented with breast pain in our outpatient surgery clinic. In ultrasound images, a hypoechoic solid mass located at the 3 o'clock position in the periareolar region of the right breast was observed. Due to it was not detected on earlier mammographies, the patient underwent a tru-cut biopsy, although the mass fulfilled the criteria for benign lesions on mammography, ultrasound, and elastography. Elastography is a new technique differentiating between benign and malignant lesions. It is also useful to determine whether a biopsy is necessary or follow-up is sufficient.",
"title": ""
},
{
"docid": "c851bad8a1f7c8526d144453b3f2aa4f",
"text": "Taxonomies of person characteristics are well developed, whereas taxonomies of psychologically important situation characteristics are underdeveloped. A working model of situation perception implies the existence of taxonomizable dimensions of psychologically meaningful, important, and consequential situation characteristics tied to situation cues, goal affordances, and behavior. Such dimensions are developed and demonstrated in a multi-method set of 6 studies. First, the \"Situational Eight DIAMONDS\" dimensions Duty, Intellect, Adversity, Mating, pOsitivity, Negativity, Deception, and Sociality (Study 1) are established from the Riverside Situational Q-Sort (Sherman, Nave, & Funder, 2010, 2012, 2013; Wagerman & Funder, 2009). Second, their rater agreement (Study 2) and associations with situation cues and goal/trait affordances (Studies 3 and 4) are examined. Finally, the usefulness of these dimensions is demonstrated by examining their predictive power of behavior (Study 5), particularly vis-à-vis measures of personality and situations (Study 6). Together, we provide extensive and compelling evidence that the DIAMONDS taxonomy is useful for organizing major dimensions of situation characteristics. We discuss the DIAMONDS taxonomy in the context of previous taxonomic approaches and sketch future research directions.",
"title": ""
},
{
"docid": "aefa4559fa6f8e0c046cd7e02d3e1b92",
"text": "The concept of smart city is considered as the new engine for economic and social growths since it is supported by the rapid development of information and communication technologies. However, each technology not only brings its advantages, but also the challenges that cities have to face in order to implement it. So, this paper addresses two research questions : « What are the most important technologies that drive the development of smart cities ?» and « what are the challenges that cities will face when adopting these technologies ? » Relying on a literature review of studies published between 1990 and 2017, the ensuing results show that Artificial Intelligence and Internet of Things represent the most used technologies for smart cities. So, the focus of this paper will be on these two technologies by showing their advantages and their challenges.",
"title": ""
},
{
"docid": "123a21d9913767e1a8d1d043f6feab01",
"text": "Permanent magnet synchronous machines generate parasitic torque pulsations owing to distortion of the stator flux linkage distribution, variable magnetic reluctance at the stator slots, and secondary phenomena. The consequences are speed oscillations which, although small in magnitude, deteriorate the performance of the drive in demanding applications. The parasitic effects are analysed and modelled using the complex state-variable approach. A fast current control system is employed to produce highfrequency electromagnetic torque components for compensation. A self-commissioning scheme is described which identifies the machine parameters, particularly the torque ripple functions which depend on the angular position of the rotor. Variations of permanent magnet flux density with temperature are compensated by on-line adaptation. The algorithms for adaptation and control are implemented in a standard microcontroller system without additional hardware. The effectiveness of the adaptive torque ripple compensation is demonstrated by experiments.",
"title": ""
},
{
"docid": "ccc4994ba255084af5456925ba6c164e",
"text": "This letter proposes a novel, small, printed monopole antenna for ultrawideband (UWB) applications with dual band-notch function. By cutting an inverted fork-shaped slit in the ground plane, additional resonance is excited, and hence much wider impedance bandwidth can be produced. To generate dual band-notch characteristics, we use a coupled inverted U-ring strip in the radiating patch. The measured results reveal that the presented dual band-notch monopole antenna offers a wide bandwidth with two notched bands, covering all the 5.2/5.8-GHz WLAN, 3.5/5.5-GHz WiMAX, and 4-GHz C-bands. The proposed antenna has a small size of 12<formula formulatype=\"inline\"><tex Notation=\"TeX\">$\\,\\times\\,$</tex> </formula>18 mm<formula formulatype=\"inline\"><tex Notation=\"TeX\">$^{2}$</tex> </formula> or about <formula formulatype=\"inline\"><tex Notation=\"TeX\">$0.15 \\lambda \\times 0.25 \\lambda$</tex></formula> at 4.2 GHz (first resonance frequency), which has a size reduction of 28% with respect to the previous similar antenna. Simulated and measured results are presented to validate the usefulness of the proposed antenna structure UWB applications.",
"title": ""
},
{
"docid": "e75ec4137b0c559a1c375d97993448b0",
"text": "In recent years, consumer-class UAVs have come into public view and cyber security starts to attract the attention of researchers and hackers. The tasks of positioning, navigation and return-to-home (RTH) of UAV heavily depend on GPS. However, the signal structure of civil GPS used by UAVs is completely open and unencrypted, and the signal received by ground devices is very weak. As a result, GPS signals are vulnerable to jamming and spoofing. The development of software define radio (SDR) has made GPS-spoofing easy and costless. GPS-spoofing may cause UAVs to be out of control or even hijacked. In this paper, we propose a novel method to detect GPS-spoofing based on monocular camera and IMU sensor of UAV. Our method was demonstrated on the UAV of DJI Phantom 4.",
"title": ""
},
{
"docid": "bd20bbe7deb2383b6253ec3f576dcf56",
"text": "Despite recent advances, the remaining bottlenecks in deep generative models are necessity of extensive training and difficulties with generalization from small number of training examples. We develop a new generative model called Generative Matching Network which is inspired by the recently proposed matching networks for one-shot learning in discriminative tasks. By conditioning on the additional input dataset, our model can instantly learn new concepts that were not available in the training data but conform to a similar generative process. The proposed framework does not explicitly restrict diversity of the conditioning data and also does not require an extensive inference procedure for training or adaptation. Our experiments on the Omniglot dataset demonstrate that Generative Matching Networks significantly improve predictive performance on the fly as more additional data is available and outperform existing state of the art conditional generative models.",
"title": ""
},
{
"docid": "c17e6363762e0e9683b51c0704d43fa7",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "8f957dab2aa6b186b61bc309f3f2b5c3",
"text": "Learning deeper convolutional neural networks has become a tendency in recent years. However, many empirical evidences suggest that performance improvement cannot be attained by simply stacking more layers. In this paper, we consider the issue from an information theoretical perspective, and propose a novel method Relay Backpropagation, which encourages the propagation of effective information through the network in training stage. By virtue of the method, we achieved the first place in ILSVRC 2015 Scene Classification Challenge. Extensive experiments on two challenging large scale datasets demonstrate the effectiveness of our method is not restricted to a specific dataset or network architecture.",
"title": ""
},
{
"docid": "455c080ab112cd4f71a29ab84af019f5",
"text": "We propose a novel image inpainting approach in which the exemplar and the sparse representation are combined together skillfully. In the process of image inpainting, often there will be such a situation: although the sum of squared differences (SSD) of exemplar patch is the smallest among all the candidate patches, there may be a noticeable visual discontinuity in the recovered image when using the exemplar patch to replace the target patch. In this case, we cleverly use the sparse representation of image over a redundant dictionary to recover the target patch, instead of using the exemplar patch to replace it, so that we can promptly prevent the occurrence and accumulation of errors, and obtain satisfied results. Experiments on a number of real and synthetic images demonstrate the effectiveness of proposed algorithm, and the recovered images can better meet the requirements of human vision.",
"title": ""
}
] | scidocsrr |
bb96da6f83753746b0a0a7f7b80623b1 | A computer vision assisted system for autonomous forklift vehicles in real factory environment | [
{
"docid": "dbd7b707910d2b7ba0a3c4574a01bdaa",
"text": "Visual recognition for object grasping is a well-known challenge for robot automation in industrial applications. A typical example is pallet recognition in industrial environment for pick-and-place automated process. The aim of vision and reasoning algorithms is to help robots in choosing the best pallets holes location. This work proposes an application-based approach, which ful l all requirements, dealing with every kind of occlusions and light situations possible. Even some meaning noise (or meaning misunderstanding) is considered. A pallet model, with limited degrees of freedom, is described and, starting from it, a complete approach to pallet recognition is outlined. In the model we de ne both virtual and real corners, that are geometrical object proprieties computed by different image analysis operators. Real corners are perceived by processing brightness information directly from the image, while virtual corners are inferred at a higher level of abstraction. A nal reasoning stage selects the best solution tting the model. Experimental results and performance are reported in order to demonstrate the suitability of the proposed approach.",
"title": ""
}
] | [
{
"docid": "1f02f9dae964a7e326724faa79f5ddc3",
"text": "The purpose of this review was to examine published research on small-group development done in the last ten years that would constitute an empirical test of Tuckman’s (1965) hypothesis that groups go through these stages of “forming,” “storming,” “norming,” and “performing.” Of the twenty-two studies reviewed, only one set out to directly test this hypothesis, although many of the others could be related to it. Following a review of these studies, a fifth stage, “adjourning.” was added to the hypothesis, and more empirical work was recommended.",
"title": ""
},
{
"docid": "9c3050cca4deeb2d94ae5cff883a2d68",
"text": "High speed, low latency obstacle avoidance is essential for enabling Micro Aerial Vehicles (MAVs) to function in cluttered and dynamic environments. While other systems exist that do high-level mapping and 3D path planning for obstacle avoidance, most of these systems require high-powered CPUs on-board or off-board control from a ground station. We present a novel entirely on-board approach, leveraging a light-weight low power stereo vision system on FPGA. Our approach runs at a frame rate of 60 frames a second on VGA-sized images and minimizes latency between image acquisition and performing reactive maneuvers, allowing MAVs to fly more safely and robustly in complex environments. We also suggest our system as a light-weight safety layer for systems undertaking more complex tasks, like mapping the environment. Finally, we show our algorithm implemented on a lightweight, very computationally constrained platform, and demonstrate obstacle avoidance in a variety of environments.",
"title": ""
},
{
"docid": "d43dc521d3f0f17ccd4840d6081dcbfe",
"text": "In Vehicular Ad hoc NETworks (VANETs), authentication is a crucial security service for both inter-vehicle and vehicle-roadside communications. On the other hand, vehicles have to be protected from the misuse of their private data and the attacks on their privacy, as well as to be capable of being investigated for accidents or liabilities from non-repudiation. In this paper, we investigate the authentication issues with privacy preservation and non-repudiation in VANETs. We propose a novel framework with preservation and repudiation (ACPN) for VANETs. In ACPN, we introduce the public-key cryptography (PKC) to the pseudonym generation, which ensures legitimate third parties to achieve the non-repudiation of vehicles by obtaining vehicles' real IDs. The self-generated PKCbased pseudonyms are also used as identifiers instead of vehicle IDs for the privacy-preserving authentication, while the update of the pseudonyms depends on vehicular demands. The existing ID-based signature (IBS) scheme and the ID-based online/offline signature (IBOOS) scheme are used, for the authentication between the road side units (RSUs) and vehicles, and the authentication among vehicles, respectively. Authentication, privacy preservation, non-repudiation and other objectives of ACPN have been analyzed for VANETs. Typical performance evaluation has been conducted using efficient IBS and IBOOS schemes. We show that the proposed ACPN is feasible and adequate to be used efficiently in the VANET environment.",
"title": ""
},
{
"docid": "8ccb5aeb084c9a6223dc01fa296d908e",
"text": "Effective chronic disease management is essential to improve positive health outcomes, and incentive strategies are useful in promoting self-care with longevity. Gamification, applied with mHealth (mobile health) applications, has the potential to better facilitate patient self-management. This review article addresses a knowledge gap around the effective use of gamification design principles, or mechanics, in developing mHealth applications. Badges, leaderboards, points and levels, challenges and quests, social engagement loops, and onboarding are mechanics that comprise gamification. These mechanics are defined and explained from a design and development perspective. Health and fitness applications with gamification mechanics include: bant which uses points, levels, and social engagement, mySugr which uses challenges and quests, RunKeeper which uses leaderboards as well as social engagement loops and onboarding, Fitocracy which uses badges, and Mango Health, which uses points and levels. Specific design considerations are explored, an example of the efficacy of a gamified mHealth implementation in facilitating improved self-management is provided, limitations to this work are discussed, a link between the principles of gaming and gamification in health and wellness technologies is provided, and suggestions for future work are made. We conclude that gamification could be leveraged in developing applications with the potential to better facilitate self-management in persons with chronic conditions.",
"title": ""
},
{
"docid": "00d44e09b62be682b902b01a3f3a56c2",
"text": "A novel approach is presented to efficiently render local subsurface scattering effects. We introduce an importance sampling scheme for a practical subsurface scattering model. It leads to a simple and efficient rendering algorithm, which operates in image-space, and which is even amenable for implementation on graphics hardware. We demonstrate the applicability of our technique to the problem of skin rendering, for which the subsurface transport of light typically remains local. Our implementation shows that plausible images can be rendered interactively using hardware acceleration.",
"title": ""
},
{
"docid": "ade9860157680b2ca6820042f0cda302",
"text": "This chapter has two main objectives: to review influential ideas and findings in the literature and to outline the organization and content of the volume. The first part of the chapter lays a conceptual and empirical foundation for other chapters in the volume. Specifically, the chapter defines and distinguishes the key concepts of prejudice, stereotypes, and discrimination, highlighting how bias can occur at individual, institutional, and cultural levels. We also review different theoretical perspectives on these phenomena, including individual differences, social cognition, functional relations between groups, and identity concerns. We offer a broad overview of the field, charting how this area has developed over previous decades and identify emerging trends and future directions. The second part of the chapter focuses specifically on the coverage of the area in the present volume. It explains the organization of the book and presents a brief synopsis of the chapters in the volume. Throughout psychology’s history, researchers have evinced strong interest in understanding prejudice, stereotyping, and discrimination (Brewer & Brown, 1998; Dovidio, 2001; Duckitt, 1992; Fiske, 1998), as well as the phenomenon of intergroup bias more generally (Hewstone, Rubin, & Willis, 2002). Intergroup bias generally refers to the systematic tendency to evaluate one’s own membership group (the ingroup) or its members more favorably than a non-membership group (the outgroup) or its members. These topics have a long history in the disciplines of anthropology and sociology (e.g., Sumner, 1906). However, social psychologists, building on the solid foundations of Gordon Allport’s (1954) masterly volume, The Nature of Prejudice, have developed a systematic and more nuanced analysis of bias and its associated phenomena. Interest in prejudice, stereotyping, and discrimination is currently shared by allied disciplines such as sociology and political science, and emerging disciplines such as neuroscience. The practical implications of this 4 OVERVIEW OF THE TOPIC large body of research are widely recognized in the law (Baldus, Woodworth, & Pulaski, 1990; Vidmar, 2003), medicine (Institute of Medicine, 2003), business (e.g., Brief, Dietz, Cohen, et al., 2000), the media, and education (e.g., Ben-Ari & Rich, 1997; Hagendoorn &",
"title": ""
},
{
"docid": "a89cd3351d6a427d18a461893949e0d7",
"text": "Touch is a powerful vehicle for communication between humans. The way we touch (how) embraces and mediates certain emotions such as anger, joy, fear, or love. While this phenomenon is well explored for human interaction, HCI research is only starting to uncover the fine granularity of sensory stimulation and responses in relation to certain emotions. Within this paper we present the findings from a study exploring the communication of emotions through a haptic system that uses tactile stimulation in mid-air. Here, haptic descriptions for specific emotions (e.g., happy, sad, excited, afraid) were created by one group of users to then be reviewed and validated by two other groups of users. We demonstrate the non-arbitrary mapping between emotions and haptic descriptions across three groups. This points to the huge potential for mediating emotions through mid-air haptics. We discuss specific design implications based on the spatial, directional, and haptic parameters of the created haptic descriptions and illustrate their design potential for HCI based on two design ideas.",
"title": ""
},
{
"docid": "03e267aeeef5c59aab348775d264afce",
"text": "Visual relations, such as person ride bike and bike next to car, offer a comprehensive scene understanding of an image, and have already shown their great utility in connecting computer vision and natural language. However, due to the challenging combinatorial complexity of modeling subject-predicate-object relation triplets, very little work has been done to localize and predict visual relations. Inspired by the recent advances in relational representation learning of knowledge bases and convolutional object detection networks, we propose a Visual Translation Embedding network (VTransE) for visual relation detection. VTransE places objects in a low-dimensional relation space where a relation can be modeled as a simple vector translation, i.e., subject + predicate ≈ object. We propose a novel feature extraction layer that enables object-relation knowledge transfer in a fully-convolutional fashion that supports training and inference in a single forward/backward pass. To the best of our knowledge, VTransE is the first end-toend relation detection network. We demonstrate the effectiveness of VTransE over other state-of-the-art methods on two large-scale datasets: Visual Relationship and Visual Genome. Note that even though VTransE is a purely visual model, it is still competitive to the Lu’s multi-modal model with language priors [27].",
"title": ""
},
{
"docid": "c678ea5e9bc8852ec80a8315a004c7f0",
"text": "Educators, researchers, and policy makers have advocated student involvement for some time as an essential aspect of meaningful learning. In the past twenty years engineering educators have implemented several means of better engaging their undergraduate students, including active and cooperative learning, learning communities, service learning, cooperative education, inquiry and problem-based learning, and team projects. This paper focuses on classroom-based pedagogies of engagement, particularly cooperative and problem-based learning. It includes a brief history, theoretical roots, research support, summary of practices, and suggestions for redesigning engineering classes and programs to include more student engagement. The paper also lays out the research ahead for advancing pedagogies aimed at more fully enhancing students’ involvement in their learning.",
"title": ""
},
{
"docid": "ec4638bad4caf17de83ac3557254c4bf",
"text": "Explaining policies of Markov Decision Processes (MDPs) is complicated due to their probabilistic and sequential nature. We present a technique to explain policies for factored MDP by populating a set of domain-independent templates. We also present a mechanism to determine a minimal set of templates that, viewed together, completely justify the policy. Our explanations can be generated automatically at run-time with no additional effort required from the MDP designer. We demonstrate our technique using the problems of advising undergraduate students in their course selection and assisting people with dementia in completing the task of handwashing. We also evaluate our explanations for courseadvising through a user study involving students.",
"title": ""
},
{
"docid": "fe3a3ffab9a98cf8f4f71c666383780c",
"text": "We present a new dataset and model for textual entailment, derived from treating multiple-choice question-answering as an entailment problem. SCITAIL is the first entailment set that is created solely from natural sentences that already exist independently “in the wild” rather than sentences authored specifically for the entailment task. Different from existing entailment datasets, we create hypotheses from science questions and the corresponding answer candidates, and premises from relevant web sentences retrieved from a large corpus. These sentences are often linguistically challenging. This, combined with the high lexical similarity of premise and hypothesis for both entailed and non-entailed pairs, makes this new entailment task particularly difficult. The resulting challenge is evidenced by state-of-the-art textual entailment systems achieving mediocre performance on SCITAIL, especially in comparison to a simple majority class baseline. As a step forward, we demonstrate that one can improve accuracy on SCITAIL by 5% using a new neural model that exploits linguistic structure.",
"title": ""
},
{
"docid": "369746e53baad6fef5df42935fb5c516",
"text": "SWOT analysis is an established method for assisting the formulation of strategy. An application to strategy formulation and its incorporation into the strategic development process at the University of Warwick is described. The application links SWOT analysis to resource-based planning, illustrates it as an iterative rather than a linear process and embeds it within the overall planning process. Lessons are drawn both for the University and for the strategy formulation process itself. 2003 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "f35007fdca9c35b4c243cb58bd6ede7a",
"text": "Photovoltaic Thermal Collector (PVT) is a hybrid generator which converts solar radiation into useful electric and thermal energies simultaneously. This paper gathers all PVT sub-models in order to form a unique dynamic model that reveals PVT parameters interactions. As PVT is a multi-input/output/output system, a state space model based on energy balance equations is developed in order to analyze and assess the parameters behaviors and correlations of PVT constituents. The model simulation is performed using LabVIEW Software. The simulation shows the impact of the fluid flow rate variation on the collector efficiencies (thermal and electrical).",
"title": ""
},
{
"docid": "634c58784820e70145b417f51414fc96",
"text": "A considerable number of studies have been undertaken on using smart card data to analyse urban mobility. Most of these studies aim to identify recurrent passenger habits, reveal mobility patterns, reconstruct and predict passenger flows, etc. Forecasting mobility demand is a central problem for public transport authorities and operators alike. It is the first step to efficient allocation and optimisation of available resources. This paper explores an innovative approach to forecasting dynamic Origin-Destination (OD) matrices in a subway network using long Short-term Memory (LSTM) recurrent neural networks. A comparison with traditional approaches, such as calendar methodology or Vector Autoregression is conducted on a real smart card dataset issued from the public transport network of Rennes Métropole, France. The obtained results show that reliable short-term prediction (over a 15 minutes time horizon) of OD pairs can be achieved with the proposed approach. We also experiment with the effect of taking into account additional data about OD matrices of nearby transport systems (buses in this case) on the prediction accuracy.",
"title": ""
},
{
"docid": "1f27caaaeae8c82db6a677f66f2dee74",
"text": "State of the art visual SLAM systems have recently been presented which are capable of accurate, large-scale and real-time performance, but most of these require stereo vision. Important application areas in robotics and beyond open up if similar performance can be demonstrated using monocular vision, since a single camera will always be cheaper, more compact and easier to calibrate than a multi-camera rig. With high quality estimation, a single camera moving through a static scene of course effectively provides its own stereo geometry via frames distributed over time. However, a classic issue with monocular visual SLAM is that due to the purely projective nature of a single camera, motion estimates and map structure can only be recovered up to scale. Without the known inter-camera distance of a stereo rig to serve as an anchor, the scale of locally constructed map portions and the corresponding motion estimates is therefore liable to drift over time. In this paper we describe a new near real-time visual SLAM system which adopts the continuous keyframe optimisation approach of the best current stereo systems, but accounts for the additional challenges presented by monocular input. In particular, we present a new pose-graph optimisation technique which allows for the efficient correction of rotation, translation and scale drift at loop closures. Especially, we describe the Lie group of similarity transformations and its relation to the corresponding Lie algebra. We also present in detail the system’s new image processing front-end which is able accurately to track hundreds of features per frame, and a filter-based approach for feature initialisation within keyframe-based SLAM. Our approach is proven via large-scale simulation and real-world experiments where a camera completes large looped trajectories.",
"title": ""
},
{
"docid": "71c31f41d116a51786a4e8ded2c5fb87",
"text": "Targeting CTLA-4 represents a new type of immunotherapeutic approach, namely immune checkpoint inhibition. Blockade of CTLA-4 by ipilimumab was the first strategy to achieve a significant clinical benefit for late-stage melanoma patients in two phase 3 trials. These results fueled the notion of immunotherapy being the breakthrough strategy for oncology in 2013. Subsequently, many trials have been set up to test various immune checkpoint modulators in malignancies, not only in melanoma. In this review, recent new ideas about the mechanism of action of CTLA-4 blockade, its current and future therapeutic use, and the intensive search for biomarkers for response will be discussed. Immune checkpoint blockade, targeting CTLA-4 and/or PD-1/PD-L1, is currently the most promising systemic therapeutic approach to achieve long-lasting responses or even cure in many types of cancer, not just in patients with melanoma.",
"title": ""
},
{
"docid": "176dc97bd2ce3c1fd7d3a8d6913cff70",
"text": "Packet broadcasting is a form of data communications architecture which can combine the features of packet switching with those of broadcast channels for data communication networks. Much of the basic theory of packet broadcasting has been presented as a byproduct in a sequence of papers with a distinctly practical emphasis. In this paper we provide a unified presentation of packet broadcasting theory. In Section I1 we introduce the theory of packet broadcasting data networks. In Section I11 we provide some theoretical results dealing with the performance of a packet broadcasting network when the users of the network have a variety of data rates. In Section IV we deal with packet broadcasting networks distributed in space, and in Section V we derive some properties of power-limited packet broadcasting channels,showing that the throughput of such channels can approach that of equivalent point-to-point channels.",
"title": ""
},
{
"docid": "8d350db000f7a2b1481b9cad6ce318f1",
"text": "Purpose – The purpose of this research paper is to offer a solution to differentiate supply chain planning for products with different demand features and in different life-cycle phases. Design/methodology/approach – A normative framework for selecting a planning approach was developed based on a literature review of supply chain differentiation and supply chain planning. Explorative mini-cases from three companies – Vaisala, Mattel, Inc. and Zara – were investigated to identify the features of their innovative planning solutions. The selection framework was applied to the case company’s new business unit dealing with a product portfolio of highly innovative products as well as commodity items. Findings – The need for planning differentiation is essential for companies with large product portfolios operating in volatile markets. The complexity of market, channel and supply networks makes supply chain planning more intricate. The case company provides an example of using the framework for rough segmentation to differentiate planning. Research limitations/implications – The paper widens Fisher’s supply chain selection framework to consider the aspects of planning. Practical implications – Despite substantial resources being used, planning results are often not reliable or consistent enough to ensure cost efficiency and adequate customer service. Therefore there is a need for management to critically consider current planning solutions. Originality/value – The procedure outlined in this paper is a first illustrative example of the type of processes needed to monitor and select the right planning approach.",
"title": ""
},
{
"docid": "4b013b69e174914aafc09100e182dd14",
"text": "The network of patents connected by citations is an evolving graph, which provides a representation of the innovation process. A patent citing another implies that the cited patent reflects a piece of previously existing knowledge that the citing patent builds upon. A methodology presented here (1) identifies actual clusters of patents: i.e., technological branches, and (2) gives predictions about the temporal changes of the structure of the clusters. A predictor, called the citation vector, is defined for characterizing technological development to show how a patent cited by other patents belongs to various industrial fields. The clustering technique adopted is able to detect the new emerging recombinations, and predicts emerging new technology clusters. The predictive ability of our new method is illustrated on the example of USPTO subcategory 11, Agriculture, Food, Textiles. A cluster of patents is determined based on citation data up to 1991, which shows significant overlap of the class 442 formed at the beginning of 1997. These new tools of predictive analytics could support policy decision making processes in science and technology, and help formulate recommendations for action.",
"title": ""
},
{
"docid": "ef8a61d3ff3aad461c57fe893e0b5bb6",
"text": "In this paper, we propose an underwater wireless sensor network (UWSN) named SOUNET where sensor nodes form and maintain a tree-topological network for data gathering in a self-organized manner. After network topology discovery via packet flooding, the sensor nodes consistently update their parent node to ensure the best connectivity by referring to the timevarying neighbor tables. Such a persistent and self-adaptive method leads to high network connectivity without any centralized control, even when sensor nodes are added or unexpectedly lost. Furthermore, malfunctions that frequently happen in self-organized networks such as node isolation and closed loop are resolved in a simple way. Simulation results show that SOUNET outperforms other conventional schemes in terms of network connectivity, packet delivery ratio (PDR), and energy consumption throughout the network. In addition, we performed an experiment at the Gyeongcheon Lake in Korea using commercial underwater modems to verify that SOUNET works well in a real environment.",
"title": ""
}
] | scidocsrr |
882aa388eedca0c6c969b96359cac93b | Swarm Intelligence Algorithms for Data Clustering | [
{
"docid": "3293e4e0d7dd2e29505db0af6fbb13d1",
"text": "A new heuristic approach for minimizing possibly nonlinear and non-differentiable continuous space functions is presented. By means of an extensive testbed it is demonstrated that the new method converges faster and with more certainty than many other acclaimed global optimization methods. The new method requires few control variables, is robust, easy to use, and lends itself very well to parallel computation.",
"title": ""
}
] | [
{
"docid": "bbf5561f88f31794ca95dd991c074b98",
"text": "O CTO B E R 2014 | Volume 18, Issue 4 GetMobile Every time you use a voice command on your smartphone, you are benefitting from a technique called cloud offload. Your speech is captured by a microphone, pre-processed, then sent over a wireless network to a cloud service that converts speech to text. The result is then forwarded to another cloud service or sent back to your mobile device, depending on the application. Speech recognition and many other resource-intensive mobile services require cloud offload. Otherwise, the service would be too slow and drain too much of your battery. Research projects on cloud offload are hot today, with MAUI [4] in 2010, Odessa [13] and CloneCloud [2] in 2011, and COMET [8] in 2012. These build on a rich heritage of work dating back to the mid-1990s on a theme that is broadly characterized as cyber foraging. They are also relevant to the concept of cloudlets [18] that has emerged as an important theme in mobile-cloud convergence. Reflecting my participation in this evolution from its origins, this article is a personal account of the key developments in this research area. It focuses on mobile computing, ignoring many other uses of remote execution since the 1980s such as distributed processing, query processing, distributed object systems, and distributed partitioning.",
"title": ""
},
{
"docid": "e5241f16c4bebf7c87d8dcc99ff38bc4",
"text": "Several techniques for estimating the reliability of estimated error rates and for estimating the signicance of observed dierences in error rates are explored in this paper. Textbook formulas which assume a large test set, i.e., a normal distribution, are commonly used to approximate the condence limits of error rates or as an approximate signicance test for comparing error rates. Expressions for determining more exact limits and signicance levels for small samples are given here, and criteria are also given for determining when these more exact methods should be used. The assumed normal distribution gives a poor approximation to the condence interval in most cases, but is usually useful for signicance tests when the proper mean and variance expressions are used. A commonly used 62 signicance test uses an improper expression for , which is too low and leads to a high likelihood of Type I errors. Common machine learning methods for estimating signicance from observations on a single sample may be unreliable.",
"title": ""
},
{
"docid": "8df0689ffe5c730f7a6ef6da65bec57e",
"text": "Image-based reconstruction of 3D shapes is inherently biased under the occurrence of interreflections, since the observed intensity at surface concavities consists of direct and global illumination components. This issue is commonly not considered in a Photometric Stereo (PS) framework. Under the usual assumption of only direct reflections, this corrupts the normal estimation process in concave regions and thus leads to inaccurate results. For this reason, global illumination effects need to be considered for the correct reconstruction of surfaces affected by interreflections. While there is ongoing research in the field of inverse lighting (i.e. separation of global and direct illumination components), the interreflection aspect remains oftentimes neglected in the field of 3D shape reconstruction. In this study, we present a computationally driven approach for iteratively solving that problem. Initially, we introduce a photometric stereo approach that roughly reconstructs a surface with at first unknown reflectance properties. Then, we show that the initial surface reconstruction result can be refined iteratively regarding non-distant light sources and, especially, interreflections. The benefit for the reconstruction accuracy is evaluated on real Lambertian surfaces using laser range scanner data as ground truth.",
"title": ""
},
{
"docid": "e2427ff836c8b83a75d8f7074656a025",
"text": "With the rapid growth of smartphone and tablet users, Device-to-Device (D2D) communications have become an attractive solution for enhancing the performance of traditional cellular networks. However, relevant security issues involved in D2D communications have not been addressed yet. In this paper, we investigate the security requirements and challenges for D2D communications, and present a secure and efficient key agreement protocol, which enables two mobile devices to establish a shared secret key for D2D communications without prior knowledge. Our approach is based on the Diffie-Hellman key agreement protocol and commitment schemes. Compared to previous work, our proposed protocol introduces less communication and computation overhead. We present the design details and security analysis of the proposed protocol. We also integrate our proposed protocol into the existing Wi-Fi Direct protocol, and implement it using Android smartphones.",
"title": ""
},
{
"docid": "1052a1454d421290dfdd8fdb448a50cc",
"text": "Viola and Jones [9] introduced a method to accurately and rapidly detect faces within an image. This technique can be adapted to accurately detect facial features. However, the area of the image being analyzed for a facial feature needs to be regionalized to the location with the highest probability of containing the feature. By regionalizing the detection area, false positives are eliminated and the speed of detection is increased due to the reduction of the area examined. INTRODUCTION The human face poses even more problems than other objects since the human face is a dynamic object that comes in many forms and colors [7]. However, facial detection and tracking provides many benefits. Facial recognition is not possible if the face is not isolated from the background. Human Computer Interaction (HCI) could greatly be improved by using emotion, pose, and gesture recognition, all of which require face and facial feature detection and tracking [2]. Although many different algorithms exist to perform face detection, each has its own weaknesses and strengths. Some use flesh tones, some use contours, and other are even more complex involving templates, neural networks, or filters. These algorithms suffer from the same problem; they are computationally expensive [2]. An image is only a collection of color and/or light intensity values. Analyzing these pixels for face detection is time consuming and difficult to accomplish because of the wide variations of shape and JCSC 21, 4 (April 2006) 128 Figure 1 Common Haar Features pigmentation within a human face. Pixels often require reanalysis for scaling and precision. Viola and Jones devised an algorithm, called Haar Classifiers, to rapidly detect any object, including human faces, using AdaBoost classifier cascades that are based on Haar-like features and not pixels [9]. HAAR CASCADE CLASSIFIERS The core basis for Haar classifier object detection is the Haar-like features. These features, rather than using the intensity values of a pixel, use the change in contrast values between adjacent rectangular groups of pixels. The contrast variances between the pixel groups are used to determine relative light and dark areas. Two or three adjacent groups with a relative contrast variance form a Haar-like feature. Haar-like features, as shown in Figure 1 are used to detect an image [8]. Haar features can easily be scaled by increasing or decreasing the size of the pixel group being examined. This allows features to be used to detect objects of various sizes. Integral Image The simple rectangular features of an image are c a l c u l a t e d u s i n g a n intermediate representation of an image, called the integral image [9]. The integral image is an array containing the sums of the pixels’ intensity values located directly to the left of a pixel and directly above the pixel at location (x, y) inclusive. So if A[x,y] is the original image and AI[x,y] is the integral image then the integral image is computed as shown in equation 1 and illustrated in Figure 2. (1) [ ] AI x y A x y x x y y , ( ' , ' ) ' , ' =",
"title": ""
},
{
"docid": "d21e4e55966bac19bbed84b23360b66d",
"text": "Smart growth is an approach to urban planning that provides a framework for making community development decisions. Despite its growing use, it is not known whether smart growth can impact physical activity. This review utilizes existing built environment research on factors that have been used in smart growth planning to determine whether they are associated with physical activity or body mass. Searching the MEDLINE, Psycinfo and Web-of-Knowledge databases, 204 articles were identified for descriptive review, and 44 for a more in-depth review of studies that evaluated four or more smart growth planning principles. Five smart growth factors (diverse housing types, mixed land use, housing density, compact development patterns and levels of open space) were associated with increased levels of physical activity, primarily walking. Associations with other forms of physical activity were less common. Results varied by gender and method of environmental assessment. Body mass was largely unaffected. This review suggests that several features of the built environment associated with smart growth planning may promote important forms of physical activity. Future smart growth community planning could focus more directly on health, and future research should explore whether combinations or a critical mass of smart growth features is associated with better population health outcomes.",
"title": ""
},
{
"docid": "c29a2429d6dd7bef7761daf96a29daaf",
"text": "In this meta-analysis, we synthesized data from published journal articles that investigated viewers’ enjoyment of fright and violence. Given the limited research on this topic, this analysis was primarily a way of summarizing the current state of knowledge and developing directions for future research. The studies selected (a) examined frightening or violent media content; (b) used self-report measures of enjoyment or preference for such content (the dependent variable); and (c) included independent variables that were given theoretical consideration in the literature. The independent variables examined were negative affect and arousal during viewing, empathy, sensation seeking, aggressiveness, and the respondents’ gender and age. The analysis confirmed that male viewers, individuals lower in empathy, and those higher in sensation seeking and aggressiveness reported more enjoyment of fright and violence. Some support emerged for Zillmann’s (1980, 1996) model of suspense enjoyment. Overall, the results demonstrate the importance of considering how viewers interpret or appraise their reactions to fright and violence. However, the studies were so diverse in design and measurement methods that it was difficult to identify the underlying processes. Suggestions are proposed for future research that will move toward the integration of separate lines of inquiry in a unified approach to understanding entertainment. MEDIA PSYCHOLOGY, 7, 207–237 Copyright © 2005, Lawrence Erlbaum Associates, Inc.",
"title": ""
},
{
"docid": "8700c7f150c00013990c837a4bf7b655",
"text": "The rule of thumb that logistic and Cox models should be used with a minimum of 10 outcome events per predictor variable (EPV), based on two simulation studies, may be too conservative. The authors conducted a large simulation study of other influences on confidence interval coverage, type I error, relative bias, and other model performance measures. They found a range of circumstances in which coverage and bias were within acceptable levels despite less than 10 EPV, as well as other factors that were as influential as or more influential than EPV. They conclude that this rule can be relaxed, in particular for sensitivity analyses undertaken to demonstrate adequate control of confounding.",
"title": ""
},
{
"docid": "4d136b60209ef625c09a15e3e5abb7f7",
"text": "Alterations in the bidirectional interactions between the intestine and the nervous system have important roles in the pathogenesis of irritable bowel syndrome (IBS). A body of largely preclinical evidence suggests that the gut microbiota can modulate these interactions. A small and poorly defined role for dysbiosis in the development of IBS symptoms has been established through characterization of altered intestinal microbiota in IBS patients and reported improvement of subjective symptoms after its manipulation with prebiotics, probiotics, or antibiotics. It remains to be determined whether IBS symptoms are caused by alterations in brain signaling from the intestine to the microbiota or primary disruption of the microbiota, and whether they are involved in altered interactions between the brain and intestine during development. We review the potential mechanisms involved in the pathogenesis of IBS in different groups of patients. Studies are needed to better characterize alterations to the intestinal microbiome in large cohorts of well-phenotyped patients, and to correlate intestinal metabolites with specific abnormalities in gut-brain interactions.",
"title": ""
},
{
"docid": "96b270cf4799d041217ee3e071383ab1",
"text": "Cluster analysis has been widely used in several disciplines, such as statistics, software engineering, biology, psychology and other social sciences, in order to identify natural groups in large amounts of data. Clustering has also been widely adopted by researchers within computer science and especially the database community. K-means is the most famous clustering algorithms. In this paper, the performance of basic k means algorithm is evaluated using various distance metrics for iris dataset, wine dataset, vowel dataset, ionosphere dataset and crude oil dataset by varying no of clusters. From the result analysis we can conclude that the performance of k means algorithm is based on the distance metrics for selected database. Thus, this work will help to select suitable distance metric for particular application.",
"title": ""
},
{
"docid": "d5f8c9f7a495d9ebc5517b18ced3e784",
"text": "BACKGROUND\nFor some adolescents feeling lonely can be a protracted and painful experience. It has been suggested that engaging in health risk behaviours such as substance use and sexual behaviour may be a way of coping with the distress arising from loneliness during adolescence. However, the association between loneliness and health risk behaviour has been little studied to date. To address this research gap, the current study examined this relation among Russian and U.S. adolescents.\n\n\nMETHODS\nData were used from the Social and Health Assessment (SAHA), a school-based survey conducted in 2003. A total of 1995 Russian and 2050 U.S. students aged 13-15 years old were included in the analysis. Logistic regression was used to examine the association between loneliness and substance use, sexual risk behaviour, and violence.\n\n\nRESULTS\nAfter adjusting for demographic characteristics and depressive symptoms, loneliness was associated with a significantly increased risk of adolescent substance use in both Russia and the United States. Lonely Russian girls were significantly more likely to have used marijuana (odds ratio [OR]: 2.28; confidence interval [CI]: 1.17-4.45), while lonely Russian boys had higher odds for past 30-day smoking (OR, 1.87; CI, 1.08-3.24). In the U.S. loneliness was associated with the lifetime use of illicit drugs (excepting marijuana) among boys (OR, 3.09; CI, 1.41-6.77) and with lifetime marijuana use (OR, 1.79; CI, 1.26-2.55), past 30-day alcohol consumption (OR, 1.80; CI, 1.18-2.75) and past 30-day binge drinking (OR, 2.40; CI, 1.56-3.70) among girls. The only relation between loneliness and sexual risk behaviour was among Russian girls, where loneliness was associated with significantly higher odds for ever having been pregnant (OR, 1.69; CI: 1.12-2.54). Loneliness was not associated with violent behaviour among boys or girls in either country.\n\n\nCONCLUSION\nLoneliness is associated with adolescent health risk behaviour among boys and girls in both Russia and the United States. Further research is now needed in both settings using quantitative and qualitative methods to better understand the association between loneliness and health risk behaviours so that effective interventions can be designed and implemented to mitigate loneliness and its effects on adolescent well-being.",
"title": ""
},
{
"docid": "e2950089f76e1509ad2aa74ea5c738eb",
"text": "In this review the knowledge status of and future research options on a green gas supply based on biogas production by co-digestion is explored. Applications and developments of the (bio)gas supply in The Netherlands have been considered, whereafter literature research has been done into the several stages from production of dairy cattle manure and biomass to green gas injection into the gas grid. An overview of a green gas supply chain has not been made before. In this study it is concluded that on installation level (micro-level) much practical knowledge is available and on macro-level knowledge about availability of biomass. But on meso-level (operations level of a green gas supply) very little research has been done until now. Future research should include the modeling of a green gas supply chain on an operations level, i.e. questions must be answered as where to build digesters based on availability of biomass. Such a model should also advise on technology of upgrading depending on scale factors. Future research might also give insight in the usability of mixing (partly upgraded) biogas with natural gas. The preconditions for mixing would depend on composition of the gas, the ratio of gases to be mixed and the requirements on the mixture.",
"title": ""
},
{
"docid": "a981db3aa149caec10b1824c82840782",
"text": "It has been suggested that the performance of a team is determined by the team members’ roles. An analysis of the performance of 342 individuals organised into 33 teams indicates that team roles characterised by creativity, co-ordination and cooperation are positively correlated with team performance. Members of developed teams exhibit certain performance enhancing characteristics and behaviours. Amongst the more developed teams there is a positive relationship between Specialist Role characteristics and team performance. While the characteristics associated with the Coordinator Role are also positively correlated with performance, these can impede the performance of less developed teams.",
"title": ""
},
{
"docid": "43184dfe77050618402900bc309203d5",
"text": "A prototype of Air Gap RLSA has been designed and simulated using hybrid air gap and FR4 dielectric material. The 28% wide bandwidth has been recorded through this approach. A 12.35dBi directive gain also recorded from the simulation. The 13.3 degree beamwidth of the radiation pattern is sufficient for high directional application. Since the proposed application was for Point to Point Link, this study concluded the Air Gap RLSA is a new candidate for this application.",
"title": ""
},
{
"docid": "5fa6f8a5ee1d458ca79c18d7b9d2e6de",
"text": "Automotive radars, along with other sensors such as lidar, (which stands for \"light detection and ranging\"), ultrasound, and cameras, form the backbone of self-driving cars and advanced driver assistant systems (ADASs). These technological advancements are enabled by extremely complex systems with a long signal processing path from radars/sensors to the controller. Automotive radar systems are responsible for the detection of objects and obstacles, their position, and speed relative to the vehicle. The development of signal processing techniques along with progress in the millimeter-wave (mm-wave) semiconductor technology plays a key role in automotive radar systems. Various signal processing techniques have been developed to provide better resolution and estimation performance in all measurement dimensions: range, azimuth-elevation angles, and velocity of the targets surrounding the vehicles. This article summarizes various aspects of automotive radar signal processing techniques, including waveform design, possible radar architectures, estimation algorithms, implementation complexity-resolution trade off, and adaptive processing for complex environments, as well as unique problems associated with automotive radars such as pedestrian detection. We believe that this review article will combine the several contributions scattered in the literature to serve as a primary starting point to new researchers and to give a bird's-eye view to the existing research community.",
"title": ""
},
{
"docid": "cbaf7cd4e17c420b7546d132959b3283",
"text": "User mobility has given rise to a variety of Web applications, in which the global positioning system (GPS) plays many important roles in bridging between these applications and end users. As a kind of human behavior, transportation modes, such as walking and driving, can provide pervasive computing systems with more contextual information and enrich a user's mobility with informative knowledge. In this article, we report on an approach based on supervised learning to automatically infer users' transportation modes, including driving, walking, taking a bus and riding a bike, from raw GPS logs. Our approach consists of three parts: a change point-based segmentation method, an inference model and a graph-based post-processing algorithm. First, we propose a change point-based segmentation method to partition each GPS trajectory into separate segments of different transportation modes. Second, from each segment, we identify a set of sophisticated features, which are not affected by differing traffic conditions (e.g., a person's direction when in a car is constrained more by the road than any change in traffic conditions). Later, these features are fed to a generative inference model to classify the segments of different modes. Third, we conduct graph-based postprocessing to further improve the inference performance. This postprocessing algorithm considers both the commonsense constraints of the real world and typical user behaviors based on locations in a probabilistic manner. The advantages of our method over the related works include three aspects. (1) Our approach can effectively segment trajectories containing multiple transportation modes. (2) Our work mined the location constraints from user-generated GPS logs, while being independent of additional sensor data and map information like road networks and bus stops. (3) The model learned from the dataset of some users can be applied to infer GPS data from others. Using the GPS logs collected by 65 people over a period of 10 months, we evaluated our approach via a set of experiments. As a result, based on the change-point-based segmentation method and Decision Tree-based inference model, we achieved prediction accuracy greater than 71 percent. Further, using the graph-based post-processing algorithm, the performance attained a 4-percent enhancement.",
"title": ""
},
{
"docid": "7516f24dad8441f6e13d211047c93f36",
"text": "The growth of the software game development industry is enormous and is gaining importance day by day. This growth imposes severe pressure and a number of issues and challenges on the game development community. Game development is a complex process, and one important game development choice is to consider the developer perspective to produce good-quality software games by improving the game development process. The objective of this study is to provide a better understanding of the developer’s dimension as a factor in software game success. It focusses mainly on an empirical investigation of the effect of key developer factors on the software game development process and eventually on the quality of the resulting game. A quantitative survey was developed and conducted to identify key developer factors for an enhanced game development process. For this study, the developed survey was used to test the research model and hypotheses. The results provide evidence that game development organizations must deal with multiple key factors to remain competitive and to handle high pressure in the software game industry. The main contribution of this paper is to investigate empirically the influence of key developer factors on the game development process.",
"title": ""
},
{
"docid": "d34be0ce0f9894d6e219d12630166308",
"text": "The need for curricular reform in K-4 mathematics is clear. Such reform must address both the content and emphasis of the curriculum as well as approaches to instruction. A longstanding preoccupation with computation and other traditional skills has dominated both what mathematics is taught and the way mathematics is taught at this level. As a result, the present K-4 curriculum is narrow in scope; fails to foster mathematical insight, reasoning, and problem solving; and emphasizes rote activities. Even more significant is that children begin to lose their belief that learning mathematics is a sense-making experience. They become passive receivers of rules and procedures rather than active participants in creating knowledge.",
"title": ""
},
{
"docid": "6bc3114cc800446f4d28eb47f40adc1e",
"text": "We propose a novel computer-aided detection (CAD) framework of breast masses in mammography. To increase detection sensitivity for various types of mammographic masses, we propose the combined use of different detection algorithms. In particular, we develop a region-of-interest combination mechanism that integrates detection information gained from unsupervised and supervised detection algorithms. Also, to significantly reduce the number of false-positive (FP) detections, the new ensemble classification algorithm is developed. Extensive experiments have been conducted on a benchmark mammogram database. Results show that our combined detection approach can considerably improve the detection sensitivity with a small loss of FP rate, compared to representative detection algorithms previously developed for mammographic CAD systems. The proposed ensemble classification solution also has a dramatic impact on the reduction of FP detections; as much as 70% (from 15 to 4.5 per image) at only cost of 4.6% sensitivity loss (from 90.0% to 85.4%). Moreover, our proposed CAD method performs as well or better (70.7% and 80.0% per 1.5 and 3.5 FPs per image respectively) than the results of mammography CAD algorithms previously reported in the literature.",
"title": ""
},
{
"docid": "2372c664173be9aa8c2497b42703a80e",
"text": "Medical devices have a great impact but rigorous production and quality norms to meet, which pushes manufacturing technology to its limits in several fields, such as electronics, optics, communications, among others. This paper briefly explores how the medical industry is absorbing many of the technological developments from other industries, and making an effort to translate them into the healthcare requirements. An example is discussed in depth: implantable neural microsystems used for brain circuits mapping and modulation. Conventionally, light sources and electrical recording points are placed on silicon neural probes for optogenetic applications. The active sites of the probe must provide enough light power to modulate connectivity between neural networks, and simultaneously ensure reliable recordings of action potentials and local field activity. These devices aim at being a flexible and scalable technology capable of acquiring knowledge about neural mechanisms. Moreover, this paper presents a fabrication method for 2-D LED-based microsystems with high aspect-ratio shafts, capable of reaching up to 20 mm deep neural structures. In addition, PDMS $\\mu $ lenses on LEDs top surface are presented for focusing and increasing light intensity on target structures.",
"title": ""
}
] | scidocsrr |
49cfcea811b0d8d1823a5281c2317fb0 | Untrimmed Video Classification for Activity Detection: submission to ActivityNet Challenge | [
{
"docid": "848aae58854681e75fae293e2f8d2fc5",
"text": "Over last several decades, computer vision researchers have been devoted to find good feature to solve different tasks, such as object recognition, object detection, object segmentation, activity recognition and so forth. Ideal features transform raw pixel intensity values to a representation in which these computer vision problems are easier to solve. Recently, deep features from covolutional neural network(CNN) have attracted many researchers in computer vision. In the supervised setting, these hierarchies are trained to solve specific problems by minimizing an objective function. More recently, the feature learned from large scale image dataset have been proved to be very effective and generic for many computer vision task. The feature learned from recognition task can be used in the object detection task. This work uncover the principles that lead to these generic feature representations in the transfer learning, which does not need to train the dataset again but transfer the rich feature from CNN learned from ImageNet dataset. We begin by summarize some related prior works, particularly the paper in object recognition, object detection and segmentation. We introduce the deep feature to computer vision task in intelligent transportation system. We apply deep feature in object detection task, especially in vehicle detection task. To make fully use of objectness proposals, we apply proposal generator on road marking detection and recognition task. Third, to fully understand the transportation situation, we introduce the deep feature into scene understanding. We experiment each task for different public datasets, and prove our framework is robust.",
"title": ""
}
] | [
{
"docid": "97c3860dfb00517f744fd9504c4e7f9f",
"text": "The plastic film surface treatment load is considered as a nonlinear capacitive load, which is rather difficult for designing of an inverter. The series resonant inverter (SRI) connected to the load via transformer has been found effective for it's driving. In this paper, a surface treatment based on a pulse density modulation (PDM) and pulse frequency modulation (PFM) hybrid control scheme is described. The PDM scheme is used to regulate the output power of the inverter and the PFM scheme is used to compensate for temperature and other environmental influences on the discharge. Experimental results show that the PDM and PFM hybrid control series-resonant inverter (SRI) makes the corona discharge treatment simple and compact, thus leading to higher efficiency.",
"title": ""
},
{
"docid": "78321a0af7f5ab76809c6f7d08f2c15a",
"text": "The mass media are ranked with respect to their perceived helpfulness in satisfying clusters of needs arising from social roles and individual dispositions. For example, integration into the sociopolitical order is best served by newspaper; while \"knowing oneself \" is best served by books. Cinema and books are more helpful as means of \"escape\" than is television. Primary relations, holidays and other cultural activities are often more important than the mass media in satisfying needs. Television is the least specialized medium, serving many different personal and political needs. The \"interchangeability\" of the media over a variety of functions orders televisions, radio, newspapers, books, and cinema in a circumplex. We speculate about which attributes of the media explain the social and psychological needs they serve best. The data, drawn from an Israeli survey, are presented as a basis for cross-cultural comparison. Disciplines Communication | Social and Behavioral Sciences This journal article is available at ScholarlyCommons: http://repository.upenn.edu/asc_papers/267 ON THE USE OF THE MASS MEDIA FOR IMPORTANT THINGS * ELIHU KATZ MICHAEL GUREVITCH",
"title": ""
},
{
"docid": "c75388c19397bf1e743970cb32649b17",
"text": "In recent years, there has been a substantial amount of work on large-scale data analytics using Hadoop-based platforms running on large clusters of commodity machines. A lessexplored topic is how those data, dominated by application logs, are collected and structured to begin with. In this paper, we present Twitter’s production logging infrastructure and its evolution from application-specific logging to a unified “client events” log format, where messages are captured in common, well-formatted, flexible Thrift messages. Since most analytics tasks consider the user session as the basic unit of analysis, we pre-materialize “session sequences”, which are compact summaries that can answer a large class of common queries quickly. The development of this infrastructure has streamlined log collection and data analysis, thereby improving our ability to rapidly experiment and iterate on various aspects of the service.",
"title": ""
},
{
"docid": "086f5e6dd7889d8dcdaddec5852afbdb",
"text": "Fast advances in the wireless technology and the intensive penetration of cell phones have motivated banks to spend large budget on building mobile banking systems, but the adoption rate of mobile banking is still underused than expected. Therefore, research to enrich current knowledge about what affects individuals to use mobile banking is required. Consequently, this study employs the Unified Theory of Acceptance and Use of Technology (UTAUT) to investigate what impacts people to adopt mobile banking. Through sampling 441 respondents, this study empirically concluded that individual intention to adopt mobile banking was significantly influenced by social influence, perceived financial cost, performance expectancy, and perceived credibility, in their order of influencing strength. The behavior was considerably affected by individual intention and facilitating conditions. As for moderating effects of gender and age, this study discovered that gender significantly moderated the effects of performance expectancy and perceived financial cost on behavioral intention, and the age considerably moderated the effects of facilitating conditions and perceived self-efficacy on actual adoption behavior.",
"title": ""
},
{
"docid": "8f6d9ed651c783cf88bd6b3ab5b3012c",
"text": "To the Editor: Gianotti-Crosti syndrome (GCS) classically presents in children as a self-limited, symmetric erythematous papular eruption affecting the cheeks, extremities, and buttocks. While initial reports implicated hepatitis B virus as the etiologic agent, many other bacterial, viral, and vaccine triggers have since been described. A previously healthy 2-year-old boy presented with a 3-week history of a cutaneous eruption that initially appeared on his legs and subsequently progressed to affect his arms and face. Two weeks after onset of the eruption, he was immunized with intramuscular Vaxigrip influenza vaccination (Sanofi Pasteur), and new lesions appeared at the immunization site on his right upper arm. Physical examination demonstrated an afebrile child with erythematous papules on the cheeks, arms, and legs (Fig 1). He had a localized papular eruption on his right upper arm (Fig 2). There was no lymphadenopathy or hepatosplenomegaly. Laboratory investigations revealed leukocytosis (white cell count, 14,600/mm) with a normal differential, reactive thrombocytosis ( platelet count, 1,032,000/mm), a positive urine culture for cytomegalovirus, and positive IgM serology for Epstein-Barr virus (EBV). Histopathologic examination of a skin biopsy specimen from the right buttock revealed a perivascular and somewhat interstitial lymphocytic infiltrate in the superficial and mid-dermis with intraepidermal exocytosis of lymphocytes, mild spongiosis and papillary dermal edema. He was treated with 2.5% hydrocortisone cream, and the eruption resolved. Twelve months later, he presented with a similar papular eruption localized to the left upper arm at the site of a recent intramuscular influenza vaccination (Vaxigrip). Although an infection represents the most important etiologic agent, a second event involving immunomodulation might lead to further disease accentuation, thus explaining the association of GCS with vaccinations. In our case, there was evidence of both cytomegalovirus (CMV) and EBV infection as well as a recent history of immunization. Localized accentuation of papules at the immunization site was unusual, as previous cases of GCS following immunizations have had a widespread and typically symmetric eruption. It is possible that trauma from the injection or a component of the vaccine elicited a Koebner response, causing local accentuation. There are no previous reports of recurrence of vaccine-associated GCS. One report documented recurrence with two different infectious triggers. As GCS is a mild and selflimiting disease, further vaccinations are not contraindicated. Andrei I. Metelitsa, MD, FRCPC, and Loretta Fiorillo, MD, FRCPC",
"title": ""
},
{
"docid": "6e63abd83cc2822f011c831234c6d2e7",
"text": "The rapid uptake of mobile devices and the rising popularity of mobile applications and services pose unprecedented demands on mobile and wireless networking infrastructure. Upcoming 5G systems are evolving to support exploding mobile traffic volumes, real-time extraction of fine-grained analytics, and agile management of network resources, so as to maximize user experience. Fulfilling these tasks is challenging, as mobile environments are increasingly complex, heterogeneous, and evolving. One potential solution is to resort to advanced machine learning techniques, in order to help manage the rise in data volumes and algorithm-driven applications. The recent success of deep learning underpins new and powerful tools that tackle problems in this space. In this paper we bridge the gap between deep learning and mobile and wireless networking research, by presenting a comprehensive survey of the crossovers between the two areas. We first briefly introduce essential background and state-of-theart in deep learning techniques with potential applications to networking. We then discuss several techniques and platforms that facilitate the efficient deployment of deep learning onto mobile systems. Subsequently, we provide an encyclopedic review of mobile and wireless networking research based on deep learning, which we categorize by different domains. Drawing from our experience, we discuss how to tailor deep learning to mobile environments. We complete this survey by pinpointing current challenges and open future directions for research.",
"title": ""
},
{
"docid": "de04d3598687b34b877d744956ca4bcd",
"text": "We investigate the reputational impact of financial fraud for outside directors based on a sample of firms facing shareholder class action lawsuits. Following a financial fraud lawsuit, outside directors do not face abnormal turnover on the board of the sued firm but experience a significant decline in other board seats held. The decline in other directorships is greater for more severe cases of fraud and when the outside director bears greater responsibility for monitoring fraud. Interlocked firms that share directors with the sued firm exhibit valuation declines at the lawsuit filing. When fraud-affiliated directors depart from boards of interlocked firms, these firms experience a significant increase in valuation.",
"title": ""
},
{
"docid": "b60a4efcdd52d6209069415540016849",
"text": "Vulnerabilities need to be detected and removed from software. Although previous studies demonstrated the usefulness of employing prediction techniques in deciding about vulnerabilities of software components, the accuracy and improvement of effectiveness of these prediction techniques is still a grand challenging research question. This paper proposes a hybrid technique based on combining N-gram analysis and feature selection algorithms for predicting vulnerable software components where features are defined as continuous sequences of token in source code files, i.e., Java class file. Machine learning-based feature selection algorithms are then employed to reduce the feature and search space. We evaluated the proposed technique based on some Java Android applications, and the results demonstrated that the proposed technique could predict vulnerable classes, i.e., software components, with high precision, accuracy and recall.",
"title": ""
},
{
"docid": "b1b511c0e014861dac12c2254f6f1790",
"text": "This paper describes automatic speech recognition (ASR) systems developed jointly by RWTH, UPB and FORTH for the 1ch, 2ch and 6ch track of the 4th CHiME Challenge. In the 2ch and 6ch tracks the final system output is obtained by a Confusion Network Combination (CNC) of multiple systems. The Acoustic Model (AM) is a deep neural network based on Bidirectional Long Short-Term Memory (BLSTM) units. The systems differ by front ends and training sets used for the acoustic training. The model for the 1ch track is trained without any preprocessing. For each front end we trained and evaluated individual acoustic models. We compare the ASR performance of different beamforming approaches: a conventional superdirective beamformer [1] and an MVDR beamformer as in [2], where the steering vector is estimated based on [3]. Furthermore we evaluated a BLSTM supported Generalized Eigenvalue beamformer using NN-GEV [4]. The back end is implemented using RWTH’s open-source toolkits RASR [5], RETURNN [6] and rwthlm [7]. We rescore lattices with a Long Short-Term Memory (LSTM) based language model. The overall best results are obtained by a system combination that includes the lattices from the system of UPB’s submission [8]. Our final submission scored second in each of the three tracks of the 4th CHiME Challenge.",
"title": ""
},
{
"docid": "b73526f1fb0abb4373421994dbd07822",
"text": "in our country around 2.78% of peoples are not able to speak (dumb). Their communications with others are only using the motion of their hands and expressions. We proposed a new technique called artificial speaking mouth for dumb people. It will be very helpful to them for conveying their thoughts to others. Some peoples are easily able to get the information from their motions. The remaining is not able to understand their way of conveying the message. In order to overcome the complexity the artificial mouth is introduced for the dumb peoples. This system is based on the motion sensor. According to dumb people, for every motion they have a meaning. That message is kept in a database. Likewise all templates are kept in the database. In the real time the template database is fed into a microcontroller and the motion sensor is fixed in their hand. For every action the motion sensors get accelerated and give the signal to the microcontroller. The microcontroller matches the motion with the database and produces the speech signal. The output of the system is using the speaker. By properly updating the database the dumb will speak like a normal person using the artificial mouth. The system also includes a text to speech conversion (TTS) block that interprets the matched gestures.",
"title": ""
},
{
"docid": "874973c7a28652d5d9859088b965e76c",
"text": "Recommender systems are commonly defined as applications that e-commerce sites exploit to suggest products and provide consumers with information to facilitate their decision-making processes.1 They implicitly assume that we can map user needs and constraints, through appropriate recommendation algorithms, and convert them into product selections using knowledge compiled into the intelligent recommender. Knowledge is extracted from either domain experts (contentor knowledge-based approaches) or extensive logs of previous purchases (collaborative-based approaches). Furthermore, the interaction process, which turns needs into products, is presented to the user with a rationale that depends on the underlying recommendation technology and algorithms. For example, if the system funnels the behavior of other users in the recommendation, it explicitly shows reviews of the selected products or quotes from a similar user. Recommender systems are now a popular research area2 and are increasingly used by e-commerce sites.1 For travel and tourism,3 the two most successful recommender system technologies (see Figure 1) are Triplehop’s TripMatcher (used by www. ski-europe.com, among others) and VacationCoach’s expert advice platform, MePrint (used by travelocity.com). Both of these recommender systems try to mimic the interactivity observed in traditional counselling sessions with travel agents when users search for advice on a possible holiday destination. From a technical viewpoint, they primarily use a content-based approach, in which the user expresses needs, benefits, and constraints using the offered language (attributes). The system then matches the user preferences with items in a catalog of destinations (described with the same language). VacationCoach exploits user profiling by explicitly asking the user to classify himself or herself in one profile (for example, as a “culture creature,” “beach bum,” or “trail trekker”), which induces implicit needs that the user doesn’t provide. The user can even input precise profile information by completing the appropriate form. TripleHop’s matching engine uses a more sophisticated approach to reduce user input. It guesses importance of attributes that the user does not explicitly mention. It then combines statistics on past user queries with a prediction computed as a weighted average of importance assigned by similar users.4",
"title": ""
},
{
"docid": "d106a47637195845ed3d218dbb766c2c",
"text": "The efficiency of three forward-pruning techniques, i.e., futility pruning, null-move pruning, and LMR, is analyzed in shogi, a Japanese chess variant. It is shown that the techniques with the a–b pruning reduce the effective branching factor of shogi endgames to 2.8 without sacrificing much accuracy of the search results. Because the average number of the raw branching factor in shogi is around 80, the pruning techniques reduce the search space more effectively than in chess. 2011 International Federation for Information Processing Published by Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "8f1a5420deb75a2b664ceeaae8fc03f9",
"text": "A stretchable and multiple-force-sensitive electronic fabric based on stretchable coaxial sensor electrodes is fabricated for artificial-skin application. This electronic fabric, with only one kind of sensor unit, can simultaneously map and quantify the mechanical stresses induced by normal pressure, lateral strain, and flexion.",
"title": ""
},
{
"docid": "b27038accdabab12d8e0869aba20a083",
"text": "Two architectures that generalize convolutional neural networks (CNNs) for the processing of signals supported on graphs are introduced. We start with the selection graph neural network (GNN), which replaces linear time invariant filters with linear shift invariant graph filters to generate convolutional features and reinterprets pooling as a possibly nonlinear subsampling stage where nearby nodes pool their information in a set of preselected sample nodes. A key component of the architecture is to remember the position of sampled nodes to permit computation of convolutional features at deeper layers. The second architecture, dubbed aggregation GNN, diffuses the signal through the graph and stores the sequence of diffused components observed by a designated node. This procedure effectively aggregates all components into a stream of information having temporal structure to which the convolution and pooling stages of regular CNNs can be applied. A multinode version of aggregation GNNs is further introduced for operation in large-scale graphs. An important property of selection and aggregation GNNs is that they reduce to conventional CNNs when particularized to time signals reinterpreted as graph signals in a circulant graph. Comparative numerical analyses are performed in a source localization application over synthetic and real-world networks. Performance is also evaluated for an authorship attribution problem and text category classification. Multinode aggregation GNNs are consistently the best-performing GNN architecture.",
"title": ""
},
{
"docid": "b413cd956623afce3d50780ff90b0efe",
"text": "Parkinson's disease (PD) is the second most common neurodegenerative disorder. The majority of cases do not arise from purely genetic factors, implicating an important role of environmental factors in disease pathogenesis. Well-established environmental toxins important in PD include pesticides, herbicides, and heavy metals. However, many toxicants linked to PD and used in animal models are rarely encountered. In this context, other factors such as dietary components may represent daily exposures and have gained attention as disease modifiers. Several in vitro, in vivo, and human epidemiological studies have found a variety of dietary factors that modify PD risk. Here, we critically review findings on association between dietary factors, including vitamins, flavonoids, calorie intake, caffeine, alcohol, and metals consumed via food and fatty acids and PD. We have also discussed key data on heterocyclic amines that are produced in high-temperature cooked meat, which is a new emerging field in the assessment of dietary factors in neurological diseases. While more research is clearly needed, significant evidence exists that specific dietary factors can modify PD risk.",
"title": ""
},
{
"docid": "b07b369dc622fad777fd09b23c284e12",
"text": "Stroke is the number one cause of severe physical disability in the UK. Recent studies have shown that technologies such as virtual reality and imaging can provide an engaging and motivating tool for physical rehabilitation. In this paper we summarize previous work in our group using virtual reality technology and webcam-based games. We then present early work we are conducting in experimenting with desktop augmented reality (AR) for rehabilitation. AR allows the user to use real objects to interact with computer-generated environments. Markers attached to the real objects enable the system (via a webcam) to track the position and orientation of each object as it is moved. The system can then augment the captured image of the real environment with computer-generated graphics to present a variety of game or task-driven scenarios to the user. We discuss the development of rehabilitation prototypes using available AR libraries and express our thoughts on the potential of AR technology.",
"title": ""
},
{
"docid": "ba6709c1413a1c28c99e686e065ce564",
"text": "Essential oils are complex mixtures of hydrocarbons and their oxygenated derivatives arising from two different isoprenoid pathways. Essential oils are produced by glandular trichomes and other secretory structures, specialized secretory tissues mainly diffused onto the surface of plant organs, particularly flowers and leaves, thus exerting a pivotal ecological role in plant. In addition, essential oils have been used, since ancient times, in many different traditional healing systems all over the world, because of their biological activities. Many preclinical studies have documented antimicrobial, antioxidant, anti-inflammatory and anticancer activities of essential oils in a number of cell and animal models, also elucidating their mechanism of action and pharmacological targets, though the paucity of in human studies limits the potential of essential oils as effective and safe phytotherapeutic agents. More well-designed clinical trials are needed in order to ascertain the real efficacy and safety of these plant products.",
"title": ""
},
{
"docid": "56a4a9b20391f13e7ced38586af9743b",
"text": "The most common type of nasopharyngeal tumor is nasopharyngeal carcinoma. The etiology is multifactorial with race, genetics, environment and Epstein-Barr virus (EBV) all playing a role. While rare in Caucasian populations, it is one of the most frequent nasopharyngeal cancers in Chinese, and has endemic clusters in Alaskan Eskimos, Indians, and Aleuts. Interestingly, as native-born Chinese migrate, the incidence diminishes in successive generations, although still higher than the native population. EBV is nearly always present in NPC, indicating an oncogenic role. There are raised antibodies, higher titers of IgA in patients with bulky (large) tumors, EBERs (EBV encoded early RNAs) in nearly all tumor cells, and episomal clonal expansion (meaning the virus entered the tumor cell before clonal expansion). Consequently, the viral titer can be used to monitor therapy or possibly as a diagnostic tool in the evaluation of patients who present with a metastasis from an unknown primary. The effect of environmental carcinogens, especially those which contain a high levels of volatile nitrosamines are also important in the etiology of NPC. Chinese eat salted fish, specifically Cantonese-style salted fish, and especially during early life. Perhaps early life (weaning period) exposure is important in the ‘‘two-hit’’ hypothesis of cancer development. Smoking, cooking, and working under poor ventilation, the use of nasal oils and balms for nose and throat problems, and the use of herbal medicines have also been implicated but are in need of further verification. Likewise, chemical fumes, dusts, formaldehyde exposure, and radiation have all been implicated in this complicated disorder. Various human leukocyte antigens (HLA) are also important etiologic or prognostic indicators in NPC. While histocompatibility profiles of HLA-A2, HLA-B17 and HLA-Bw46 show increased risk for developing NPC, there is variable expression depending on whether they occur alone or jointly, further conferring a variable prognosis (B17 is associated with a poor and A2B13 with a good prognosis, respectively).",
"title": ""
},
{
"docid": "5cfc4911a59193061ab55c2ce5013272",
"text": "What can you do with a million images? In this paper, we present a new image completion algorithm powered by a huge database of photographs gathered from the Web. The algorithm patches up holes in images by finding similar image regions in the database that are not only seamless, but also semantically valid. Our chief insight is that while the space of images is effectively infinite, the space of semantically differentiable scenes is actually not that large. For many image completion tasks, we are able to find similar scenes which contain image fragments that will convincingly complete the image. Our algorithm is entirely data driven, requiring no annotations or labeling by the user. Unlike existing image completion methods, our algorithm can generate a diverse set of image completions and we allow users to select among them. We demonstrate the superiority of our algorithm over existing image completion approaches.",
"title": ""
},
{
"docid": "afae66e9ff49274bbb546cd68490e5e4",
"text": "Question-Answering Bulletin Boards (QABB), such as Yahoo! Answers and Windows Live QnA, are gaining popularity recently. Communications on QABB connect users, and the overall connections can be regarded as a social network. If the evolution of social networks can be predicted, it is quite useful for encouraging communications among users. This paper describes an improved method for predicting links based on weighted proximity measures of social networks. The method is based on an assumption that proximities between nodes can be estimated better by using both graph proximity measures and the weights of existing links in a social network. In order to show the effectiveness of our method, the data of Yahoo! Chiebukuro (Japanese Yahoo! Answers) are used for our experiments. The results show that our method outperforms previous approaches, especially when target social networks are sufficiently dense.",
"title": ""
}
] | scidocsrr |
e09d38267455f5fcd48c41bd948716c1 | Topic oriented community detection through social objects and link analysis in social networks | [
{
"docid": "bb2504b2275a20010c0d5f9050173d40",
"text": "Clustering nodes in a graph is a useful general technique in data mining of large network data sets. In this context, Newman and Girvan [9] recently proposed an objective function for graph clustering called the Q function which allows automatic selection of the number of clusters. Empirically, higher values of the Q function have been shown to correlate well with good graph clusterings. In this paper we show how optimizing the Q function can be reformulated as a spectral relaxation problem and propose two new spectral clustering algorithms that seek to maximize Q. Experimental results indicate that the new algorithms are efficient and effective at finding both good clusterings and the appropriate number of clusters across a variety of real-world graph data sets. In addition, the spectral algorithms are much faster for large sparse graphs, scaling roughly linearly with the number of nodes n in the graph, compared to O(n) for previous clustering algorithms using the Q function.",
"title": ""
}
] | [
{
"docid": "210395d4f0c4db496546da0be3d2524d",
"text": "Crimes are a social irritation and cost our society deeply in several ways. Any research that can help in solving crimes quickly will pay for itself. About 10% of the criminals commit about 50% of the crimes [9]. The system is trained by feeding previous years record of crimes taken from legitimate online portal of India listing various crimes such as murder, kidnapping and abduction, dacoits, robbery, burglary, rape and other such crimes. As per data of Indian statistics, which gives data of various crime of past 14 years (2001–2014) a regression model is created and the crime rate for the following years in various states can be predicted [8]. We have used supervised, semi-supervised and unsupervised learning technique [4] on the crime records for knowledge discovery and to help in increasing the predictive accuracy of the crime. This work will be helpful to the local police stations in crime suppression.",
"title": ""
},
{
"docid": "7360c92ef44058694135338acad6838c",
"text": "Modern chip multiprocessor (CMP) systems employ multiple memory controllers to control access to main memory. The scheduling algorithm employed by these memory controllers has a significant effect on system throughput, so choosing an efficient scheduling algorithm is important. The scheduling algorithm also needs to be scalable — as the number of cores increases, the number of memory controllers shared by the cores should also increase to provide sufficient bandwidth to feed the cores. Unfortunately, previous memory scheduling algorithms are inefficient with respect to system throughput and/or are designed for a single memory controller and do not scale well to multiple memory controllers, requiring significant finegrained coordination among controllers. This paper proposes ATLAS (Adaptive per-Thread Least-Attained-Service memory scheduling), a fundamentally new memory scheduling technique that improves system throughput without requiring significant coordination among memory controllers. The key idea is to periodically order threads based on the service they have attained from the memory controllers so far, and prioritize those threads that have attained the least service over others in each period. The idea of favoring threads with least-attained-service is borrowed from the queueing theory literature, where, in the context of a single-server queue it is known that least-attained-service optimally schedules jobs, assuming a Pareto (or any decreasing hazard rate) workload distribution. After verifying that our workloads have this characteristic, we show that our implementation of least-attained-service thread prioritization reduces the time the cores spend stalling and significantly improves system throughput. Furthermore, since the periods over which we accumulate the attained service are long, the controllers coordinate very infrequently to form the ordering of threads, thereby making ATLAS scalable to many controllers. We evaluate ATLAS on a wide variety of multiprogrammed SPEC 2006 workloads and systems with 4–32 cores and 1–16 memory controllers, and compare its performance to five previously proposed scheduling algorithms. Averaged over 32 workloads on a 24-core system with 4 controllers, ATLAS improves instruction throughput by 10.8%, and system throughput by 8.4%, compared to PAR-BS, the best previous CMP memory scheduling algorithm. ATLAS's performance benefit increases as the number of cores increases.",
"title": ""
},
{
"docid": "a094fe8de029646a408bbb685824581c",
"text": "Will reading habit influence your life? Many say yes. Reading computational intelligence principles techniques and applications is a good habit; you can develop this habit to be such interesting way. Yeah, reading habit will not only make you have any favourite activity. It will be one of guidance of your life. When reading has become a habit, you will not make it as disturbing activities or as boring activity. You can gain many benefits and importances of reading.",
"title": ""
},
{
"docid": "0b0273a1e2aeb98eb4115113c8957fd2",
"text": "This paper deals with the approach of integrating a bidirectional boost-converter into the drivetrain of a (hybrid) electric vehicle in order to exploit the full potential of the electric drives and the battery. Currently, the automotive norms and standards are defined based on the characteristics of the voltage source. The current technologies of batteries for automotive applications have voltage which depends on the load and the state-of charge. The aim of this paper is to provide better system performance by stabilizing the voltage without the need of redesigning any of the current components in the system. To show the added-value of the proposed electrical topology, loss estimation is developed and proved based on actual components measurements and design. The component and its modelling is then implemented in a global system simulation environment of the electric architecture to show how it contributes enhancing the performance of the system.",
"title": ""
},
{
"docid": "d1c33990b7642ea51a8a568fa348d286",
"text": "Connectionist temporal classification CTC has recently shown improved performance and efficiency in automatic speech recognition. One popular decoding implementation is to use a CTC model to predict the phone posteriors at each frame and then perform Viterbi beam search on a modified WFST network. This is still within the traditional frame synchronous decoding framework. In this paper, the peaky posterior property of CTC is carefully investigated and it is found that ignoring blank frames will not introduce additional search errors. Based on this phenomenon, a novel phone synchronous decoding framework is proposed by removing tremendous search redundancy due to blank frames, which results in significant search speed up. The framework naturally leads to an extremely compact phone-level acoustic space representation: CTC lattice. With CTC lattice, efficient and effective modular speech recognition approaches, second pass rescoring for large vocabulary continuous speech recognition LVCSR, and phone-based keyword spotting KWS, are also proposed in this paper. Experiments showed that phone synchronous decoding can achieve 3-4 times search speed up without performance degradation compared to frame synchronous decoding. Modular LVCSR with CTC lattice can achieve further WER improvement. KWS with CTC lattice not only achieved significant equal error rate improvement, but also greatly reduced the KWS model size and increased the search speed.",
"title": ""
},
{
"docid": "8f6da9a81b4efe5e76356c6c30ddd6a6",
"text": "Recently, independent component analysis (ICA) has been widely used in the analysis of brain imaging data. An important problem with most ICA algorithms is, however, that they are stochastic; that is, their results may be somewhat different in different runs of the algorithm. Thus, the outputs of a single run of an ICA algorithm should be interpreted with some reserve, and further analysis of the algorithmic reliability of the components is needed. Moreover, as with any statistical method, the results are affected by the random sampling of the data, and some analysis of the statistical significance or reliability should be done as well. Here we present a method for assessing both the algorithmic and statistical reliability of estimated independent components. The method is based on running the ICA algorithm many times with slightly different conditions and visualizing the clustering structure of the obtained components in the signal space. In experiments with magnetoencephalographic (MEG) and functional magnetic resonance imaging (fMRI) data, the method was able to show that expected components are reliable; furthermore, it pointed out components whose interpretation was not obvious but whose reliability should incite the experimenter to investigate the underlying technical or physical phenomena. The method is implemented in a software package called Icasso.",
"title": ""
},
{
"docid": "8f930fc4f06f8b17e2826f0975af1fa1",
"text": "Smart parking is a typical IoT application that can benefit from advances in sensor, actuator and RFID technologies to provide many services to its users and parking owners of a smart city. This paper considers a smart parking infrastructure where sensors are laid down on the parking spots to detect car presence and RFID readers are embedded into parking gates to identify cars and help in the billing of the smart parking. Both types of devices are endowed with wired and wireless communication capabilities for reporting to a gateway where the situation recognition is performed. The sensor devices are tasked to play one of the three roles: (1) slave sensor nodes located on the parking spot to detect car presence/absence; (2) master nodes located at one of the edges of a parking lot to detect presence and collect the sensor readings from the slave nodes; and (3) repeater sensor nodes, also called \"anchor\" nodes, located strategically at specific locations in the parking lot to increase the coverage and connectivity of the wireless sensor network. While slave and master nodes are placed based on geographic constraints, the optimal placement of the relay/anchor sensor nodes in smart parking is an important parameter upon which the cost and efficiency of the parking system depends. We formulate the optimal placement of sensors in smart parking as an integer linear programming multi-objective problem optimizing the sensor network engineering efficiency in terms of coverage and lifetime maximization, as well as its economic gain in terms of the number of sensors deployed for a specific coverage and lifetime. We propose an exact solution to the node placement problem using single-step and two-step solutions implemented in the Mosel language based on the Xpress-MPsuite of libraries. Experimental results reveal the relative efficiency of the single-step compared to the two-step model on different performance parameters. These results are consolidated by simulation results, which reveal that our solution outperforms a random placement in terms of both energy consumption, delay and throughput achieved by a smart parking network.",
"title": ""
},
{
"docid": "c7162cc2e65c52d9575fe95e2c4f62f4",
"text": "The enactive approach to cognition is typically proposed as a viable alternative to traditional cognitive science. Enactive cognition displaces the explanatory focus from the internal representations of the agent to the direct sensorimotor interaction with its environment. In this paper, we investigate enactive learning through means of artificial agent simulations. We compare the performances of the enactive agent to an agent operating on classical reinforcement learning in foraging tasks within maze environments. The characteristics of the agents are analysed in terms of the accessibility of the environmental states, goals, and exploration/exploitation tradeoffs. We confirm that the enactive agent can successfully interact with its environment and learn to avoid unfavourable interactions using intrinsically defined goals. The performance of the enactive agent is shown to be limited by the number of affordable actions.",
"title": ""
},
{
"docid": "89c1ab96b509a80ff35103fa35d0a60c",
"text": "The mobile ad-hoc network (MANET) is a new wireless technology, having features like dynamic topology and self-configuring ability of nodes. The self configuring ability of nodes in MANET made it popular among the critical situation such as military use and emergency recovery. But due to open medium and broad distribution of nodes make MANET vulnerable to different attacks. So to protect MANET from various attacks, it is important to develop an efficient and secure system for MANET. Intrusion means any set of actions that attempt to compromise the integrity, confidentiality, or availability of a resource. Intrusion Prevention is the primary defense because it is the first step to make the systems secure from attacks by using passwords, biometrics etc. Even if intrusion prevention methods are used, the system may be subjected to some vulnerability. So we need a second wall of defense known as Intrusion Detection Systems (IDSs), to detect and produce responses whenever necessary. In this article we present a survey of various intrusion detection schemes available for ad hoc networks. We have also described some of the basic attacks present in ad hoc network and discussed their available solution.",
"title": ""
},
{
"docid": "124a50c2e797ffe549e1591d5720acda",
"text": "Temporal information has useful features for recognizing facial expressions. However, to manually design useful features requires a lot of effort. In this paper, to reduce this effort, a deep learning technique, which is regarded as a tool to automatically extract useful features from raw data, is adopted. Our deep network is based on two different models. The first deep network extracts temporal appearance features from image sequences, while the other deep network extracts temporal geometry features from temporal facial landmark points. These two models are combined using a new integration method in order to boost the performance of the facial expression recognition. Through several experiments, we show that the two models cooperate with each other. As a result, we achieve superior performance to other state-of-the-art methods in the CK+ and Oulu-CASIA databases. Furthermore, we show that our new integration method gives more accurate results than traditional methods, such as a weighted summation and a feature concatenation method.",
"title": ""
},
{
"docid": "71c31f41d116a51786a4e8ded2c5fb87",
"text": "Targeting CTLA-4 represents a new type of immunotherapeutic approach, namely immune checkpoint inhibition. Blockade of CTLA-4 by ipilimumab was the first strategy to achieve a significant clinical benefit for late-stage melanoma patients in two phase 3 trials. These results fueled the notion of immunotherapy being the breakthrough strategy for oncology in 2013. Subsequently, many trials have been set up to test various immune checkpoint modulators in malignancies, not only in melanoma. In this review, recent new ideas about the mechanism of action of CTLA-4 blockade, its current and future therapeutic use, and the intensive search for biomarkers for response will be discussed. Immune checkpoint blockade, targeting CTLA-4 and/or PD-1/PD-L1, is currently the most promising systemic therapeutic approach to achieve long-lasting responses or even cure in many types of cancer, not just in patients with melanoma.",
"title": ""
},
{
"docid": "0bc53a10750de315d5a37275dd7ae4a7",
"text": "The term stigma refers to problems of knowledge (ignorance), attitudes (prejudice) and behaviour (discrimination). Most research in this area has been based on attitude surveys, media representations of mental illness and violence, has only focused upon schizophrenia, has excluded direct participation by service users, and has included few intervention studies. However, there is evidence that interventions to improve public knowledge about mental illness can be effective. The main challenge in future is to identify which interventions will produce behaviour change to reduce discrimination against people with mental illness.",
"title": ""
},
{
"docid": "2f632cc12346cb0d6aa9ce8e765acd14",
"text": "\\ Abstract: Earlier personality of person is predicted by spending lot of time with the person. As we know spending time with person is very difficult task. Referring to this problem, in the present study a method has been proposed for the behavioral prediction of a person through automated handwriting analysis. Handwriting analysis is a method to predict personality of a Person .This is done by Image Processing in MATLAB. In order to predict the personality we are going to take the writing sample and from it we are going to extract different features i.e. slant of letters and words, pen pressure, spacing between letter, spacing between word, size of letters, baseline Segmentation method is used to extract the feature of handwriting which are given to the SVM which shows the behavior of the writer sample. This gives optimum accuracy with the use of Redial Kernel function.",
"title": ""
},
{
"docid": "67ca9035e792e2c6164b87330937bb36",
"text": "In conventional full-duplex radio communication systems, the transmitter (Tx) is active at the same time as the receiver (Rx). The isolation between the Tx and the Rx is ensured by duplex filters. However, an increasing number of long-term evolution (LTE) bands crave multiband operation. Therefore, a new front-end architecture, addressing the increasing number of LTE bands, as well as multiple standards, is presented. In such an architecture, the Tx and Rx chains are separated throughout the front-end. Addition of bands is solved by making the antennas and filters tunable. Banks of duplex filters are replaced by tunable filters and antennas, providing a duplexer function over the air between the Tx and the Rx. A hardware system has been designed and fabricated to demonstrate the performance of this front-end architecture. Measurements demonstrate how the architecture addresses inter-modulation and Rx desensitization due to the Tx signal. The filters and antennas demonstrate tunability across multiple bands. System validation is detailed for LTE band I. Frequency response, as well as linearity measurements of the complete Tx and Rx front-end chains, show that the system requirements are fulfilled.",
"title": ""
},
{
"docid": "c51acd24cb864b050432a055fef2de9a",
"text": "Electric motor and power electronics-based inverter are the major components in industrial and automotive electric drives. In this paper, we present a model-based fault diagnostics system developed using a machine learning technology for detecting and locating multiple classes of faults in an electric drive. Power electronics inverter can be considered to be the weakest link in such a system from hardware failure point of view; hence, this work is focused on detecting faults and finding which switches in the inverter cause the faults. A simulation model has been developed based on the theoretical foundations of electric drives to simulate the normal condition, all single-switch and post-short-circuit faults. A machine learning algorithm has been developed to automatically select a set of representative operating points in the (torque, speed) domain, which in turn is sent to the simulated electric drive model to generate signals for the training of a diagnostic neural network, fault diagnostic neural network (FDNN). We validated the capability of the FDNN on data generated by an experimental bench setup. Our research demonstrates that with a robust machine learning approach, a diagnostic system can be trained based on a simulated electric drive model, which can lead to a correct classification of faults over a wide operating domain.",
"title": ""
},
{
"docid": "dfd88750bc1d42e8cc798d2097426910",
"text": "Melanoma is one of the most lethal forms of skin cancer. It occurs on the skin surface and develops from cells known as melanocytes. The same cells are also responsible for benign lesions commonly known as moles, which are visually similar to melanoma in its early stage. If melanoma is treated correctly, it is very often curable. Currently, much research is concentrated on the automated recognition of melanomas. In this paper, we propose an automated melanoma recognition system, which is based on deep learning method combined with so called hand-crafted RSurf features and Local Binary Patterns. The experimental evaluation on a large publicly available dataset demonstrates high classification accuracy, sensitivity, and specificity of our proposed approach when it is compared with other classifiers on the same dataset.",
"title": ""
},
{
"docid": "c2ac1c1f08e7e4ccba14ea203acba661",
"text": "This paper describes an approach to determine a layout for the order picking area in warehouses, such that the average travel distance for the order pickers is minimized. We give analytical formulas by which the average length of an order picking route can be calculated for two different routing policies. The optimal layout can be determined by using such formula as an objective function in a non-linear programming model. The optimal number of aisles in an order picking area appears to depend strongly on the required storage space and the pick list size.",
"title": ""
},
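The order-picking passage above derives closed-form expressions for the average route length and uses them as the objective of a non-linear program over the layout. Those expressions are not quoted in the passage, so the sketch below substitutes a deliberately simple S-shape-style travel model just to show the overall "optimize over the number of aisles" idea; the cost model and every constant in it are assumptions, not the paper's formulas.

```python
# Toy sketch: sweep the number of picking aisles and keep the one minimizing
# an assumed average-route-length model. This cost model is invented for
# illustration only; it is NOT the closed-form expression derived in the paper.
def avg_route_length(n_aisles, total_storage_m=500.0, aisle_gap_m=4.0, picks_per_route=10):
    aisle_len = total_storage_m / (2.0 * n_aisles)            # two-sided racks
    # expected fraction of aisles containing at least one pick
    p_visit = 1.0 - (1.0 - 1.0 / n_aisles) ** picks_per_route
    aisles_visited = n_aisles * p_visit
    travel_in_aisles = aisles_visited * aisle_len              # traverse visited aisles
    travel_across = 2.0 * aisles_visited * aisle_gap_m         # move between aisles and back
    return travel_in_aisles + travel_across

best = min(range(1, 31), key=avg_route_length)
print("aisles:", best, "avg route ~", round(avg_route_length(best), 1), "m")
```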
{
"docid": "a506f3f6c401f83eaba830abb20c8fff",
"text": "The mechanisms governing the recruitment of functional glutamate receptors at nascent excitatory postsynapses following initial axon-dendrite contact remain unclear. We examined here the ability of neurexin/neuroligin adhesions to mobilize AMPA-type glutamate receptors (AMPARs) at postsynapses through a diffusion/trap process involving the scaffold molecule PSD-95. Using single nanoparticle tracking in primary rat and mouse hippocampal neurons overexpressing or lacking neuroligin-1 (Nlg1), a striking inverse correlation was found between AMPAR diffusion and Nlg1 expression level. The use of Nlg1 mutants and inhibitory RNAs against PSD-95 demonstrated that this effect depended on intact Nlg1/PSD-95 interactions. Furthermore, functional AMPARs were recruited within 1 h at nascent Nlg1/PSD-95 clusters assembled by neurexin-1β multimers, a process requiring AMPAR membrane diffusion. Triggering novel neurexin/neuroligin adhesions also caused a depletion of PSD-95 from native synapses and a drop in AMPAR miniature EPSCs, indicating a competitive mechanism. Finally, both AMPAR level at synapses and AMPAR-dependent synaptic transmission were diminished in hippocampal slices from newborn Nlg1 knock-out mice, confirming an important role of Nlg1 in driving AMPARs to nascent synapses. Together, these data reveal a mechanism by which membrane-diffusing AMPARs can be rapidly trapped at PSD-95 scaffolds assembled at nascent neurexin/neuroligin adhesions, in competition with existing synapses.",
"title": ""
},
{
"docid": "4e41e762756c32edfb73ce144bf7ba49",
"text": "In this paper, we outline a model of semantics that integrates aspects of discourse-sensitive logics with the compositional mechanisms available from lexically-driven semantic interpretation. Specifically, we concentrate on developing a composition logic required to properly model complex types within the Generative Lexicon (henceforth GL), for which we employ SDRT principles. As we are presently interested in the composition of information to construct logical forms, we will build on one standard way of arriving at such representations, the lambda calculus, in which functional types are exploited. We outline a new type calculus that captures one of the fundamental ideas of GL: providing a set of techniques governing type shifting possibilities for various lexical items so as to allow for the combination of lexical items in cases where there is an apparent type mismatch. These techniques themselves should follow from the structure of the lexicon and its underlying logic.",
"title": ""
},
{
"docid": "78582e3594deb53149422cc41387e330",
"text": "Approximate entropy (ApEn) is a recently developed statistic quantifying regularity and complexity, which appears to have potential application to a wide variety of relatively short (greater than 100 points) and noisy time-series data. The development of ApEn was motivated by data length constraints commonly encountered, e.g., in heart rate, EEG, and endocrine hormone secretion data sets. We describe ApEn implementation and interpretation, indicating its utility to distinguish correlated stochastic processes, and composite deterministic/ stochastic models. We discuss the key technical idea that motivates ApEn, that one need not fully reconstruct an attractor to discriminate in a statistically valid manner-marginal probability distributions often suffice for this purpose. Finally, we discuss why algorithms to compute, e.g., correlation dimension and the Kolmogorov-Sinai (KS) entropy, often work well for true dynamical systems, yet sometimes operationally confound for general models, with the aid of visual representations of reconstructed dynamics for two contrasting processes. (c) 1995 American Institute of Physics.",
"title": ""
}
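The approximate entropy passage above describes ApEn only verbally. The following is a direct, unoptimized implementation of the standard definition ApEn(m, r) = Φ_m(r) − Φ_{m+1}(r), using Chebyshev distances with self-matches included; the choice r = 0.2·SD and the test signals are conventional illustrations, not values taken from the paper.

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """ApEn(m, r) of a 1-D series x, following the standard definition.

    Phi_m = mean over i of log(C_i^m(r)), where C_i^m(r) is the fraction of
    length-m template vectors within Chebyshev distance r of template i
    (self-matches included). ApEn = Phi_m - Phi_{m+1}.
    """
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)            # common heuristic choice

    def phi(m):
        n = len(x) - m + 1
        templates = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev (max-norm) distances between all pairs of templates
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        c = np.mean(dist <= r, axis=1)  # includes the self-match
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(0)
print(approximate_entropy(np.sin(np.linspace(0, 20, 300))))  # regular signal: low ApEn
print(approximate_entropy(rng.standard_normal(300)))         # noisy signal: higher ApEn
```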
] | scidocsrr |
1b7342cc547f410c6e149ec7a5d69b16 | Towards Personality-driven Persuasive Health Games and Gamified Systems | [
{
"docid": "372ab07026a861acd50e7dd7c605881d",
"text": "This paper reviews peer-reviewed empirical studies on gamification. We create a framework for examining the effects of gamification by drawing from the definitions of gamification and the discussion on motivational affordances. The literature review covers results, independent variables (examined motivational affordances), dependent variables (examined psychological/behavioral outcomes from gamification), the contexts of gamification, and types of studies performed on the gamified systems. The paper examines the state of current research on the topic and points out gaps in existing literature. The review indicates that gamification provides positive effects, however, the effects are greatly dependent on the context in which the gamification is being implemented, as well as on the users using it. The findings of the review provide insight for further studies as well as for the design of gamified systems.",
"title": ""
},
{
"docid": "8777063bfba463c05e46704f0ad2c672",
"text": "Amazon's Mechanical Turk is an online labor market where requesters post jobs and workers choose which jobs to do for pay. The central purpose of this article is to demonstrate how to use this Web site for conducting behavioral research and to lower the barrier to entry for researchers who could benefit from this platform. We describe general techniques that apply to a variety of types of research and experiments across disciplines. We begin by discussing some of the advantages of doing experiments on Mechanical Turk, such as easy access to a large, stable, and diverse subject pool, the low cost of doing experiments, and faster iteration between developing theory and executing experiments. While other methods of conducting behavioral research may be comparable to or even better than Mechanical Turk on one or more of the axes outlined above, we will show that when taken as a whole Mechanical Turk can be a useful tool for many researchers. We will discuss how the behavior of workers compares with that of experts and laboratory subjects. Then we will illustrate the mechanics of putting a task on Mechanical Turk, including recruiting subjects, executing the task, and reviewing the work that was submitted. We also provide solutions to common problems that a researcher might face when executing their research on this platform, including techniques for conducting synchronous experiments, methods for ensuring high-quality work, how to keep data private, and how to maintain code security.",
"title": ""
}
] | [
{
"docid": "71da47c6837022a80dccabb0a1f5c00e",
"text": "The treatment of obesity and cardiovascular diseases is one of the most difficult and important challenges nowadays. Weight loss is frequently offered as a therapy and is aimed at improving some of the components of the metabolic syndrome. Among various diets, ketogenic diets, which are very low in carbohydrates and usually high in fats and/or proteins, have gained in popularity. Results regarding the impact of such diets on cardiovascular risk factors are controversial, both in animals and humans, but some improvements notably in obesity and type 2 diabetes have been described. Unfortunately, these effects seem to be limited in time. Moreover, these diets are not totally safe and can be associated with some adverse events. Notably, in rodents, development of nonalcoholic fatty liver disease (NAFLD) and insulin resistance have been described. The aim of this review is to discuss the role of ketogenic diets on different cardiovascular risk factors in both animals and humans based on available evidence.",
"title": ""
},
{
"docid": "16a5313b414be4ae740677597291d580",
"text": "We contribute a large scale database for 3D object recognition, named ObjectNet3D, that consists of 100 categories, 90,127 images, 201,888 objects in these images and 44,147 3D shapes. Objects in the 2D images in our database are aligned with the 3D shapes, and the alignment provides both accurate 3D pose annotation and the closest 3D shape annotation for each 2D object. Consequently, our database is useful for recognizing the 3D pose and 3D shape of objects from 2D images. We also provide baseline experiments on four tasks: region proposal generation, 2D object detection, joint 2D detection and 3D object pose estimation, and image-based 3D shape retrieval, which can serve as baselines for future research using our database. Our database is available online at http://cvgl.stanford.edu/projects/objectnet3d.",
"title": ""
},
{
"docid": "81387b0f93b68e8bd6a56a4fd81477e9",
"text": "We analyze microblog posts generated during two recent, concurrent emergency events in North America via Twitter, a popular microblogging service. We focus on communications broadcast by people who were \"on the ground\" during the Oklahoma Grassfires of April 2009 and the Red River Floods that occurred in March and April 2009, and identify information that may contribute to enhancing situational awareness (SA). This work aims to inform next steps for extracting useful, relevant information during emergencies using information extraction (IE) techniques.",
"title": ""
},
{
"docid": "47b9d5585a0ca7d10cb0fd9da673dd0f",
"text": "A novel deep architecture, the tensor deep stacking network (T-DSN), is presented. The T-DSN consists of multiple, stacked blocks, where each block contains a bilinear mapping from two hidden layers to the output layer, using a weight tensor to incorporate higher order statistics of the hidden binary (([0,1])) features. A learning algorithm for the T-DSN's weight matrices and tensors is developed and described in which the main parameter estimation burden is shifted to a convex subproblem with a closed-form solution. Using an efficient and scalable parallel implementation for CPU clusters, we train sets of T-DSNs in three popular tasks in increasing order of the data size: handwritten digit recognition using MNIST (60k), isolated state/phone classification and continuous phone recognition using TIMIT (1.1 m), and isolated phone classification using WSJ0 (5.2 m). Experimental results in all three tasks demonstrate the effectiveness of the T-DSN and the associated learning methods in a consistent manner. In particular, a sufficient depth of the T-DSN, a symmetry in the two hidden layers structure in each T-DSN block, our model parameter learning algorithm, and a softmax layer on top of T-DSN are shown to have all contributed to the low error rates observed in the experiments for all three tasks.",
"title": ""
},
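The T-DSN passage above describes each block as a bilinear mapping from two hidden layers to the output through a weight tensor. The NumPy sketch below shows only that forward computation, y_k = h1ᵀ U_k h2 for each output unit k; the layer sizes, sigmoid nonlinearity and random weights are illustrative assumptions, and the paper's closed-form weight estimation is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, h1_dim, h2_dim, d_out = 16, 8, 8, 3

W1 = rng.standard_normal((d_in, h1_dim)) * 0.1            # input -> hidden branch 1
W2 = rng.standard_normal((d_in, h2_dim)) * 0.1            # input -> hidden branch 2
U  = rng.standard_normal((h1_dim, h2_dim, d_out)) * 0.1   # bilinear weight tensor

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tdsn_block(x):
    """Forward pass of one T-DSN-style block: y_k = h1^T U_k h2 for each output k."""
    h1 = sigmoid(x @ W1)
    h2 = sigmoid(x @ W2)
    return np.einsum('i,ijk,j->k', h1, U, h2)

x = rng.standard_normal(d_in)
print(tdsn_block(x))
```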
{
"docid": "1a26a00f0915e2eac01edf8cad0152c9",
"text": "This paper describes the application of Rao-Blackwellised Gibbs sampling (RBGS) to speech recognition using switching linear dynamical systems (SLDSs) as the acoustic model. The SLDS is a hybrid of standard hidden Markov models (HMMs) and linear dynamical systems. It is an extension of the stochastic segment model (SSM) where segments are assumed independent. SLDSs explicitly take into account the strong co-articulation present in speech using a Gauss-Markov process in a low dimensional, latent, state space. Unfortunately , inference in SLDS is intractable unless the discrete state sequence is known. RBGS is one approach that may be applied for both improved training and decoding for this form of intractable model. The theory of SLDS and RBGS is described, along with an efficient proposal distribution. The performance of the SLDS and SSM using RBGS for training and inference is evaluated on the ARPA Resource Management task.",
"title": ""
},
{
"docid": "da4ec6dcf7f47b8ec0261195db7af5ca",
"text": "Smart factories are on the verge of becoming the new industrial paradigm, wherein optimization permeates all aspects of production, from concept generation to sales. To fully pursue this paradigm, flexibility in the production means as well as in their timely organization is of paramount importance. AI is planning a major role in this transition, but the scenarios encountered in practice might be challenging for current tools. Task planning is one example where AI enables more efficient and flexible operation through an online automated adaptation and rescheduling of the activities to cope with new operational constraints and demands. In this paper we present SMarTplan, a task planner specifically conceived to deal with real-world scenarios in the emerging smart factory paradigm. Including both special-purpose and general-purpose algorithms, SMarTplan is based on current automated reasoning technology and it is designed to tackle complex application domains. In particular, we show its effectiveness on a logistic scenario, by comparing its specialized version with the general purpose one, and extending the comparison to other state-of-the-art task planners.",
"title": ""
},
{
"docid": "1ca4fbc998c41cec99abe68c5ebe944e",
"text": "Wheeled mobile robots are increasingly being utilized in unknown and dangerous situations such as planetary surface exploration. Based on force analysis of the differential joints and force analysis between the wheels and the ground, this paper established the quasi-static mathematical model of the 6-wheel mobile system of planetary exploration rover with rocker-bogie structure. Considering the constraint conditions, with the method of finding the wheels’friction force solution space feasible region, obstacle-climbing capability of the mobile mechanism was analyzed. Given the same obstacle heights and contact angles of wheel-ground, the single side forward obstacle-climbing of the wheels was simulated respectively, and the results show that the rear wheel has the best obstacle-climbing capability, the middle wheel is the worst, and the front wheel is moderate.",
"title": ""
},
{
"docid": "9b254da42083948029120552ede69652",
"text": "Smart contracts are computer programs that can be consistently executed by a network of mutually distrusting nodes, without the arbitration of a trusted authority. Because of their resilience to tampering , smart contracts are appealing in many scenarios, especially in those which require transfers of money to respect certain agreed rules (like in financial services and in games). Over the last few years many platforms for smart contracts have been proposed, and some of them have been actually implemented and used. We study how the notion of smart contract is interpreted in some of these platforms. Focussing on the two most widespread ones, Bitcoin and Ethereum, we quantify the usage of smart contracts in relation to their application domain. We also analyse the most common programming patterns in Ethereum, where the source code of smart contracts is available.",
"title": ""
},
{
"docid": "a41444799f295e5fc325626fd663d77d",
"text": "Lexicon-based approaches to Twitter sentiment analysis are gaining much popularity due to their simplicity, domain independence, and relatively good performance. These approaches rely on sentiment lexicons, where a collection of words are marked with fixed sentiment polarities. However, words’ sentiment orientation (positive, neural, negative) and/or sentiment strengths could change depending on context and targeted entities. In this paper we present SentiCircle; a novel lexicon-based approach that takes into account the contextual and conceptual semantics of words when calculating their sentiment orientation and strength in Twitter. We evaluate our approach on three Twitter datasets using three different sentiment lexicons. Results show that our approach significantly outperforms two lexicon baselines. Results are competitive but inconclusive when comparing to state-of-art SentiStrength, and vary from one dataset to another. SentiCircle outperforms SentiStrength in accuracy on average, but falls marginally behind in F-measure.",
"title": ""
},
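The SentiCircle passage above contrasts the proposed contextual method with lexicons that assign fixed polarities. The snippet below implements only that fixed-prior baseline (the kind of approach the paper improves on), using a made-up four-word lexicon; it is not the SentiCircle algorithm.

```python
# Minimal sketch of fixed-prior lexicon scoring: word polarities never adapt
# to context. The tiny lexicon and tweet are invented for illustration.
LEXICON = {"great": 0.8, "love": 0.7, "slow": -0.4, "terrible": -0.9}

def lexicon_score(tweet):
    tokens = tweet.lower().split()
    scores = [LEXICON[t] for t in tokens if t in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

print(lexicon_score("I love this phone but the battery is slow"))  # ~0.15 -> weakly positive
```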
{
"docid": "b4e56855d6f41c5829b441a7d2765276",
"text": "College student attendance management of class plays an important position in the work of management of college student, this can help to urge student to class on time, improve learning efficiency, increase learning grade, and thus entirely improve the education level of the school. Therefore, colleges need an information system platform of check attendance management of class strongly to enhance check attendance management of class using the information technology which gathers the basic information of student automatically. According to current reality and specific needs of check attendance and management system of college students and the exist device of the system. Combined with the study of college attendance system, this paper gave the node design of check attendance system of class which based on RFID on the basic of characteristics of embedded ARM and RFID technology.",
"title": ""
},
{
"docid": "88ffb30f1506bedaf7c1a3f43aca439e",
"text": "The multiprotein mTORC1 protein kinase complex is the central component of a pathway that promotes growth in response to insulin, energy levels, and amino acids and is deregulated in common cancers. We find that the Rag proteins--a family of four related small guanosine triphosphatases (GTPases)--interact with mTORC1 in an amino acid-sensitive manner and are necessary for the activation of the mTORC1 pathway by amino acids. A Rag mutant that is constitutively bound to guanosine triphosphate interacted strongly with mTORC1, and its expression within cells made the mTORC1 pathway resistant to amino acid deprivation. Conversely, expression of a guanosine diphosphate-bound Rag mutant prevented stimulation of mTORC1 by amino acids. The Rag proteins do not directly stimulate the kinase activity of mTORC1, but, like amino acids, promote the intracellular localization of mTOR to a compartment that also contains its activator Rheb.",
"title": ""
},
{
"docid": "ade88f8a9aa8a47dd2dc5153b3584695",
"text": "A software environment is described which provides facilities at a variety of levels for “animating” algorithms: exposing properties of programs by displaying multiple dynamic views of the program and associated data structures. The system is operational on a network of graphics-based, personal workstations and has been used successfully in several applications for teaching and research in computer science and mathematics. In this paper, we outline the conceptual framework that we have developed for animating algorithms, describe the system that we have implemented, and give several examples drawn from the host of algorithms that we have animated.",
"title": ""
},
{
"docid": "426a7c1572e9d68f4ed2429f143387d5",
"text": "Face tracking is an active area of computer vision research and an important building block for many applications. However, opposed to face detection, there is no common benchmark data set to evaluate a tracker’s performance, making it hard to compare results between different approaches. In this challenge we propose a data set, annotation guidelines and a well defined evaluation protocol in order to facilitate the evaluation of face tracking systems in the future.",
"title": ""
},
{
"docid": "5e9dce428a2bcb6f7bc0074d9fe5162c",
"text": "This paper describes a real-time motion planning algorithm, based on the rapidly-exploring random tree (RRT) approach, applicable to autonomous vehicles operating in an urban environment. Extensions to the standard RRT are predominantly motivated by: 1) the need to generate dynamically feasible plans in real-time; 2) safety requirements; 3) the constraints dictated by the uncertain operating (urban) environment. The primary novelty is in the use of closed-loop prediction in the framework of RRT. The proposed algorithm was at the core of the planning and control software for Team MIT's entry for the 2007 DARPA Urban Challenge, where the vehicle demonstrated the ability to complete a 60 mile simulated military supply mission, while safely interacting with other autonomous and human driven vehicles.",
"title": ""
},
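The motion-planning passage above extends the standard RRT with closed-loop prediction. The sketch below is only the textbook RRT loop in 2-D (sample, find the nearest tree node, extend a fixed step), with no obstacles and no vehicle model, to make the baseline being extended concrete.

```python
# Bare-bones 2-D RRT: sample a point, find the nearest tree node, extend a
# fixed step toward the sample. No obstacle checking and no closed-loop
# prediction -- this only illustrates the core loop the paper builds on.
import math
import random

def rrt(start, goal, step=0.5, goal_tol=0.5, iters=5000, bounds=(0.0, 10.0)):
    random.seed(1)
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        sample = (random.uniform(*bounds), random.uniform(*bounds))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        if d == 0.0:
            continue
        new = (nx + step * (sample[0] - nx) / d, ny + step * (sample[1] - ny) / d)
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            path, j = [], len(nodes) - 1
            while j is not None:       # walk back up to the root
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None

print(rrt((1.0, 1.0), (9.0, 9.0)))
```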
{
"docid": "1a02d963590683c724a814f341f94f92",
"text": "The concept of the quality attribute scenario was introduced in 2003 to support the development of software architectures. This concept is useful because it provides an operational means to represent the quality requirements of a system. It also provides a more concrete basis with which to teach software architecture. Teaching this concept however has some unexpected issues. In this paper, I present my experiences of teaching quality attribute scenarios and outline Bus Tracker, a case study I have developed to support my teaching.",
"title": ""
},
{
"docid": "5935224c53222d0234adffddae23eb04",
"text": "The multipath-rich wireless environment associated with typical wireless usage scenarios is characterized by a fading channel response that is time-varying, location-sensitive, and uniquely shared by a given transmitter-receiver pair. The complexity associated with a richly scattering environment implies that the short-term fading process is inherently hard to predict and best modeled stochastically, with rapid decorrelation properties in space, time, and frequency. In this paper, we demonstrate how the channel state between a wireless transmitter and receiver can be used as the basis for building practical secret key generation protocols between two entities. We begin by presenting a scheme based on level crossings of the fading process, which is well-suited for the Rayleigh and Rician fading models associated with a richly scattering environment. Our level crossing algorithm is simple, and incorporates a self-authenticating mechanism to prevent adversarial manipulation of message exchanges during the protocol. Since the level crossing algorithm is best suited for fading processes that exhibit symmetry in their underlying distribution, we present a second and more powerful approach that is suited for more general channel state distributions. This second approach is motivated by observations from quantizing jointly Gaussian processes, but exploits empirical measurements to set quantization boundaries and a heuristic log likelihood ratio estimate to achieve an improved secret key generation rate. We validate both proposed protocols through experimentations using a customized 802.11a platform, and show for the typical WiFi channel that reliable secret key establishment can be accomplished at rates on the order of 10 b/s.",
"title": ""
},
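The key-generation passage above extracts bits from level crossings of the fading process. The sketch below shows a stripped-down quantizer in that spirit: samples well above an upper threshold become 1, samples well below a lower threshold become 0, and the rest are dropped. The thresholds, the synthetic RSSI traces, and the omission of excursion-length filtering and reconciliation are all simplifying assumptions, not the paper's protocol.

```python
import numpy as np

def quantize_bits(rssi, alpha=0.5):
    rssi = np.asarray(rssi, dtype=float)
    mean, std = rssi.mean(), rssi.std()
    q_plus, q_minus = mean + alpha * std, mean - alpha * std
    bits = []
    for v in rssi:
        if v > q_plus:
            bits.append(1)
        elif v < q_minus:
            bits.append(0)
        # samples between the two thresholds are discarded
    return bits

rng = np.random.default_rng(2)
alice = rng.normal(-60, 4, 200)            # Alice's channel measurements (dBm), synthetic
bob = alice + rng.normal(0, 0.5, 200)      # Bob sees a noisy but correlated version
print(quantize_bits(alice)[:16])
print(quantize_bits(bob)[:16])
```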
{
"docid": "6646b66370ed02eb84661c8505eb7563",
"text": "Re-identification is generally carried out by encoding the appearance of a subject in terms of outfit, suggesting scenarios where people do not change their attire. In this paper we overcome this restriction, by proposing a framework based on a deep convolutional neural network, SOMAnet, that additionally models other discriminative aspects, namely, structural attributes of the human figure (e.g. height, obesity, gender). Our method is unique in many respects. First, SOMAnet is based on the Inception architecture, departing from the usual siamese framework. This spares expensive data preparation (pairing images across cameras) and allows the understanding of what the network learned. Second, and most notably, the training data consists of a synthetic 100K instance dataset, SOMAset, created by photorealistic human body generation software. Synthetic data represents a good compromise between realistic imagery, usually not required in re-identification since surveillance cameras capture low-resolution silhouettes, and complete control of the samples, which is useful in order to customize the data w.r.t. the surveillance scenario at-hand, e.g. ethnicity. SOMAnet, trained on SOMAset and fine-tuned on recent re-identification benchmarks, outperforms all competitors, matching subjects even with different apparel. The combination of synthetic data with Inception architectures opens up new research avenues in re-identification.",
"title": ""
},
{
"docid": "fc1009e9515d83166e97e4e01ae9ca69",
"text": "In this paper, we present two large video multi-modal datasets for RGB and RGB-D gesture recognition: the ChaLearn LAP RGB-D Isolated Gesture Dataset (IsoGD) and the Continuous Gesture Dataset (ConGD). Both datasets are derived from the ChaLearn Gesture Dataset (CGD) that has a total of more than 50000 gestures for the \"one-shot-learning\" competition. To increase the potential of the old dataset, we designed new well curated datasets composed of 249 gesture labels, and including 47933 gestures manually labeled the begin and end frames in sequences. Using these datasets we will open two competitions on the CodaLab platform so that researchers can test and compare their methods for \"user independent\" gesture recognition. The first challenge is designed for gesture spotting and recognition in continuous sequences of gestures while the second one is designed for gesture classification from segmented data. The baseline method based on the bag of visual words model is also presented.",
"title": ""
},
{
"docid": "906659aa61bbdb5e904a1749552c4741",
"text": "The Rete–Match algorithm is a matching algorithm used to develop production systems. Although this algorithm is the fastest known algorithm, for many patterns and many objects matching, it still suffers from considerable amount of time needed due to the recursive nature of the problem. In this paper, a parallel version of the Rete–Match algorithm for distributed memory architecture is presented. Also, a theoretical analysis to its correctness and performance is discussed. q 1998 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "0ea07af19fc199f6a9909bd7df0576a1",
"text": "Detection of overlapping communities in complex networks has motivated recent research in the relevant fields. Aiming this problem, we propose a Markov dynamics based algorithm, called UEOC, which means, “unfold and extract overlapping communities”. In UEOC, when identifying each natural community that overlaps, a Markov random walk method combined with a constraint strategy, which is based on the corresponding annealed network (degree conserving random network), is performed to unfold the community. Then, a cutoff criterion with the aid of a local community function, called conductance, which can be thought of as the ratio between the number of edges inside the community and those leaving it, is presented to extract this emerged community from the entire network. The UEOC algorithm depends on only one parameter whose value can be easily set, and it requires no prior knowledge on the hidden community structures. The proposed UEOC has been evaluated both on synthetic benchmarks and on some real-world networks, and was compared with a set of competing algorithms. Experimental result has shown that UEOC is highly effective and efficient for discovering overlapping communities.",
"title": ""
}
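The UEOC passage above uses conductance as its cut-off criterion. The snippet below computes the standard form of that score, cut(S) / min(vol(S), vol(V∖S)), on a toy graph; the example community is arbitrary and not taken from the paper.

```python
# Conductance of a node set S: boundary (cut) edges divided by the smaller of
# the two volumes (sums of node degrees). Lower values mean S is better
# separated from the rest of the graph. The graph and community are toy data.
import networkx as nx

def conductance(G, S):
    S = set(S)
    cut = sum(1 for u, v in G.edges() if (u in S) != (v in S))
    vol_S = sum(d for _, d in G.degree(S))
    vol_rest = sum(d for _, d in G.degree(set(G) - S))
    return cut / min(vol_S, vol_rest)

G = nx.karate_club_graph()
community = [0, 1, 2, 3, 7, 13]
print(round(conductance(G, community), 3))
```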
] | scidocsrr |
f8965c62a7b6fbba3e11d13a94a648c5 | Establishing moderators and biosignatures of antidepressant response in clinical care (EMBARC): Rationale and design. | [
{
"docid": "469d83dd9996ca27217907362f44304c",
"text": "Although cells in many brain regions respond to reward, the cortical-basal ganglia circuit is at the heart of the reward system. The key structures in this network are the anterior cingulate cortex, the orbital prefrontal cortex, the ventral striatum, the ventral pallidum, and the midbrain dopamine neurons. In addition, other structures, including the dorsal prefrontal cortex, amygdala, hippocampus, thalamus, and lateral habenular nucleus, and specific brainstem structures such as the pedunculopontine nucleus, and the raphe nucleus, are key components in regulating the reward circuit. Connectivity between these areas forms a complex neural network that mediates different aspects of reward processing. Advances in neuroimaging techniques allow better spatial and temporal resolution. These studies now demonstrate that human functional and structural imaging results map increasingly close to primate anatomy.",
"title": ""
}
] | [
{
"docid": "39e550b269a66f31d467269c6389cde0",
"text": "The artificial intelligence community has seen a recent resurgence in the area of neural network study. Inspired by the workings of the brain and nervous system, neural networks have solved some persistent problems in vision and speech processing. However, the new systems may offer an alternative approach to decision-making via high level pattern recognition. This paper will describe the distinguishing features of neurally inspired systems, and present popular systems in a discrete-time, algorithmic framework. Examples of applications to decision problems will appear, and guidelines for their use in operations research will be established.",
"title": ""
},
{
"docid": "7f1eb105b7a435993767e4a4b40f7ed9",
"text": "In the last two decades, organizations have recognized, indeed fixated upon, the impOrtance of quality and quality management One manifestation of this is the emergence of the total quality management (TQM) movement, which has been proclaimed as the latest and optimal way of managing organizations. Likewise, in the domain of human resource management, the concept of quality of work life (QWL) has also received much attention of late from theoreticians, researchers, and practitioners. However, little has been done to build a bridge between these two increasingly important concepts, QWL and TQM. The purpose of this research is to empirically examine the relationship between quality of work life (the internalized attitudes employees' have about their jobs) and an indicatorofTQM, customer service attitudes, CSA (the externalized signals employees' send to customers about their jobs). In addition, this study examines how job involvement and organizational commitment mediate the relationship between QWL and CSA. OWL and <:sA HlU.3 doc JJ a9t94 page 3 INTRODUCTION Quality and quality management have become increasingly important topics for both practitioners and researchers (Anderson, Rungtusanatham, & Schroeder, 1994). Among the many quality related activities that have arisen, the principle of total quality mana~ement (TQM) has been advanced as the optimal approach for managing people and processes. Indeed, it is considered by some to be the key to ensuring the long-term viability of organizations (Feigenbaum, 1982). Ofcourse, niany companies have invested heavily in total quality efforts in the form of capital expenditures on plant and equipment, and through various human resource management programs designed to spread the quality gospel. However, many still argue that there is insufficient theoretical development and empirical eviden~e for the determinants and consequences of quality management initiatives (Dean & Bowen, 1994). Mter reviewing the relevant research literatures, we find that three problems persist in the research on TQM. First, a definition of quality has not been agreed upon. Even more problematic is the fact that many of the definitions that do exist are continuously evolving. Not smprisingly, these variable definitions often lead to inconsistent and even conflicting conclusions, Second, very few studies have systematically examined these factors that influence: the quality of goods and services, the implementation of quality activities, or the performance of organizations subsequent to undertaking quality initiatives (Spencer, 1994). Certainly this has been true for quality-related human resource management interventions. Last, TQM has suffered from an \"implementation problem\" (Reger, Gustafson, Demarie, & Mullane, 1994, p. 565) which has prevented it from transitioning from the theoretical to the applied. In the domain of human resource management, quality of working life (QWL) has also received a fair amount of attention of late from theorists, researchers, and practitioners. The underlying, and mostimportant, principles of QWL capture an employee's satisfaction with and feelings about their: work, work environment, and organization. Most who study QWL, and TQM for that matter, tend to focus on the importance of employee systems and organizational performance, whereas researchers in the field ofHRM OWLmdCSA HlU.3doc 1J1l2f}4 pBgc4 usually emphasize individual attitudes and individual performance (Walden, 1994). 
Fmthennore, as Walden (1994) alludes to, there are significantly different managerial prescriptions and applied levels for routine human resource management processes, such as selection, performance appraisal, and compensation, than there are for TQM-driven processes, like teamwork, participative management, and shared decision-making (Deming, 1986, 1993; Juran, 1989; M. Walton, 1986; Dean & Bowen, 1994). To reiterate, these variations are attributable to the difference between a mico focus on employees as opposed to a more macrofocus on employee systems. These specific differences are but a few of the instances where the views of TQM and the views of traditional HRM are not aligned (Cardy & Dobbins, 1993). In summary, although TQM is a ubiquitous organizational phenomenon; it has been given little research attention, especially in the form ofempirical studies. Therefore, the goal of this study is to provide an empirical assessment of how one, internalized, indicator ofHRM effectiveness, QWL, is associated with one, externalized, indicator of TQM, customer service attitudes, CSA. In doing so, it bridges the gap between \"employee-focused\" H.RM outcoines and \"customer-focused\" TQM consequences. In addition, it examines the mediating effects of organizational commitment and job involvement on this relationship. QUALITY OF WORK LIFE AND CUSTOMER SERVICE AITITUDES In this section, we introduce and review the main principles of customer service attitudes, CSA, and discuss its measurement Thereafter, our extended conceptualization and measurement of QWL will be presented. Fmally, two variables hypothesized to function as mediators of the relationship between CSA and QWL, organization commitment and job involvement, will be· explored. Customer Service Attitudes (CSA) Despite all the ruminations about it in the business and trade press, TQM still remains an ambiguous notion, one that often gives rise to as many different definitions as there are observers. Some focus on the presence of organizational systems. Others, the importance of leadership. ., Many stress the need to reduce variation in organizational processes (Deming, 1986). A number · OWL and CSA mn.3 doc 11 fl9tlJ4 page 5 emphasize reducing costs through q~ty improvement (p.B. Crosby, 1979). Still others focus on quality planing, control, and improvement (Juran, 1989). Regardless of these differences, however, the most important, generally agreed upon principle is to be \"customer focused\" (Feigenbaum, 1982). The cornerstone for this principle is the belief that customer satisfaction and customer judgments about the organization and itsproducts are the most important determinants of long-term organizational viability (Oliva, Oliver & MacMillan, 1992). Not surprisingly, this belief is a prominent tenet in both the manufacturing and service sectors alike. Conventional wisdom holds that quality can best be evaluated from the customers' perspective. Certainly, customers can easily articulate how well a product or service meets their expectations. Therefore, managers and researchers must take into account subjective and cognitive factors that influence customers' judgments when trying to identify influential customer cues, rather than just relying on organizational presumptions. Recently, for example, Hannon & Sano (1994) described how customer-driven HR strategies and practices are pervasive in Japan. 
An example they cited was the practice of making the top graduates from the best schools work in low-level customer service jobs for their first 1-2 years so that they might better understand customers and their needs. To be sure, defining quality in terms of whether a product or service meets the expectations of customers is all-encompassing. As a result of the breadth of this issue, and the limited research on this topic, many important questions about the service relationship, particularly those pertaining to exchanges between employees and customers, linger. Some include: "What are the key dimensions of service quality?" and "What are the actions service employees might direct their efforts to in order to foster good relationships with customers?" Arguably, the most readily obvious manifestations of quality for any customer are the service attitudes of employees. In fact, during the employee-customer interaction, conventional wisdom holds that employees' customer service attitudes influence customer satisfaction, customer evaluations, and decisions to buy. According to Rosander (1980), there are five dimensions of service quality: quality of employee performance, facility, data, decision, and outcome. Undoubtedly, the performance of the employee influences customer satisfaction. This phenomenon has been referred to as interactive quality (Lehtinen & Lehtinen, 1982). Parasuraman, Zeithaml, & Berry (1985) go so far as to suggest that service quality is ultimately a function of the relationship between the employee and the customer, not the product or the price. Sasser, Olsen, & Wyckoff (1987) echo the assertion that personnel performance is a critical factor in the satisfaction of customers. If all of them are right, the relationship between satisfaction with quality of work life and customer service attitudes cannot be understated. Measuring Customer Service Attitudes The challenge of measuring service quality has increasingly captured the attention of researchers (Teas, 1994; Cronin & Taylor, 1992). While the substance and determinants of quality may remain undefined, its importance to organizations is unquestionable. Nevertheless, numerous problems inherent in the measurement of customer service attitudes still exist (Reeves & Bednar, 1994). Perhaps the complexities involved in measuring this construct have deterred many researchers from attempting to define and model service quality. Maybe this is also the reason why many of the efforts to define and measure service quality have emanated primarily from manufacturing, rather than service, settings. When it has been measured, quality has sometimes been defined as a "zero defect" policy, a perspective the Japanese have embraced. Alternatively, P.B. Crosby (1979) quantifies quality as "conformance to requirements." Garvin (1983; 1988), on the other hand, measures quality in terms of counting the incidence of "internal failures" and "external failures." Other definitions include "value" (Abbot, 1955; Feigenbaum, 1982), "concordance to specification" (Gilmo",
"title": ""
},
{
"docid": "e8792ced13f1be61d031e2b150cc5cf6",
"text": "Scientific literature cites a wide range of values for caffeine content in food products. The authors suggest the following standard values for the United States: coffee (5 oz) 85 mg for ground roasted coffee, 60 mg for instant and 3 mg for decaffeinated; tea (5 oz): 30 mg for leaf/bag and 20 mg for instant; colas: 18 mg/6 oz serving; cocoa/hot chocolate: 4 mg/5 oz; chocolate milk: 4 mg/6 oz; chocolate candy: 1.5-6.0 mg/oz. Some products from the United Kingdom and Denmark have higher caffeine content. Caffeine consumption survey data are limited. Based on product usage and available consumption data, the authors suggest a mean daily caffeine intake for US consumers of 4 mg/kg. Among children younger than 18 years of age who are consumers of caffeine-containing foods, the mean daily caffeine intake is about 1 mg/kg. Both adults and children in Denmark and UK have higher levels of caffeine intake.",
"title": ""
},
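The caffeine passage above suggests a mean adult intake of 4 mg/kg per day and 85 mg per 5 oz cup of ground roasted coffee. A small worked example of that arithmetic, with an assumed 70 kg body weight:

```python
# Worked arithmetic on the abstract's figures: 4 mg/kg/day for adults and
# 85 mg per 5 oz cup of ground roasted coffee. The 70 kg body weight is an
# assumed example value, not from the passage.
weight_kg = 70
mean_intake_mg = 4 * weight_kg            # 280 mg/day
cups_equivalent = mean_intake_mg / 85     # ~3.3 cups of ground roasted coffee
print(mean_intake_mg, round(cups_equivalent, 1))
```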
{
"docid": "9d1dc15130b9810f6232b4a3c77e8038",
"text": "This paper argues that we should seek the golden middle way between dynamically and statically typed languages.",
"title": ""
},
{
"docid": "8a21ff7f3e4d73233208d5faa70eb7ce",
"text": "Achieving robustness and energy efficiency in nanoscale CMOS process technologies is made challenging due to the presence of process, temperature, and voltage variations. Traditional fault-tolerance techniques such as N-modular redundancy (NMR) employ deterministic error detection and correction, e.g., majority voter, and tend to be power hungry. This paper proposes soft NMR that nontrivially extends NMR by consciously exploiting error statistics caused by nanoscale artifacts in order to design robust and energy-efficient systems. In contrast to conventional NMR, soft NMR employs Bayesian detection techniques in the voter. Soft voter algorithms are obtained through optimization of appropriate application aware cost functions. Analysis indicates that, on average, soft NMR outperforms conventional NMR. Furthermore, unlike NMR, in many cases, soft NMR is able to generate a correct output even when all N replicas are in error. This increase in robustness is then traded-off through voltage scaling to achieve energy efficiency. The design of a discrete cosine transform (DCT) image coder is employed to demonstrate the benefits of the proposed technique. Simulations in a commercial 45 nm, 1.2 V, CMOS process show that soft NMR provides up to 10× improvement in robustness, and 35 percent power savings over conventional NMR.",
"title": ""
},
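The soft NMR passage above replaces the majority voter with a Bayesian detector and notes that a soft voter can recover the correct output even when every replica is in error. The toy comparison below illustrates that effect with a simple likelihood-weighted voter under an assumed Gaussian error model; it is not the detector derived in the paper, and the replica values and candidate set are made up.

```python
# Hard majority voting vs. a simple "soft" (likelihood-weighted) voter over
# three replica outputs. Assumed true value: 13; all three replicas are wrong.
import numpy as np

def majority_vote(outputs):
    vals, counts = np.unique(outputs, return_counts=True)
    return vals[np.argmax(counts)]

def soft_vote(outputs, candidates, sigma=1.0):
    outputs = np.asarray(outputs, dtype=float)
    # log-likelihood of each candidate having produced all replica outputs
    ll = [-np.sum((outputs - c) ** 2) / (2 * sigma ** 2) for c in candidates]
    return candidates[int(np.argmax(ll))]

replicas = [12, 12, 15]            # every replica deviates from the true value 13
candidates = list(range(0, 32))    # assumed finite output alphabet
print(majority_vote(replicas))     # 12 -> the hard voter locks onto the repeated error
print(soft_vote(replicas, candidates))  # 13 -> the soft voter recovers the correct value
```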
{
"docid": "373c89beb40ce164999892be2ccb8f46",
"text": "Recent advances in mobile technologies (esp., smart phones and tablets with built-in cameras, GPS and Internet access) made augmented reality (AR ) applications available for the broad public. While many researchers have examined the af fordances and constraints of AR for teaching and learning, quantitative evidence for it s effectiveness is still scarce. To contribute to filling this research gap, we designed and condu cted a pretest-posttest crossover field experiment with 101 participants at a mathematics exh ibition to measure the effect of AR on acquiring and retaining mathematical knowledge in a n informal learning environment. We hypothesized that visitors acquire more knowledge f rom augmented exhibits than from exhibits without AR. The theoretical rationale for our h ypothesis is that AR allows for the efficient and effective implementation of a subset of the des ign principles defined in the cognitive theory of multimedia. The empirical results we obtaine d show that museum visitors performed better on knowledge acquisition and retention tests related to augmented exhibits than to nonaugmented exhibits and that they perceived AR as a valuable and desirable add-on for museum exhibitions.",
"title": ""
},
{
"docid": "591e4719cadd8b9e6dfda932856fffce",
"text": "Over the last two decades, multiple classifier system (MCS) or classifier ensemble has shown great potential to improve the accuracy and reliability of remote sensing image classification. Although there are lots of literatures covering the MCS approaches, there is a lack of a comprehensive literature review which presents an overall architecture of the basic principles and trends behind the design of remote sensing classifier ensemble. Therefore, in order to give a reference point for MCS approaches, this paper attempts to explicitly review the remote sensing implementations of MCS and proposes some modified approaches. The effectiveness of existing and improved algorithms are analyzed and evaluated by multi-source remotely sensed images, including high spatial resolution image (QuickBird), hyperspectral image (OMISII) and multi-spectral image (Landsat ETM+). Experimental results demonstrate that MCS can effectively improve the accuracy and stability of remote sensing image classification, and diversity measures play an active role for the combination of multiple classifiers. Furthermore, this survey provides a roadmap to guide future research, algorithm enhancement and facilitate knowledge accumulation of MCS in remote sensing community.",
"title": ""
},
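The multiple-classifier-system passage above notes that diversity measures play an active role when combining classifiers. The snippet below computes one common pairwise diversity score, the disagreement measure, from two classifiers' 0/1 correctness vectors; the vectors are invented for illustration.

```python
# Pairwise disagreement measure: the fraction of samples on which exactly one
# of the two classifiers is correct. Higher values indicate a more diverse pair.
import numpy as np

def disagreement(correct_a, correct_b):
    a = np.asarray(correct_a, dtype=bool)
    b = np.asarray(correct_b, dtype=bool)
    return np.mean(a ^ b)

clf1_correct = [1, 1, 0, 1, 0, 1, 1, 0]   # made-up per-sample correctness flags
clf2_correct = [1, 0, 1, 1, 0, 0, 1, 1]
print(disagreement(clf1_correct, clf2_correct))   # 0.5 -> fairly diverse pair
```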
{
"docid": "fb97b11eba38f84f38b473a09119162a",
"text": "We show how to encrypt a relational database in such a way that it can efficiently support a large class of SQL queries. Our construction is based solely on structured encryption and does not make use of any property-preserving encryption (PPE) schemes such as deterministic and order-preserving encryption. As such, our approach leaks considerably less than PPE-based solutions which have recently been shown to reveal a lot of information in certain settings (Naveed et al., CCS ’15 ). Our construction achieves asymptotically optimal query complexity under very natural conditions on the database and queries.",
"title": ""
},
{
"docid": "5a583fe6fae9f0624bcde5043c56c566",
"text": "In this paper, a microstrip dipole antenna on a flexible organic substrate is proposed. The antenna arms are tilted to make different variations of the dipole with more compact size and almost same performance. The antennas are fed using a coplanar stripline (CPS) geometry (Simons, 2001). The antennas are then conformed over cylindrical surfaces and their performances are compared to their flat counterparts. Good performance is achieved for both the flat and conformal antennas.",
"title": ""
},
{
"docid": "09e164aa239be608e8c2ba250d168ebc",
"text": "The alarming growth rate of malicious apps has become a serious issue that sets back the prosperous mobile ecosystem. A recent report indicates that a new malicious app for Android is introduced every 10 s. To combat this serious malware campaign, we need a scalable malware detection approach that can effectively and efficiently identify malware apps. Numerous malware detection tools have been developed, including system-level and network-level approaches. However, scaling the detection for a large bundle of apps remains a challenging task. In this paper, we introduce Significant Permission IDentification (SigPID), a malware detection system based on permission usage analysis to cope with the rapid increase in the number of Android malware. Instead of extracting and analyzing all Android permissions, we develop three levels of pruning by mining the permission data to identify the most significant permissions that can be effective in distinguishing between benign and malicious apps. SigPID then utilizes machine-learning-based classification methods to classify different families of malware and benign apps. Our evaluation finds that only 22 permissions are significant. We then compare the performance of our approach, using only 22 permissions, against a baseline approach that analyzes all permissions. The results indicate that when a support vector machine is used as the classifier, we can achieve over 90% of precision, recall, accuracy, and F-measure, which are about the same as those produced by the baseline approach while incurring the analysis times that are 4–32 times less than those of using all permissions. Compared against other state-of-the-art approaches, SigPID is more effective by detecting 93.62% of malware in the dataset and 91.4% unknown/new malware samples.",
"title": ""
},
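The SigPID passage above prunes Android permissions down to 22 significant ones and classifies with an SVM. The sketch below approximates that spirit with a generic mutual-information feature selector over a synthetic permission matrix; it is not the paper's three-level pruning, and the data are random placeholders.

```python
# Rough sketch of permission-based detection: rank binary permission features
# by mutual information with the benign/malware label, keep the top k, and
# train an SVM on the reduced matrix. Data below are synthetic placeholders.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_apps, n_perms = 200, 50
X = rng.integers(0, 2, size=(n_apps, n_perms))   # synthetic app-by-permission matrix
y = rng.integers(0, 2, size=n_apps)              # synthetic benign(0)/malware(1) labels

model = make_pipeline(
    SelectKBest(mutual_info_classif, k=22),      # keep 22 "significant" permissions
    SVC(kernel="linear"),
)
model.fit(X, y)
print(model.score(X, y))
```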
{
"docid": "c7857bde224ef6252602798c349beb44",
"text": "Context Several studies show that people with low health literacy skills have poorer health-related knowledge and comprehension. Contribution This updated systematic review of 96 studies found that low health literacy is associated with poorer ability to understand and follow medical advice, poorer health outcomes, and differential use of some health care services. Caution No studies examined the relationship between oral literacy (speaking and listening skills) and outcomes. Implication Although it is challenging, we need to find feasible ways to improve patients' health literacy skills and reduce the negative effects of low health literacy on outcomes. The Editors The term health literacy refers to a set of skills that people need to function effectively in the health care environment (1). These skills include the ability to read and understand text and to locate and interpret information in documents (print literacy); use quantitative information for tasks, such as interpreting food labels, measuring blood glucose levels, and adhering to medication regimens (numeracy); and speak and listen effectively (oral literacy) (2, 3). Approximately 80 million U.S. adults are thought to have limited health literacy, which puts them at risk for poorer health outcomes. Rates of limited health literacy are higher among elderly, minority, and poor persons and those with less than a high school education (4). Numerous policy and advocacy organizations have expressed concern about barriers caused by low health literacy, notably the Institute of Medicine's report Health Literacy: A Prescription to End Confusion in 2004 (5) and the U.S. Department of Health and Human Services' report National Action Plan to Improve Health Literacy in 2010 (6). To understand the relationship between health literacy level and use of health care services, health outcomes, costs, and disparities in health outcomes, we conducted a systematic evidence review for the Agency for Healthcare Research and Quality (AHRQ) (published in 2004), which was limited to the relationship between print literacy and health outcomes (7). We found a consistent association between low health literacy (measured by reading skills) and more limited health-related knowledge and comprehension. The relationship between health literacy level and other outcomes was less clear, primarily because of a lack of studies and relatively unsophisticated methods in the available studies. In this review, we update and expand the earlier review (7). Since 2004, researchers have conducted new and more sophisticated studies. Thus, in synthesizing the literature, we can now consider the relationship between outcomes and health literacy (print literacy alone or combined with numeracy) and between outcomes and the numeracy component of health literacy alone. Methods We developed and followed a protocol that used standard AHRQ Evidence-based Practice Center methods. The full report describes study methods in detail and presents evidence tables for each included study (1). Literature Search We searched MEDLINE, CINAHL, the Cochrane Library, PsycINFO, and ERIC databases. For health literacy, our search dates were from 2003 to May 2010. For numeracy, they were from 1966 to May 2010; we began at an earlier date because numeracy was not addressed in our 2004 review. For this review, we updated our searches beyond what was included in the full report from May 2010 through 22 February 2011 to be current with the most recent literature. 
No Medical Subject Heading terms specifically identify health literacyrelated articles, so we conducted keyword searches, including health literacy, literacy, numeracy, and terms or phrases used to identify related measurement instruments. We also hand-searched reference lists of pertinent review articles and editorials. Appendix Table 1 shows the full search strategy. Appendix Table 1. Search Strategy Study Selection We included English-language studies on persons of all ages whose health literacy or that of their caregivers (including numeracy or oral health literacy) had been measured directly and had not been self-reported. Studies had to compare participants in relation to an outcome, including health care access and service use, health outcomes, and costs of care. For numeracy studies, outcomes also included knowledge, because our earlier review had established the relationship between only health literacy and knowledge. We did not examine outcomes concerning attitudes, social norms, or patientprovider relationships. Data Abstraction and Quality Assessment After determining article inclusion, 1 reviewer entered study data into evidence tables; a second, senior reviewer checked the information for accuracy and completeness. Two reviewers independently rated the quality of studies as good, fair, or poor by using criteria designed to detect potential risk of bias in an observational study (including selection bias, measurement bias, and control for potential confounding) and precision of measurement. Data Synthesis and Strength of Evidence We assessed the overall strength of the evidence for each outcome separately for studies measuring health literacy and those measuring numeracy on the basis of information only from good- and fair-quality studies. Using AHRQ guidance (8), we graded the strength of evidence as high, moderate, low, or insufficient on the basis of the potential risk of bias of included studies, consistency of effect across studies, directness of the evidence, and precision of the estimate (Table 1). We determined the grade on the basis of the literature from the update searches. We then considered whether the findings from the 2004 review would alter our conclusions. We graded the body of evidence for an outcome as low if the evidence was limited to 1 study that controlled for potential confounding variables or to several small studies in which all, or only some, controlled for potential confounding variables or as insufficient if findings across studies were inconsistent or were limited to 1 unadjusted study. Because of heterogeneity across studies in their approaches to measuring health literacy, numeracy, and outcomes, we summarized the evidence through consensus discussions and did not conduct any meta-analyses. Table 1. Strength of Evidence Grades and Definitions Role of the Funding Source AHRQ reviewed a draft report and provided copyright release for this manuscript. The funding source did not participate in conducting literature searches, determining study eligibility, evaluating individual studies, grading evidence, or interpreting results. Results First, we present the results from our literature search and a summary of characteristics across studies, followed by findings specific to health literacy then numeracy. We generally highlight evidence of moderate or high strength and mention only outcomes with low or insufficient evidence. Where relevant, we comment on the evidence provided through the 2004 review. 
Tables 2 and 3 summarize our findings and strength-of-evidence grade for each included health literacy and numeracy outcome, respectively. Table 2. Health Literacy Outcome Results: Strength of Evidence and Summary of Findings, 2004 and 2011 Table 3. Numeracy Outcome Results: Strength of Evidence and Summary of Findings, 2011 Characteristics of Reviewed Studies We identified 3823 citations and evaluated 1012 full-text articles (Appendix Figure). Ultimately, we included 96 studies rated as good or fair quality. These studies were reported in 111 articles because some investigators reported study results in multiple publications (98 articles on health literacy, 22 on numeracy, and 9 on both). We found no studies that examined outcomes by the oral (verbal) component of health literacy. Of the 111 articles, 100 were rated as fair quality. All studies were observational, primarily cross-sectional designs (91 of 111 articles). The Supplement (health literacy) and Appendix Table 2 (numeracy) present summary information for each included article. Supplement. Overview of Health Literacy Studies Appendix Figure. Summary of evidence search and selection. KQ = key question. Appendix Table 2. Overview of Numeracy Studies Studies varied in their measurement of health literacy and numeracy. Commonly used instruments to measure health literacy are the Rapid Estimate of Adult Literacy in Medicine (REALM) (9), the Test of Functional Health Literacy in Adults (TOFHLA) (10), and short TOFHLA (S-TOFHLA). Instruments frequently used to measure numeracy are the Schwartz-Woloshin Numeracy Test (11) and the Wide Range Achievement Test (WRAT) math subtest (12). Studies also differed in how investigators distinguished between levels or thresholds of health literacy, either as a continuous measure or as categorical groups. Some studies identified 3 groups, often called inadequate, marginal, and adequate, whereas others combined 2 of the 3 groups. Because evidence was sparse for evaluating differences between marginal and adequate health literacy, our results focus on the differences between the lowest and highest groups. Studies in this update generally included multivariate analyses rather than simpler unadjusted analyses. They varied considerably, however, in regard to which potential confounding variables are controlled (Supplement and Appendix Table 2). All results reported here are from adjusted analyses that controlled for potential confounding variables, unless otherwise noted. Relationship Between Health Literacy and Outcomes Use of Health Care Services and Access to Care Emergency Care and Hospitalizations. Nine studies examining the risk for emergency care use (13–21) and 6 examining the risk for hospitalizations (14–19) provided moderate evidence showing increased use of both services among people with lower health literacy, including elderly persons, clinic and inner-city hospital patients, patients with asthma, and patients with congestive heart failure.",
"title": ""
},
{
"docid": "f6fa1c4ce34f627d9d7d1ca702272e26",
"text": "One of the most difficult aspects in rhinoplasty is resolving and preventing functional compromise of the nasal valve area reliably. The nasal valves are crucial for the individual breathing competence of the nose. Structural and functional elements contribute to this complex system: the nasolabial angle, the configuration and stability of the alae, the function of the internal nasal valve, the anterior septum symmetrically separating the bilateral airways and giving structural and functional support to the alar cartilage complex and to their junction with the upper lateral cartilages, the scroll area. Subsequently, the open angle between septum and sidewalls is important for sufficient airflow as well as the position and function of the head of the turbinates. The clinical examination of these elements is described. Surgical techniques are more or less well known and demonstrated with patient examples and drawings: anterior septoplasty, reconstruction of tip and dorsum support by septal extension grafts and septal replacement, tip suspension and lateral crural sliding technique, spreader grafts and suture techniques, splay grafts, alar batten grafts, lateral crural extension grafts, and lateral alar suspension. The numerous literature is reviewed.",
"title": ""
},
{
"docid": "f3a044835e9cbd0c13218ab0f9c06dd1",
"text": "Among the various human factors impinging upon making a decision in an uncertain environment, risk and trust are surely crucial ones. Several models for trust have been proposed in the literature but few explicitly take risk into account. This paper analyses the relationship between the two concepts by first looking at how a decision is made to enter into a transaction based on the risk information. We then draw a model of the invested fraction of the capital function of a decision surface. We finally define a model of trust composed of a reliability trust as the probability of transaction success and a decision trust derived from the decision surface.",
"title": ""
},
{
"docid": "03e7070b1eb755d792564077f65ea012",
"text": "The widespread use of online social networks (OSNs) to disseminate information and exchange opinions, by the general public, news media, and political actors alike, has enabled new avenues of research in computational political science. In this paper, we study the problem of quantifying and inferring the political leaning of Twitter users. We formulate political leaning inference as a convex optimization problem that incorporates two ideas: (a) users are consistent in their actions of tweeting and retweeting about political issues, and (b) similar users tend to be retweeted by similar audience. We then apply our inference technique to 119 million election-related tweets collected in seven months during the 2012 U.S. presidential election campaign. On a set of frequently retweeted sources, our technique achieves 94 percent accuracy and high rank correlation as compared with manually created labels. By studying the political leaning of 1,000 frequently retweeted sources, 232,000 ordinary users who retweeted them, and the hashtags used by these sources, our quantitative study sheds light on the political demographics of the Twitter population, and the temporal dynamics of political polarization as events unfold.",
"title": ""
},
{
"docid": "b294a3541182e3195254e83b092f537d",
"text": "This paper describes a new project intended to provide a firmer theoretical and empirical foundation for such tasks as enterprise modeling, enterprise integration, and process re-engineering. The project includes ( 1 ) collecting examples of how different organizations perform sim'lar processes, and ( 2 ) representing these examples in an on-line \"process handbook\" which includes the relative advantages of the alternatives. The handbook is intended to help (a) redesign existing Organizational processes, ( b ) invent new organizational processes that take advantage of information technology, and perhaps (e ) automatically generate sofivare to support organizational processes. A key element of the work is a novel approach to representing processes at various levels of abstraction. This approach uses ideas from computer science about inheritance and from coordinalion theory about managing dependencies. Its primary advantage is that it allows users to explicitly represent the similarities (and differences) among related processes and to easily find or generate sensible alternatives for how a given process could be",
"title": ""
},
{
"docid": "157f5ef02675b789df0f893311a5db72",
"text": "We present a novel spectral shading model for human skin. Our model accounts for both subsurface and surface scattering, and uses only four parameters to simulate the interaction of light with human skin. The four parameters control the amount of oil, melanin and hemoglobin in the skin, which makes it possible to match specific skin types. Using these parameters we generate custom wavelength dependent diffusion profiles for a two-layer skin model that account for subsurface scattering within the skin. These diffusion profiles are computed using convolved diffusion multipoles, enabling an accurate and rapid simulation of the subsurface scattering of light within skin. We combine the subsurface scattering simulation with a Torrance-Sparrow BRDF model to simulate the interaction of light with an oily layer at the surface of the skin. Our results demonstrate that this four parameter model makes it possible to simulate the range of natural appearance of human skin including African, Asian, and Caucasian skin types.",
"title": ""
},
{
"docid": "57502ae793808fded7d446a3bb82ca74",
"text": "Over the last decade, the “digitization” of the electron enterprise has grown at exponential rates. Utility, industrial, commercial, and even residential consumers are transforming all aspects of their lives into the digital domain. Moving forward, it is expected that every piece of equipment, every receptacle, every switch, and even every light bulb will possess some type of setting, monitoring and/or control. In order to be able to manage the large number of devices and to enable the various devices to communicate with one another, a new communication model was needed. That model has been developed and standardized as IEC61850 – Communication Networks and Systems in Substations. This paper looks at the needs of next generation communication systems and provides an overview of the IEC61850 protocol and how it meets these needs. I. Communication System Needs Communication has always played a critical role in the real-time operation of the power system. In the beginning, the telephone was used to communicate line loadings back to the control center as well as to dispatch operators to perform switching operations at substations. Telephoneswitching based remote control units were available as early as the 1930’s and were able to provide status and control for a few points. As digital communications became a viable option in the 1960’s, data acquisition systems (DAS) were installed to automatically collect measurement data from the substations. Since bandwidth was limited, DAS communication protocols were optimized to operate over low-bandwidth communication channels. The “cost” of this optimization was the time it took to configure, map, and document the location of the various data bits received by the protocol. As we move into the digital age, literally thousands of analog and digital data points are available in a single Intelligent Electronic Device (IED) and communication bandwidth is no longer a limiting factor. Substation to master communication data paths operating at 64,000 bits per second are becoming commonplace with an obvious migration path to much high rates. With this migration in technology, the “cost” component of a data acquisition system has now become the configuration and documentation component. Consequently, a key component of a communication system is the ability to describe themselves from both a data and services (communication functions that an IED performs) perspective. Other “key” requirements include: • High-speed IED to IED communication",
"title": ""
},
{
"docid": "950d7d10b09f5d13e09692b2a4576c00",
"text": "Prebiotics, as currently conceived of, are all carbohydrates of relatively short chain length. To be effective they must reach the cecum. Present evidence concerning the 2 most studied prebiotics, fructooligosaccharides and inulin, is consistent with their resisting digestion by gastric acid and pancreatic enzymes in vivo. However, the wide variety of new candidate prebiotics becoming available for human use requires that a manageable set of in vitro tests be agreed on so that their nondigestibility and fermentability can be established without recourse to human studies in every case. In the large intestine, prebiotics, in addition to their selective effects on bifidobacteria and lactobacilli, influence many aspects of bowel function through fermentation. Short-chain fatty acids are a major product of prebiotic breakdown, but as yet, no characteristic pattern of fermentation acids has been identified. Through stimulation of bacterial growth and fermentation, prebiotics affect bowel habit and are mildly laxative. Perhaps more importantly, some are a potent source of hydrogen in the gut. Mild flatulence is frequently observed by subjects being fed prebiotics; in a significant number of subjects it is severe enough to be unacceptable and to discourage consumption. Prebiotics are like other carbohydrates that reach the cecum, such as nonstarch polysaccharides, sugar alcohols, and resistant starch, in being substrates for fermentation. They are, however, distinctive in their selective effect on the microflora and their propensity to produce flatulence.",
"title": ""
},
{
"docid": "e541be7c81576fdef564fd7eba5d67dd",
"text": "As the cost of massively broadband® semiconductors continue to be driven down at millimeter wave (mm-wave) frequencies, there is great potential to use LMDS spectrum (in the 28-38 GHz bands) and the 60 GHz band for cellular/mobile and peer-to-peer wireless networks. This work presents urban cellular and peer-to-peer RF wideband channel measurements using a broadband sliding correlator channel sounder and steerable antennas at carrier frequencies of 38 GHz and 60 GHz, and presents measurements showing the propagation time delay spread and path loss as a function of separation distance and antenna pointing angles for many types of real-world environments. The data presented here show that at 38 GHz, unobstructed Line of Site (LOS) channels obey free space propagation path loss while non-LOS (NLOS) channels have large multipath delay spreads and can exploit many different pointing angles to provide propagation links. At 60 GHz, there is notably more path loss, smaller delay spreads, and fewer unique antenna angles for creating a link. For both 38 GHz and 60 GHz, we demonstrate empirical relationships between the RMS delay spread and antenna pointing angles, and observe that excess path loss (above free space) has an inverse relationship with transmitter-to-receiver separation distance.",
"title": ""
},
{
"docid": "5392e45840929b05b549a64a250774e5",
"text": "Faces in natural images are often occluded by a variety of objects. We propose a fully automated, probabilistic and occlusion-aware 3D morphable face model adaptation framework following an analysis-by-synthesis setup. The key idea is to segment the image into regions explained by separate models. Our framework includes a 3D morphable face model, a prototype-based beard model and a simple model for occlusions and background regions. The segmentation and all the model parameters have to be inferred from the single target image. Face model adaptation and segmentation are solved jointly using an expectation–maximization-like procedure. During the E-step, we update the segmentation and in the M-step the face model parameters are updated. For face model adaptation we apply a stochastic sampling strategy based on the Metropolis–Hastings algorithm. For segmentation, we apply loopy belief propagation for inference in a Markov random field. Illumination estimation is critical for occlusion handling. Our combined segmentation and model adaptation needs a proper initialization of the illumination parameters. We propose a RANSAC-based robust illumination estimation technique. By applying this method to a large face image database we obtain a first empirical distribution of real-world illumination conditions. The obtained empirical distribution is made publicly available and can be used as prior in probabilistic frameworks, for regularization or to synthesize data for deep learning methods.",
"title": ""
}
] | scidocsrr |
acba07b0f0738c55be978ceeccf1a993 | Emotion Recognition Based on Joint Visual and Audio Cues | [
{
"docid": "8877d6753d6b7cd39ba36c074ca56b00",
"text": "Perhaps the most fundamental application of affective computing will be Human-Computer Interaction (HCI) in which the computer should have the ability to detect and track the user's affective states, and make corresponding feedback. The human multi-sensor affect system defines the expectation of multimodal affect analyzer. In this paper, we present our efforts toward audio-visual HCI-related affect recognition. With HCI applications in mind, we take into account some special affective states which indicate users' cognitive/motivational states. Facing the fact that a facial expression is influenced by both an affective state and speech content, we apply a smoothing method to extract the information of the affective state from facial features. In our fusion stage, a voting method is applied to combine audio and visual modalities so that the final affect recognition accuracy is greatly improved. We test our bimodal affect recognition approach on 38 subjects with 11 HCI-related affect states. The extensive experimental results show that the average person-dependent affect recognition accuracy is almost 90% for our bimodal fusion.",
"title": ""
},
{
"docid": "d9ffb9e4bba1205892351b1328977f6c",
"text": "Bayesian network models provide an attractive framework for multimodal sensor fusion. They combine an intuitive graphical representation with efficient algorithms for inference and learning. However, the unsupervised nature of standard parameter learning algorithms for Bayesian networks can lead to poor performance in classification tasks. We have developed a supervised learning framework for Bayesian networks, which is based on the Adaboost algorithm of Schapire and Freund. Our framework covers static and dynamic Bayesian networks with both discrete and continuous states. We have tested our framework in the context of a novel multimodal HCI application: a speech-based command and control interface for a Smart Kiosk. We provide experimental evidence for the utility of our boosted learning approach.",
"title": ""
},
{
"docid": "c8e321ac8b32643ac9cbe151bb9e5f8f",
"text": "The most expressive way humans display emotions is through facial expressions. In this work we report on several advances we have made in building a system for classification of facial expressions from continuous video input. We introduce and test different Bayesian network classifiers for classifying expressions from video, focusing on changes in distribution assumptions, and feature dependency structures. In particular we use Naive–Bayes classifiers and change the distribution from Gaussian to Cauchy, and use Gaussian Tree-Augmented Naive Bayes (TAN) classifiers to learn the dependencies among different facial motion features. We also introduce a facial expression recognition from live video input using temporal cues. We exploit the existing methods and propose a new architecture of hidden Markov models (HMMs) for automatically segmenting and recognizing human facial expression from video sequences. The architecture performs both segmentation and recognition of the facial expressions automatically using a multi-level architecture composed of an HMM layer and a Markov model layer. We explore both person-dependent and person-independent recognition of expressions and compare the different methods. 2003 Elsevier Inc. All rights reserved. * Corresponding author. E-mail addresses: [email protected] (I. Cohen), [email protected] (N. Sebe), ashutosh@ us.ibm.com (A. Garg), [email protected] (L. Chen), [email protected] (T.S. Huang). 1077-3142/$ see front matter 2003 Elsevier Inc. All rights reserved. doi:10.1016/S1077-3142(03)00081-X I. Cohen et al. / Computer Vision and Image Understanding 91 (2003) 160–187 161",
"title": ""
}
] | [
{
"docid": "e0ee4f306bb7539d408f606d3c036ac5",
"text": "Despite the growing popularity of mobile web browsing, the energy consumed by a phone browser while surfing the web is poorly understood. We present an infrastructure for measuring the precise energy used by a mobile browser to render web pages. We then measure the energy needed to render financial, e-commerce, email, blogging, news and social networking sites. Our tools are sufficiently precise to measure the energy needed to render individual web elements, such as cascade style sheets (CSS), Javascript, images, and plug-in objects. Our results show that for popular sites, downloading and parsing cascade style sheets and Javascript consumes a significant fraction of the total energy needed to render the page. Using the data we collected we make concrete recommendations on how to design web pages so as to minimize the energy needed to render the page. As an example, by modifying scripts on the Wikipedia mobile site we reduced by 30% the energy needed to download and render Wikipedia pages with no change to the user experience. We conclude by estimating the point at which offloading browser computations to a remote proxy can save energy on the phone.",
"title": ""
},
{
"docid": "10994a99bb4da87a34d835720d005668",
"text": "Wireless sensor networks (WSNs), consisting of a large number of nodes to detect ambient environment, are widely deployed in a predefined area to provide more sophisticated sensing, communication, and processing capabilities, especially concerning the maintenance when hundreds or thousands of nodes are required to be deployed over wide areas at the same time. Radio frequency identification (RFID) technology, by reading the low-cost passive tags installed on objects or people, has been widely adopted in the tracing and tracking industry and can support an accurate positioning within a limited distance. Joint utilization of WSN and RFID technologies is attracting increasing attention within the Internet of Things (IoT) community, due to the potential of providing pervasive context-aware applications with advantages from both fields. WSN-RFID convergence is considered especially promising in context-aware systems with indoor positioning capabilities, where data from deployed WSN and RFID systems can be opportunistically exploited to refine and enhance the collected data with position information. In this papera, we design and evaluate a hybrid system which combines WSN and RFID technologies to provide an indoor positioning service with the capability of feeding position information into a general-purpose IoT environment. Performance of the proposed system is evaluated by means of simulations and a small-scale experimental set-up. The performed analysis demonstrates that the joint use of heterogeneous technologies can increase the robustness and the accuracy of the indoor positioning systems.",
"title": ""
},
{
"docid": "1c6bf44a2fea9e9b1ffc015759f8986f",
"text": "Convolutional neural networks (CNNs) typically suffer from slow convergence rates in training, which limits their wider application. This paper presents a new CNN learning approach, based on second-order methods, aimed at improving: a) Convergence rates of existing gradient-based methods, and b) Robustness to the choice of learning hyper-parameters (e.g., learning rate). We derive an efficient back-propagation algorithm for simultaneously computing both gradients and second derivatives of the CNN's learning objective. These are then input to a Long Short Term Memory (LSTM) to predict optimal updates of CNN parameters in each learning iteration. Both meta-learning of the LSTM and learning of the CNN are conducted jointly. Evaluation on image classification demonstrates that our second-order backpropagation has faster convergences rates than standard gradient-based learning for the same CNN, and that it converges to better optima leading to better performance under a budgeted time for learning. We also show that an LSTM learned to learn a small CNN network can be readily used for learning a larger network.",
"title": ""
},
{
"docid": "564045d00d2e347252fda301a332f30a",
"text": "In this contribution, the control of a reverse osmosis desalination plant by using an optimal multi-loop approach is presented. Controllers are assumed to be players of a cooperative game, whose solution is obtained by multi-objective optimization (MOO). The MOO problem is solved by applying a genetic algorithm and the final solution is found from this Pareto set. For the reverse osmosis plant a control scheme consisting of two PI control loops are proposed. Simulation results show that in some cases, as for example this desalination plant, multi-loop control with several controllers, which have been obtained by join multi-objective optimization, perform as good as more complex controllers but with less implementation effort.",
"title": ""
},
{
"docid": "848e56ec20ccab212567087178e36979",
"text": "The technologies of mobile communications pervade our society and wireless networks sense the movement of people, generating large volumes of mobility data, such as mobile phone call records and Global Positioning System (GPS) tracks. In this work, we illustrate the striking analytical power of massive collections of trajectory data in unveiling the complexity of human mobility. We present the results of a large-scale experiment, based on the detailed trajectories of tens of thousands private cars with on-board GPS receivers, tracked during weeks of ordinary mobile activity. We illustrate the knowledge discovery process that, based on these data, addresses some fundamental questions of mobility analysts: what are the frequent patterns of people’s travels? How big attractors and extraordinary events influence mobility? How to predict areas of dense traffic in the near future? How to characterize traffic jams and congestions? We also describe M-Atlas, the querying and mining language and system that makes this analytical process possible, providing the mechanisms to master the complexity of transforming raw GPS tracks into mobility knowledge. M-Atlas is centered onto the concept of a trajectory, and the mobility knowledge discovery process can be specified by M-Atlas queries that realize data transformations, data-driven estimation of the parameters of the mining methods, the quality assessment of the obtained results, the quantitative and visual exploration of the discovered behavioral patterns and models, the composition of mined patterns, models and data with further analyses and mining, and the incremental mining strategies to address scalability.",
"title": ""
},
{
"docid": "e8e658d677a3b1a23650b25edd32fc84",
"text": "The aim of the study is to facilitate the suture on the sacral promontory for laparoscopic sacrocolpopexy. We hypothesised that a new method of sacral anchorage using a biosynthetic material, the polyether ether ketone (PEEK) harpoon, might be adequate because of its tensile strength, might reduce complications owing to its well-known biocompatibility, and might shorten the duration of surgery. We verified the feasibility of insertion and quantified the stress resistance of the harpoons placed in the promontory in nine fresh cadavers, using four stress tests in each case. Mean values were analysed and compared using the Wilcoxon and Fisher’s exact tests. The harpoon resists for at least 30 s against a pulling force of 1 N, 5 N and 10 N. Maximum tensile strength is 21 N for the harpoon and 32 N for the suture. Harpoons broke in 6 % and threads in 22 % of cases. Harpoons detached owing to ligament rupture in 64 % of the cases. Regarding failures of the whole complex, the failure involves the harpoon in 92 % of cases and the thread in 56 %. The four possible placements of the harpoon in the promontory were equally safe in terms of resistance to traction. The PEEK harpoon can be easily anchored in the promontory. Thread is more resistant to traction than the harpoon, but the latter makes the surgical technique easier. Any of the four locations tested is feasible for anchoring the device.",
"title": ""
},
{
"docid": "4d383a53c180d5dc4473ab9d7795639a",
"text": "With pervasive applications of medical imaging in health-care, biomedical image segmentation plays a central role in quantitative analysis, clinical diagnosis, and medical intervention. Since manual annotation suffers limited reproducibility, arduous efforts, and excessive time, automatic segmentation is desired to process increasingly larger scale histopathological data. Recently, deep neural networks (DNNs), particularly fully convolutional networks (FCNs), have been widely applied to biomedical image segmentation, attaining much improved performance. At the same time, quantization of DNNs has become an active research topic, which aims to represent weights with less memory (precision) to considerably reduce memory and computation requirements of DNNs while maintaining acceptable accuracy. In this paper, we apply quantization techniques to FCNs for accurate biomedical image segmentation. Unlike existing literatures on quantization which primarily targets memory and computation complexity reduction, we apply quantization as a method to reduce overfitting in FCNs for better accuracy. Specifically, we focus on a state-of-the-art segmentation framework, suggestive annotation [26], which judiciously extracts representative annotation samples from the original training dataset, obtaining an effective small-sized balanced training dataset. We develop two new quantization processes for this framework: (1) suggestive annotation with quantization for highly representative training samples, and (2) network training with quantization for high accuracy. Extensive experiments on the MICCAI Gland dataset show that both quantization processes can improve the segmentation performance, and our proposed method exceeds the current state-of-the-art performance by up to 1%. In addition, our method has a reduction of up to 6.4x on memory usage.",
"title": ""
},
{
"docid": "71b31941082d639dfc6178ff74fba487",
"text": "This paper describes ETH Zurich’s submission to the TREC 2016 Clinical Decision Support (CDS) track. In three successive stages, we apply query expansion based on literal as well as semantic term matches, rank documents in a negation-aware manner and, finally, re-rank them based on clinical intent types as well as semantic and conceptual affinity to the medical case in question. Empirical results show that the proposed method can distill patient representations from raw clinical notes that result in a retrieval performance superior to that of manually constructed case descriptions.",
"title": ""
},
{
"docid": "3be0bd7f02c941f32903f6ad2379f45b",
"text": "Spinal cord injury induces the disruption of blood-spinal cord barrier and triggers a complex array of tissue responses, including endoplasmic reticulum (ER) stress and autophagy. However, the roles of ER stress and autophagy in blood-spinal cord barrier disruption have not been discussed in acute spinal cord trauma. In the present study, we respectively detected the roles of ER stress and autophagy in blood-spinal cord barrier disruption after spinal cord injury. Besides, we also detected the cross-talking between autophagy and ER stress both in vivo and in vitro. ER stress inhibitor, 4-phenylbutyric acid, and autophagy inhibitor, chloroquine, were respectively or combinedly administrated in the model of acute spinal cord injury rats. At day 1 after spinal cord injury, blood-spinal cord barrier was disrupted and activation of ER stress and autophagy were involved in the rat model of trauma. Inhibition of ER stress by treating with 4-phenylbutyric acid decreased blood-spinal cord barrier permeability, prevented the loss of tight junction (TJ) proteins and reduced autophagy activation after spinal cord injury. On the contrary, inhibition of autophagy by treating with chloroquine exacerbated blood-spinal cord barrier permeability, promoted the loss of TJ proteins and enhanced ER stress after spinal cord injury. When 4-phenylbutyric acid and chloroquine were combinedly administrated in spinal cord injury rats, chloroquine abolished the blood-spinal cord barrier protective effect of 4-phenylbutyric acid by exacerbating ER stress after spinal cord injury, indicating that the cross-talking between autophagy and ER stress may play a central role on blood-spinal cord barrier integrity in acute spinal cord injury. The present study illustrates that ER stress induced by spinal cord injury plays a detrimental role on blood-spinal cord barrier integrity, on the contrary, autophagy induced by spinal cord injury plays a furthersome role in blood-spinal cord barrier integrity in acute spinal cord injury.",
"title": ""
},
{
"docid": "c27ba892408391234da524ffab0e7418",
"text": "Sunlight and skylight are rarely rendered correctly in computer graphics. A major reason for this is high computational expense. Another is that precise atmospheric data is rarely available. We present an inexpensive analytic model that approximates full spectrum daylight for various atmospheric conditions. These conditions are parameterized using terms that users can either measure or estimate. We also present an inexpensive analytic model that approximates the effects of atmosphere (aerial perspective). These models are fielded in a number of conditions and intermediate results verified against standard literature from atmospheric science. Our goal is to achieve as much accuracy as possible without sacrificing usability.",
"title": ""
},
{
"docid": "be3640467394a0e0b5a5035749b442e9",
"text": "Data pre-processing is an important and critical step in the data mining process and it has a huge impact on the success of a data mining project.[1](3) Data pre-processing is a step of the Knowledge discovery in databases (KDD) process that reduces the complexity of the data and offers better conditions to subsequent analysis. Through this the nature of the data is better understood and the data analysis is performed more accurately and efficiently. Data pre-processing is challenging as it involves extensive manual effort and time in developing the data operation scripts. There are a number of different tools and methods used for pre-processing, including: sampling, which selects a representative subset from a large population of data; transformation, which manipulates raw data to produce a single input; denoising, which removes noise from data; normalization, which organizes data for more efficient access; and feature extraction, which pulls out specified data that is significant in some particular context. Pre-processing technique is also useful for association rules algo. LikeAprior, Partitioned, Princer-search algo. and many more algos.",
"title": ""
},
{
"docid": "566913d3a3d2e8fe24d6f5ff78440b94",
"text": "We describe a Digital Advertising System Simulation (DASS) for modeling advertising and its impact on user behavior. DASS is both flexible and general, and can be applied to research on a wide range of topics, such as digital attribution, ad fatigue, campaign optimization, and marketing mix modeling. This paper introduces the basic DASS simulation framework and illustrates its application to digital attribution. We show that common position-based attribution models fail to capture the true causal effects of advertising across several simple scenarios. These results lay a groundwork for the evaluation of more complex attribution models, and the development of improved models.",
"title": ""
},
{
"docid": "3aa36b86391a2596ea1fe1fe75470362",
"text": "Experimental and computational studies of the hovering performance of microcoaxial shrouded rotors were carried out. The ATI Mini Multi-Axis Force/Torque Transducer system was used to measure all six components of the force and moment. Meanwhile, numerical simulation of flow field around rotor was carried out using sliding mesh method and multiple reference frame technique by ANASYS FLUENT. The computational results were well agreed with experimental data. Several important factors, such as blade pitch angle, rotor spacing and tip clearance, which influence the performance of shrouded coaxial rotor are studied in detail using CFD method in this paper. Results shows that, evaluated in terms of Figure of Merit, open coaxial rotor is suited for smaller pitch angle condition while shrouded coaxial rotor is suited for larger pitch angle condition. The negative pressure region around the shroud lip is the main source of the thrust generation. In order to have a better performance for shrouded coaxial rotor, the tip clearance must be smaller. The thrust sharing of upper- and lower-rotor is also discussed in this paper.",
"title": ""
},
{
"docid": "785bd7171800d3f2f59f90838a84dc37",
"text": "BACKGROUND\nCancer is considered to develop due to disruptions in the tissue microenvironment in addition to genetic disruptions in the tumor cells themselves. The two most important microenvironmental disruptions in cancer are arguably tissue hypoxia and disrupted circadian rhythmicity. Endothelial cells, which line the luminal side of all blood vessels transport oxygen or endocrine circadian regulators to the tissue and are therefore of key importance for circadian disruption and hypoxia in tumors.\n\n\nSCOPE OF REVIEW\nHere I review recent findings on the role of circadian rhythms and hypoxia in cancer and metastasis, with particular emphasis on how these pathways link tumor metastasis to pathological functions of blood vessels. The involvement of disrupted cell metabolism and redox homeostasis in this context and the use of novel zebrafish models for such studies will be discussed.\n\n\nMAJOR CONCLUSIONS\nCircadian rhythms and hypoxia are involved in tumor metastasis on all levels from pathological deregulation of the cell to the tissue and the whole organism. Pathological tumor blood vessels cause hypoxia and disruption in circadian rhythmicity which in turn drives tumor metastasis. Zebrafish models may be used to increase our understanding of the mechanisms behind hypoxia and circadian regulation of metastasis.\n\n\nGENERAL SIGNIFICANCE\nDisrupted blood flow in tumors is currently seen as a therapeutic goal in cancer treatment, but may drive invasion and metastasis via pathological hypoxia and circadian clock signaling. Understanding the molecular details behind such regulation is important to optimize treatment for patients with solid tumors in the future. This article is part of a Special Issue entitled Redox regulation of differentiation and de-differentiation.",
"title": ""
},
{
"docid": "a398f3f5b670a9d2c9ae8ad84a4a3cb8",
"text": "This project deals with online simultaneous localization and mapping (SLAM) problem without taking any assistance from Global Positioning System (GPS) and Inertial Measurement Unit (IMU). The main aim of this project is to perform online odometry and mapping in real time using a 2-axis lidar mounted on a robot. This involves use of two algorithms, the first of which runs at a higher frequency and uses the collected data to estimate velocity of the lidar which is fed to the second algorithm, a scan registration and mapping algorithm, to perform accurate matching of point cloud data.",
"title": ""
},
{
"docid": "fada1434ec6e060eee9a2431688f82f3",
"text": "Neural language models (NLMs) have been able to improve machine translation (MT) thanks to their ability to generalize well to long contexts. Despite recent successes of deep neural networks in speech and vision, the general practice in MT is to incorporate NLMs with only one or two hidden layers and there have not been clear results on whether having more layers helps. In this paper, we demonstrate that deep NLMs with three or four layers outperform those with fewer layers in terms of both the perplexity and the translation quality. We combine various techniques to successfully train deep NLMs that jointly condition on both the source and target contexts. When reranking nbest lists of a strong web-forum baseline, our deep models yield an average boost of 0.5 TER / 0.5 BLEU points compared to using a shallow NLM. Additionally, we adapt our models to a new sms-chat domain and obtain a similar gain of 1.0 TER / 0.5 BLEU points.",
"title": ""
},
{
"docid": "3ca76a840ac35d94677fa45c767e61f1",
"text": "A three dimensional (3-D) imaging system is implemented by employing 2-D range migration algorithm (RMA) for frequency modulated continuous wave synthetic aperture radar (FMCW-SAR). The backscattered data of a 1-D synthetic aperture at specific altitudes are coherently integrated to form 2-D images. These 2-D images at different altitudes are stitched vertically to form a 3-D image. Numerical simulation for near-field scenario are also presented to validate the proposed algorithm.",
"title": ""
},
{
"docid": "e82681b5140f3a9b283bbd02870f18d5",
"text": "Employee turnover has been identified as a key issue for organizations because of its adverse impact on work place productivity and long term growth strategies. To solve this problem, organizations use machine learning techniques to predict employee turnover. Accurate predictions enable organizations to take action for retention or succession planning of employees. However, the data for this modeling problem comes from HR Information Systems (HRIS); these are typically under-funded compared to the Information Systems of other domains in the organization which are directly related to its priorities. This leads to the prevalence of noise in the data that renders predictive models prone to over-fitting and hence inaccurate. This is the key challenge that is the focus of this paper, and one that has not been addressed historically. The novel contribution of this paper is to explore the application of Extreme Gradient Boosting (XGBoost) technique which is more robust because of its regularization formulation. Data from the HRIS of a global retailer is used to compare XGBoost against six historically used supervised classifiers and demonstrate its significantly higher accuracy for predicting employee turnover. Keywords—turnover prediction; machine learning; extreme gradient boosting; supervised classification; regularization",
"title": ""
},
{
"docid": "ba573c3dd5206e7f71be11d030060484",
"text": "The availability of camera phones provides people with a mobile platform for decoding bar codes, whereas conventional scanners lack mobility. However, using a normal camera phone in such applications is challenging due to the out-of-focus problem. In this paper, we present the research effort on the bar code reading algorithms using a VGA camera phone, NOKIA 7650. EAN-13, a widely used 1D bar code standard, is taken as an example to show the efficiency of the method. A wavelet-based bar code region location and knowledge-based bar code segmentation scheme is applied to extract bar code characters from poor-quality images. All the segmented bar code characters are input to the recognition engine, and based on the recognition distance, the bar code character string with the smallest total distance is output as the final recognition result of the bar code. In order to train an efficient recognition engine, the modified Generalized Learning Vector Quantization (GLVQ) method is designed for optimizing a feature extraction matrix and the class reference vectors. 19 584 samples segmented from more than 1000 bar code images captured by NOKIA 7650 are involved in the training process. Testing on 292 bar code images taken by the same phone, the correct recognition rate of the entire bar code set reaches 85.62%. We are confident that auto focus or macro modes on camera phones will bring the presented method into real world mobile use.",
"title": ""
}
] | scidocsrr |
fb6e29a915d2343b5b0810ff1c8b2bb1 | Gaussian Process Regression for Fingerprinting based Localization | [
{
"docid": "aa3da820fe9e98cb4f817f6a196c18e7",
"text": "Location awareness is an important capability for mobile computing. Yet inexpensive, pervasive positioning—a requirement for wide-scale adoption of location-aware computing—has been elusive. We demonstrate a radio beacon-based approach to location, called Place Lab, that can overcome the lack of ubiquity and high-cost found in existing location sensing approaches. Using Place Lab, commodity laptops, PDAs and cell phones estimate their position by listening for the cell IDs of fixed radio beacons, such as wireless access points, and referencing the beacons’ positions in a cached database. We present experimental results showing that 802.11 and GSM beacons are sufficiently pervasive in the greater Seattle area to achieve 20-40 meter median accuracy with nearly 100% coverage measured by availability in people’s daily",
"title": ""
}
] | [
{
"docid": "9eca9a069f8d1e7bf7c0f0b74e3129f0",
"text": "With increasing use of GPS devices more and more location-based information is accessible. Thus not only more movements of people are tracked but also POI (point of interest) information becomes available in increasing geo-spatial density. To enable analysis of movement behavior, we present an approach to enrich trajectory data with semantic POI information and show how additional insights can be gained. Using a density-based clustering technique we extract 1.215 frequent destinations of ~150.000 user movements from a large e-mobility database. We query available context information from Foursquare, a popular location-based social network, to enrich the destinations with semantic background. As GPS measurements can be noisy, often more then one possible destination is found and movement patterns vary over time. Therefore we present highly interactive visualizations that enable an analyst to cope with the inherent geospatial and behavioral uncertainties.",
"title": ""
},
{
"docid": "fbb6c8566fbe79bf8f78af0dc2dedc7b",
"text": "Automatic essay evaluation (AEE) systems are designed to assist a teacher in the task of classroom assessment in order to alleviate the demands of manual subject evaluation. However, although numerous AEE systems are available, most of these systems do not use elaborate domain knowledge for evaluation, which limits their ability to give informative feedback to students and also their ability to constructively grade a student based on a particular domain of study. This paper is aimed at improving on the achievements of previous studies by providing a subject-focussed evaluation system that considers the domain knowledge while scoring and provides informative feedback to its user. The study employs a combination of techniques such as system design and modelling using Unified Modelling Language (UML), information extraction, ontology development, data management, and semantic matching in order to develop a prototype subject-focussed AEE system. The developed system was evaluated to determine its level of performance and usability. The result of the usability evaluation showed that the system has an overall mean rating of 4.17 out of maximum of 5, which indicates ‘good usability’. In terms of performance, the assessment done by the system was also found to have sufficiently high correlation with those done by domain experts, in addition to providing appropriate feedback to the user.",
"title": ""
},
{
"docid": "1f333e1dbeec98d3733dd78dfd669933",
"text": "Background and objectives: Food poisoning has been always a major concern in health system of every community and cream-filled products are one of the most widespread food poisoning causes in humans. In present study, we examined the preservative effect of the cinnamon oil in cream-filled cakes. Methods: Antimicrobial activity of Cinnamomum verum J. Presl (Cinnamon) bark essential oil was examined against five food-borne pathogens (Staphylococcus aureus, Escherichia coli, Candida albicans, Bacillus cereus and Salmonella typhimurium) to investigate its potential for use as a natural preservative in cream-filled baked goods. Chemical constituents of the oil were determined by gas chromatography/mass spectrometry. For evaluation of preservative sufficiency of the oil, pathogens were added to cream-filled cakes manually and 1 μL/mL of the essential oil was added to all samples except the blank. Results: Chemical constituents of the oil were determined by gas chromatography/mass spectrometry and twenty five components were identified where cinnamaldehyde (79.73%), linalool (4.08%), cinnamaldehyde para-methoxy (2.66%), eugenol (2.37%) and trans-caryophyllene (2.05%) were the major constituents. Cinnamon essential oil showed strong antimicrobial activity against selected pathogens in vitro and the minimum inhibitory concentration values against all tested microorganisms were determined as 0.5 μL/disc except for S. aureus for which, the oil was not effective in tested concentrations. After baking, no observable microorganism was observed in all susceptible microorganisms count in 72h stored samples. Conclusion: It was concluded that by analysing the sensory quality of the preserved food, cinnamon oil may be considered as a natural preservative in food industry, especially for cream-filled cakes and",
"title": ""
},
{
"docid": "accda4f9cb11d92639cf2737c5e8fe78",
"text": "Automatic segmentation in MR brain images is important for quantitative analysis in large-scale studies with images acquired at all ages. This paper presents a method for the automatic segmentation of MR brain images into a number of tissue classes using a convolutional neural network. To ensure that the method obtains accurate segmentation details as well as spatial consistency, the network uses multiple patch sizes and multiple convolution kernel sizes to acquire multi-scale information about each voxel. The method is not dependent on explicit features, but learns to recognise the information that is important for the classification based on training data. The method requires a single anatomical MR image only. The segmentation method is applied to five different data sets: coronal T2-weighted images of preterm infants acquired at 30 weeks postmenstrual age (PMA) and 40 weeks PMA, axial T2-weighted images of preterm infants acquired at 40 weeks PMA, axial T1-weighted images of ageing adults acquired at an average age of 70 years, and T1-weighted images of young adults acquired at an average age of 23 years. The method obtained the following average Dice coefficients over all segmented tissue classes for each data set, respectively: 0.87, 0.82, 0.84, 0.86, and 0.91. The results demonstrate that the method obtains accurate segmentations in all five sets, and hence demonstrates its robustness to differences in age and acquisition protocol.",
"title": ""
},
{
"docid": "ab4fce4bd35bd8dd749bf0357c4b14b6",
"text": "In this paper, we describe and analyze the performance of two iris-encoding techniques. The first technique is based on Principle Component Analysis (PCA) encoding method while the second technique is a combination of Principal Component Analysis with Independent Component Analysis (ICA) following it. Both techniques are applied globally. PCA and ICA are two well known methods used to process a variety of data. Though PCA has been used as a preprocessing step that reduces dimensions for obtaining ICA components for iris, it has never been analyzed in depth as an individual encoding method. In practice PCA and ICA are known as methods that extract global and fine features, respectively. It is shown here that when PCA and ICA methods are used to encode iris images, one of the critical steps required to achieve a good performance is compensation for rotation effect. We further study the effect of varying the image resolution level on the performance of the two encoding methods. The major motivation for this study is the cases in practice where images of the same or different irises taken at different distances have to be compared. The performance of encoding techniques is analyzed using the CASIA dataset. The original images are non-ideal and thus require a sequence of preprocessing steps prior to application of encoding methods. We plot a series of Receiver Operating Characteristics (ROCs) to demonstrate various effects on the performance of the iris-based recognition system implementing PCA and ICA encoding techniques.",
"title": ""
},
{
"docid": "48c157638090b3168b6fd3cb50780184",
"text": "Adverse reactions to drugs are among the most common causes of death in industrialized nations. Expensive clinical trials are not sufficient to uncover all of the adverse reactions a drug may cause, necessitating systems for post-marketing surveillance, or pharmacovigilance. These systems have typically relied on voluntary reporting by health care professionals. However, self-reported patient data has become an increasingly important resource, with efforts such as MedWatch from the FDA allowing reports directly from the consumer. In this paper, we propose mining the relationships between drugs and adverse reactions as reported by the patients themselves in user comments to health-related websites. We evaluate our system on a manually annotated set of user comments, with promising performance. We also report encouraging correlations between the frequency of adverse drug reactions found by our system in unlabeled data and the frequency of documented adverse drug reactions. We conclude that user comments pose a significant natural language processing challenge, but do contain useful extractable information which merits further exploration.",
"title": ""
},
{
"docid": "d6fe99533c66075ffb85faf7c70475f0",
"text": "Outlier detection has received significant attention in many applications, such as detecting credit card fraud or network intrusions. Most existing research focuses on numerical datasets, and cannot directly apply to categorical sets where there is little sense in calculating distances among data points. Furthermore, a number of outlier detection methods require quadratic time with respect to the dataset size and usually multiple dataset scans. These characteristics are undesirable for large datasets, potentially scattered over multiple distributed sites. In this paper, we introduce Attribute Value Frequency (A VF), a fast and scalable outlier detection strategy for categorical data. A VF scales linearly with the number of data points and attributes, and relies on a single data scan. AVF is compared with a list of representative outlier detection approaches that have not been contrasted against each other. Our proposed solution is experimentally shown to be significantly faster, and as effective in discovering outliers.",
"title": ""
},
{
"docid": "341e0b7d04b333376674dac3c0888f50",
"text": "Software archives contain historical information about the development process of a software system. Using data mining techniques rules can be extracted from these archives. In this paper we discuss how standard visualization techniques can be applied to interactively explore these rules. To this end we extended the standard visualization techniques for association rules and sequence rules to also show the hierarchical order of items. Clusters and outliers in the resulting visualizations provide interesting insights into the relation between the temporal development of a system and its static structure. As an example we look at the large software archive of the MOZILLA open source project. Finally we discuss what kind of regularities and anomalies we found and how these can then be leveraged to support software engineers.",
"title": ""
},
{
"docid": "2ea626f0e1c4dfa3d5a23c80d8fbf70c",
"text": "Although research studies in education show that use of technology can help student learning, its use is generally affected by certain barriers. In this paper, we first identify the general barriers typically faced by K-12 schools, both in the United States as well as other countries, when integrating technology into the curriculum for instructional purposes, namely: (a) resources, (b) institution, (c) subject culture, (d) attitudes and beliefs, (e) knowledge and skills, and (f) assessment. We then describe the strategies to overcome such barriers: (a) having a shared vision and technology integration plan, (b) overcoming the scarcity of resources, (c) changing attitudes and beliefs, (d) conducting professional development, and (e) reconsidering assessments. Finally, we identify several current knowledge gaps pertaining to the barriers and strategies of technology integration, and offer pertinent recommendations for future research.",
"title": ""
},
{
"docid": "455a6fe5862e3271ac00057d1b569b11",
"text": "Personalization technologies and recommender systems help online consumers avoid information overload by making suggestions regarding which information is most relevant to them. Most online shopping sites and many other applications now use recommender systems. Two new recommendation techniques leverage multicriteria ratings and improve recommendation accuracy as compared with single-rating recommendation approaches. Taking full advantage of multicriteria ratings in personalization applications requires new recommendation techniques. In this article, we propose several new techniques for extending recommendation technologies to incorporate and leverage multicriteria rating information.",
"title": ""
},
{
"docid": "160e3c3fc9e3a13c4ee961e453532fd1",
"text": "An encephalitis outbreak was investigated in Faridpur District, Bangladesh, in April-May 2004 to determine the cause of the outbreak and risk factors for disease. Biologic specimens were tested for Nipah virus. Surfaces were evaluated for Nipah virus contamination by using reverse transcription-PCR (RT-PCR). Thirty-six cases of Nipah virus illness were identified; 75% of case-patients died. Multiple peaks of illness occurred, and 33 case-patients had close contact with another Nipah virus patient before their illness. Results from a case-control study showed that contact with 1 patient carried the highest risk for infection (odds ratio 6.7, 95% confidence interval 2.9-16.8, p < 0.001). RT-PCR testing of environmental samples confirmed Nipah virus contamination of hospital surfaces. This investigation provides evidence for person-to-person transmission of Nipah virus. Capacity for person-to-person transmission increases the potential for wider spread of this highly lethal pathogen and highlights the need for infection control strategies for resource-poor settings.",
"title": ""
},
{
"docid": "3a0275d7834a6fb1359bb7d3bef14e97",
"text": "With the Internet of Things (IoT) becoming a major component of our daily life, understanding how to improve quality of service (QoS) in IoT networks is becoming a challenging problem. Currently most interaction between the IoT devices and the supporting back-end servers is done through large scale cloud data centers. However, with the exponential growth of IoT devices and the amount of data they produce, communication between \"things\" and cloud will be costly, inefficient, and in some cases infeasible. Fog computing serves as solution for this as it provides computation, storage, and networking resource for IoT, closer to things and users. One of the promising advantages of fog is reducing service delay for end user applications, whereas cloud provides extensive computation and storage capacity with a higher latency. Thus it is necessary to understand the interplay between fog computing and cloud, and to evaluate the effect of fog computing on the IoT service delay and QoS. In this paper we will introduce a general framework for IoT-fog-cloud applications, and propose a delay-minimizing policy for fog-capable devices that aims to reduce the service delay for IoT applications. We then develop an analytical model to evaluate our policy and show how the proposed framework helps to reduce IoT service delay.",
"title": ""
},
{
"docid": "31cf550d44266e967716560faeb30f2b",
"text": "The explosion in workload complexity and the recent slow-down in Moore’s law scaling call for new approaches towards efficient computing. Researchers are now beginning to use recent advances in machine learning in software optimizations, augmenting or replacing traditional heuristics and data structures. However, the space of machine learning for computer hardware architecture is only lightly explored. In this paper, we demonstrate the potential of deep learning to address the von Neumann bottleneck of memory performance. We focus on the critical problem of learning memory access patterns, with the goal of constructing accurate and efficient memory prefetchers. We relate contemporary prefetching strategies to n-gram models in natural language processing, and show how recurrent neural networks can serve as a drop-in replacement. On a suite of challenging benchmark datasets, we find that neural networks consistently demonstrate superior performance in terms of precision and recall. This work represents the first step towards practical neural-network based prefetching, and opens a wide range of exciting directions for machine learning in computer architecture research.",
"title": ""
},
{
"docid": "8c3fb435c46c8ff3c509a2bfeb6625d7",
"text": "The objective of this study was to quantify the electrode-tissue interface impedance of electrodes used for deep brain stimulation (DBS). We measured the impedance of DBS electrodes using electrochemical impedance spectroscopy in vitro in a carbonate- and phosphate-buffered saline solution and in vivo following acute implantation in the brain. The components of the impedance, including the series resistance (R(s)), the Faradaic resistance (R(f)) and the double layer capacitance (C(dl)), were estimated using an equivalent electrical circuit. Both R(f) and C(dl) decreased as the sinusoidal frequency was increased, but the ratio of the capacitive charge transfer to the Faradaic charge transfer was relatively insensitive to the change of frequency. R(f) decreased and C(dl) increased as the current density was increased, and above a critical current density the interface impedance became nonlinear. Thus, the magnitude of the interface impedance was strongly dependent on the intensity (pulse amplitude and duration) of stimulation. The temporal dependence and spatial non-uniformity of R(f) and C(dl) suggested that a distributed network, with each element of the network having dynamics tailored to a specific stimulus waveform, is required to describe adequately the impedance of the DBS electrode-tissue interface. Voltage transients to biphasic square current pulses were measured and suggested that the electrode-tissue interface did not operate in a linear range at clinically relevant current amplitudes, and that the assumption of the DBS electrode being ideally polarizable was not valid under clinical stimulating conditions.",
"title": ""
},
{
"docid": "89a1e91c2ab1393f28a6381ba94de12d",
"text": "In this paper, a simulation environment encompassing realistic propagation conditions and system parameters is employed in order to analyze the performance of future multigigabit indoor communication systems at tetrahertz frequencies. The influence of high-gain antennas on transmission aspects is investigated. Transmitter position for optimal signal coverage is also analyzed. Furthermore, signal coverage maps and achievable data rates are calculated for generic indoor scenarios with and without furniture for a variety of possible propagation conditions.",
"title": ""
},
{
"docid": "0251f38f48c470e2e04fb14fc7ba34b2",
"text": "The fast development of Internet of Things (IoT) and cyber-physical systems (CPS) has triggered a large demand of smart devices which are loaded with sensors collecting information from their surroundings, processing it and relaying it to remote locations for further analysis. The wide deployment of IoT devices and the pressure of time to market of device development have raised security and privacy concerns. In order to help better understand the security vulnerabilities of existing IoT devices and promote the development of low-cost IoT security methods, in this paper, we use both commercial and industrial IoT devices as examples from which the security of hardware, software, and networks are analyzed and backdoors are identified. A detailed security analysis procedure will be elaborated on a home automation system and a smart meter proving that security vulnerabilities are a common problem for most devices. Security solutions and mitigation methods will also be discussed to help IoT manufacturers secure their products.",
"title": ""
},
{
"docid": "cb702c48a242c463dfe1ac1f208acaa2",
"text": "In 2011, Lake Erie experienced the largest harmful algal bloom in its recorded history, with a peak intensity over three times greater than any previously observed bloom. Here we show that long-term trends in agricultural practices are consistent with increasing phosphorus loading to the western basin of the lake, and that these trends, coupled with meteorological conditions in spring 2011, produced record-breaking nutrient loads. An extended period of weak lake circulation then led to abnormally long residence times that incubated the bloom, and warm and quiescent conditions after bloom onset allowed algae to remain near the top of the water column and prevented flushing of nutrients from the system. We further find that all of these factors are consistent with expected future conditions. If a scientifically guided management plan to mitigate these impacts is not implemented, we can therefore expect this bloom to be a harbinger of future blooms in Lake Erie.",
"title": ""
},
{
"docid": "a506f3f6c401f83eaba830abb20c8fff",
"text": "The mechanisms governing the recruitment of functional glutamate receptors at nascent excitatory postsynapses following initial axon-dendrite contact remain unclear. We examined here the ability of neurexin/neuroligin adhesions to mobilize AMPA-type glutamate receptors (AMPARs) at postsynapses through a diffusion/trap process involving the scaffold molecule PSD-95. Using single nanoparticle tracking in primary rat and mouse hippocampal neurons overexpressing or lacking neuroligin-1 (Nlg1), a striking inverse correlation was found between AMPAR diffusion and Nlg1 expression level. The use of Nlg1 mutants and inhibitory RNAs against PSD-95 demonstrated that this effect depended on intact Nlg1/PSD-95 interactions. Furthermore, functional AMPARs were recruited within 1 h at nascent Nlg1/PSD-95 clusters assembled by neurexin-1β multimers, a process requiring AMPAR membrane diffusion. Triggering novel neurexin/neuroligin adhesions also caused a depletion of PSD-95 from native synapses and a drop in AMPAR miniature EPSCs, indicating a competitive mechanism. Finally, both AMPAR level at synapses and AMPAR-dependent synaptic transmission were diminished in hippocampal slices from newborn Nlg1 knock-out mice, confirming an important role of Nlg1 in driving AMPARs to nascent synapses. Together, these data reveal a mechanism by which membrane-diffusing AMPARs can be rapidly trapped at PSD-95 scaffolds assembled at nascent neurexin/neuroligin adhesions, in competition with existing synapses.",
"title": ""
},
{
"docid": "149073f577d0e1fb380ae395ff1ca0c5",
"text": "A complete kinematic model of the 5 DOF-Mitsubishi RV-M1 manipulator is presented in this paper. The forward kinematic model is based on the Modified Denavit-Hartenberg notation, and the inverse one is derived in closed form by fixing the orientation of the tool. A graphical interface is developed using MATHEMATICA software to illustrate the forward and inverse kinematics, allowing student or researcher to have hands-on of virtual graphical model that fully describe both the robot's geometry and the robot's motion in its workspace before to tackle any real task.",
"title": ""
},
{
"docid": "4d6e9bc0a8c55e65d070d1776e781173",
"text": "As electronic device feature sizes scale-down, the power consumed due to onchip communications as compared to computations will increase dramatically; likewise, the available bandwidth per computational operation will continue to decrease. Integrated photonics can offer savings in power and potential increase in bandwidth for onchip networks. Classical diffraction-limited photonics currently utilized in photonic integrated circuits (PIC) is characterized by bulky and inefficient devices compared to their electronic counterparts due to weak light matter interactions (LMI). Performance critical for the PIC is electro-optic modulators (EOM), whose performances depend inherently on enhancing LMIs. Current EOMs based on diffraction-limited optical modes often deploy ring resonators and are consequently bulky, photon-lifetime modulation limited, and power inefficient due to large electrical...",
"title": ""
}
] | scidocsrr |
71901f57a6acfafe99eb5e4efad3f2f5 | Vision-Based Autonomous Navigation System Using ANN and FSM Control | [
{
"docid": "c4feca5e27cfecdd2913e18cc7b7a21a",
"text": "one component of intelligent transportation systems, IV systems use sensing and intelligent algorithms to understand the vehicle’s immediate environment, either assisting the driver or fully controlling the vehicle. Following the success of information-oriented systems, IV systems will likely be the “next wave” for ITS, functioning at the control layer to enable the driver–vehicle “subsystem” to operate more effectively. This column provides a broad overview of applications and selected activities in this field. IV application areas",
"title": ""
}
] | [
{
"docid": "6f34ef57fcf0a2429e7dc2a3e56a99fd",
"text": "Service-Oriented Architecture (SOA) provides a flexible framework for service composition. Using standard-based protocols (such as SOAP and WSDL), composite services can be constructed by integrating atomic services developed independently. Algorithms are needed to select service components with various QoS levels according to some application-dependent performance requirements. We design a broker-based architecture to facilitate the selection of QoS-based services. The objective of service selection is to maximize an application-specific utility function under the end-to-end QoS constraints. The problem is modeled in two ways: the combinatorial model and the graph model. The combinatorial model defines the problem as a multidimension multichoice 0-1 knapsack problem (MMKP). The graph model defines the problem as a multiconstraint optimal path (MCOP) problem. Efficient heuristic algorithms for service processes of different composition structures are presented in this article and their performances are studied by simulations. We also compare the pros and cons between the two models.",
"title": ""
},
{
"docid": "caae0254ea28dad0abf2f65fcadc7971",
"text": "Deregulation within the financial service industries and the widespread acceptance of new technologies is increasing competition in the finance marketplace. Central to the business strategy of every financial service company is the ability to retain existing customers and reach new prospective customers. Data mining is adopted to play an important role in these efforts. In this paper, we present a data mining approach for analyzing retailing bank customer attrition. We discuss the challenging issues such as highly skewed data, time series data unrolling, leaker field detection etc, and the procedure of a data mining project for the attrition analysis for retailing bank customers. We use lift as a proper measure for attrition analysis and compare the lift of data mining models of decision tree, boosted naïve Bayesian network, selective Bayesian network, neural network and the ensemble of classifiers of the above methods. Some interesting findings are reported. Our research work demonstrates the effectiveness and efficiency of data mining in attrition analysis for retailing bank.",
"title": ""
},
{
"docid": "6fd3f4ab064535d38c01f03c0135826f",
"text": "BACKGROUND\nThere is evidence of under-detection and poor management of pain in patients with dementia, in both long-term and acute care. Accurate assessment of pain in people with dementia is challenging and pain assessment tools have received considerable attention over the years, with an increasing number of tools made available. Systematic reviews on the evidence of their validity and utility mostly compare different sets of tools. This review of systematic reviews analyses and summarises evidence concerning the psychometric properties and clinical utility of pain assessment tools in adults with dementia or cognitive impairment.\n\n\nMETHODS\nWe searched for systematic reviews of pain assessment tools providing evidence of reliability, validity and clinical utility. Two reviewers independently assessed each review and extracted data from them, with a third reviewer mediating when consensus was not reached. Analysis of the data was carried out collaboratively. The reviews were synthesised using a narrative synthesis approach.\n\n\nRESULTS\nWe retrieved 441 potentially eligible reviews, 23 met the criteria for inclusion and 8 provided data for extraction. Each review evaluated between 8 and 13 tools, in aggregate providing evidence on a total of 28 tools. The quality of the reviews varied and the reporting often lacked sufficient methodological detail for quality assessment. The 28 tools appear to have been studied in a variety of settings and with varied types of patients. The reviews identified several methodological limitations across the original studies. The lack of a 'gold standard' significantly hinders the evaluation of tools' validity. Most importantly, the samples were small providing limited evidence for use of any of the tools across settings or populations.\n\n\nCONCLUSIONS\nThere are a considerable number of pain assessment tools available for use with the elderly cognitive impaired population. However there is limited evidence about their reliability, validity and clinical utility. On the basis of this review no one tool can be recommended given the existing evidence.",
"title": ""
},
{
"docid": "1de5bb16d9304cbfc7c2854ea02f4e5c",
"text": "Language acquisition is one of the most fundamental human traits, and it is obviously the brain that undergoes the developmental changes. During the years of language acquisition, the brain not only stores linguistic information but also adapts to the grammatical regularities of language. Recent advances in functional neuroimaging have substantially contributed to systems-level analyses of brain development. In this Viewpoint, I review the current understanding of how the \"final state\" of language acquisition is represented in the mature brain and summarize new findings on cortical plasticity for second language acquisition, focusing particularly on the function of the grammar center.",
"title": ""
},
{
"docid": "25793a93fec7a1ccea0869252a8a0141",
"text": "Condition monitoring of induction motors is a fast emerging technology for online detection of incipient faults. It avoids unexpected failure of a critical system. Approximately 30-40% of faults of induction motors are stator faults. This work presents a comprehensive review of various stator faults, their causes, detection parameters/techniques, and latest trends in the condition monitoring technology. It is aimed at providing a broad perspective on the status of stator fault monitoring to researchers and application engineers using induction motors. A list of 183 research publications on the subject is appended for quick reference.",
"title": ""
},
{
"docid": "857132b27d87727454ec3019e52039ba",
"text": "In this paper we will introduce an ensemble of codes called irregular repeat-accumulate (IRA) codes. IRA codes are a generalization of the repeat-accumluate codes introduced in [1], and as such have a natural linear-time encoding algorithm. We shall prove that on the binary erasure channel, IRA codes can be decoded reliably in linear time, using iterative sum-product decoding, at rates arbitrarily close to channel capacity. A similar result appears to be true on the AWGN channel, although we have no proof of this. We illustrate our results with numerical and experimental examples.",
"title": ""
},
{
"docid": "643599f9b0dcfd270f9f3c55567ed985",
"text": "OBJECTIVES\nTo describe a new first-trimester sonographic landmark, the retronasal triangle, which may be useful in the early screening for cleft palate.\n\n\nMETHODS\nThe retronasal triangle, i.e. the three echogenic lines formed by the two frontal processes of the maxilla and the palate visualized in the coronal view of the fetal face posterior to the nose, was evaluated prospectively in 100 consecutive normal fetuses at the time of routine first-trimester sonographic screening at 11 + 0 to 13 + 6 weeks' gestation. In a separate study of five fetuses confirmed postnatally as having a cleft palate, ultrasound images, including multiplanar three-dimensional views, were analyzed retrospectively to review the retronasal triangle.\n\n\nRESULTS\nNone of the fetuses evaluated prospectively was affected by cleft lip and palate. During their first-trimester scan, the retronasal triangle could not be identified in only two fetuses. Reasons for suboptimal visualization of this area included early gestational age at scanning (11 weeks) and persistent posterior position of the fetal face. Of the five cases with postnatal diagnosis of cleft palate, an abnormal configuration of the retronasal triangle was documented in all cases on analysis of digitally stored three-dimensional volumes.\n\n\nCONCLUSIONS\nThis study demonstrates the feasibility of incorporating evaluation of the retronasal triangle into the routine evaluation of the fetal anatomy at 11 + 0 to 13 + 6 weeks' gestation. Because fetuses with cleft palate have an abnormal configuration of the retronasal triangle, focused examination of the midface, looking for this area at the time of the nuchal translucency scan, may facilitate the early detection of cleft palate in the first trimester.",
"title": ""
},
{
"docid": "d83d672642531e1744afe77ed8379b90",
"text": "Customer churn prediction in Telecom Industry is a core research topic in recent years. A huge amount of data is generated in Telecom Industry every minute. On the other hand, there is lots of development in data mining techniques. Customer churn has emerged as one of the major issues in Telecom Industry. Telecom research indicates that it is more expensive to gain a new customer than to retain an existing one. In order to retain existing customers, Telecom providers need to know the reasons of churn, which can be realized through the knowledge extracted from Telecom data. This paper surveys the commonly used data mining techniques to identify customer churn patterns. The recent literature in the area of predictive data mining techniques in customer churn behavior is reviewed and a discussion on the future research directions is offered.",
"title": ""
},
{
"docid": "2a7dce77aaff56b810f4a80c32dc80ea",
"text": "Automatically segmenting and classifying clinical free text into sections is an important first step to automatic information retrieval, information extraction and data mining tasks, as it helps to ground the significance of the text within. In this work we describe our approach to automatic section segmentation of clinical records such as hospital discharge summaries and radiology reports, along with section classification into pre-defined section categories. We apply machine learning to the problems of section segmentation and section classification, comparing a joint (one-step) and a pipeline (two-step) approach. We demonstrate that our systems perform well when tested on three data sets, two for hospital discharge summaries and one for radiology reports. We then show the usefulness of section information by incorporating it in the task of extracting comorbidities from discharge summaries.",
"title": ""
},
{
"docid": "277cec4e1df1bfe15376cba3cd23fa85",
"text": "In this paper, we report the development, evaluation, and application of ultra-small low-power wireless sensor nodes for advancing animal husbandry, as well as for innovation of medical technologies. A radio frequency identification (RFID) chip with hybrid interface and neglectable power consumption was introduced to enable switching of ON/OFF and measurement mode after implantation. A wireless power transmission system with a maximum efficiency of 70% and an access distance of up to 5 cm was developed to allow the sensor node to survive for a duration of several weeks from a few minutes' remote charge. The results of field tests using laboratory mice and a cow indicated the high accuracy of the collected biological data and bio-compatibility of the package. As a result of extensive application of the above technologies, a fully solid wireless pH sensor and a surgical navigation system using artificial magnetic field and a 3D MEMS magnetic sensor are introduced in this paper, and the preliminary experimental results are presented and discussed.",
"title": ""
},
{
"docid": "127692e52e1dfb3d71be11e67b1013e6",
"text": "Internet social networks may be an abundant source of opportunities giving space to the “parallel world” which can and, in many ways, does surpass the realty. People share data about almost every aspect of their lives, starting with giving opinions and comments on global problems and events, friends tagging at locations up to the point of multimedia personalized content. Therefore, decentralized mini-campaigns about educational, cultural, political and sports novelties could be conducted. In this paper we have applied clustering algorithm to social network profiles with the aim of obtaining separate groups of people with different opinions about political views and parties. For network case, where some centroids are interconnected, we have implemented edge constraints into classical k-means algorithm. This approach enables fast and effective information analysis about the present state of affairs, but also discovers new tendencies in observed political sphere. All profile data, friendships, fanpage likes and statuses with interactions are collected by already developed software for neurolinguistics social network analysis “Symbols”.",
"title": ""
},
{
"docid": "a40e91ecac0f70e04cc1241797786e77",
"text": "In much of his writings on poverty, famines, and malnutrition, Amartya Sen argues that Democracy is the best way to avoid famines partly because of its ability to use a free press, and that the Indian experience since independence confirms this. His argument is partly empirical, but also relies on some a priori assumptions about human motivation. In his “Democracy as a Universal Value” he claims: Famines are easy to prevent if there is a serious effort to do so, and a democratic government, facing elections and criticisms from opposition parties and independent newspapers, cannot help but make such an effort. Not surprisingly, while India continued to have famines under British rule right up to independence ...they disappeared suddenly with the establishment of a multiparty democracy and a free press.",
"title": ""
},
{
"docid": "da04a904a236c9b4c3c335eb7c65246e",
"text": "BACKGROUND\nIdentifying the emotional state is helpful in applications involving patients with autism and other intellectual disabilities; computer-based training, human computer interaction etc. Electrocardiogram (ECG) signals, being an activity of the autonomous nervous system (ANS), reflect the underlying true emotional state of a person. However, the performance of various methods developed so far lacks accuracy, and more robust methods need to be developed to identify the emotional pattern associated with ECG signals.\n\n\nMETHODS\nEmotional ECG data was obtained from sixty participants by inducing the six basic emotional states (happiness, sadness, fear, disgust, surprise and neutral) using audio-visual stimuli. The non-linear feature 'Hurst' was computed using Rescaled Range Statistics (RRS) and Finite Variance Scaling (FVS) methods. New Hurst features were proposed by combining the existing RRS and FVS methods with Higher Order Statistics (HOS). The features were then classified using four classifiers - Bayesian Classifier, Regression Tree, K- nearest neighbor and Fuzzy K-nearest neighbor. Seventy percent of the features were used for training and thirty percent for testing the algorithm.\n\n\nRESULTS\nAnalysis of Variance (ANOVA) conveyed that Hurst and the proposed features were statistically significant (p < 0.001). Hurst computed using RRS and FVS methods showed similar classification accuracy. The features obtained by combining FVS and HOS performed better with a maximum accuracy of 92.87% and 76.45% for classifying the six emotional states using random and subject independent validation respectively.\n\n\nCONCLUSIONS\nThe results indicate that the combination of non-linear analysis and HOS tend to capture the finer emotional changes that can be seen in healthy ECG data. This work can be further fine tuned to develop a real time system.",
"title": ""
},
{
"docid": "10da9f0fd1be99878e280d261ea81ba3",
"text": "The fuzzy vault scheme is a cryptographic primitive being considered for storing fingerprint minutiae protected. A well-known problem of the fuzzy vault scheme is its vulnerability against correlation attack -based cross-matching thereby conflicting with the unlinkability requirement and irreversibility requirement of effective biometric information protection. Yet, it has been demonstrated that in principle a minutiae-based fuzzy vault can be secured against the correlation attack by passing the to-beprotected minutiae through a quantization scheme. Unfortunately, single fingerprints seem not to be capable of providing an acceptable security level against offline attacks. To overcome the aforementioned security issues, this paper shows how an implementation for multiple fingerprints can be derived on base of the implementation for single finger thereby making use of a Guruswami-Sudan algorithm-based decoder for verification. The implementation, of which public C++ source code can be downloaded, is evaluated for single and various multi-finger settings using the MCYTFingerprint-100 database and provides security enhancing features such as the possibility of combination with password and a slow-down mechanism.",
"title": ""
},
{
"docid": "c695f74a41412606e31c771ec9d2b6d3",
"text": "Osteochondrosis dissecans (OCD) is a form of osteochondrosis limited to the articular epiphysis. The most commonly affected areas include, in decreasing order of frequency, the femoral condyles, talar dome and capitellum of the humerus. OCD rarely occurs in the shoulder joint, where it involves either the humeral head or the glenoid. The purpose of this report is to present a case with glenoid cavity osteochondritis dissecans and clinical and radiological outcome after arthroscopic debridement. The patient underwent arthroscopy to remove the loose body and to microfracture the cavity. The patient was followed-up for 4 years and she is pain-free with full range of motion and a stable shoulder joint.",
"title": ""
},
{
"docid": "12350d889ee7e66eeda886e1e3b03ff5",
"text": "With the rapid development of cloud storage, more and more data owners store their data on the remote cloud, that can reduce data owners’ overhead because the cloud server maintaining the data for them, e.g., storing, updating and deletion. However, that leads to data deletion becomes a security challenge because the cloud server may not delete the data honestly for financial incentives. Recently, plenty of research works have been done on secure data deletion. However, most of the existing methods can be summarized with the same protocol essentially, which called “one-bit-return” protocol: the storage server deletes the data and returns a one-bit result. The data owner has to believe the returned result because he cannot verify it. In this paper, we propose a novel blockchain-based data deletion scheme, which can make the deletion operation more transparent. In our scheme, the data owner can verify the deletion result no matter how malevolently the cloud server behaves. Besides, with the application of blockchain, the proposed scheme can achieve public verification without any trusted third party.",
"title": ""
},
{
"docid": "6d813684a21e3ccc7fb2e09c866be1f1",
"text": "Cross-site scripting (XSS) is a code injection attack that allows an attacker to execute malicious script in another user’s browser. Once the attacker gains control over the Website vulnerable to XSS attack, it can perform actions like cookie-stealing, malware-spreading, session-hijacking and malicious redirection. Malicious JavaScripts are the most conventional ways of performing XSS attacks. Although several approaches have been proposed, XSS is still a live problem since it is very easy to implement, but di cult to detect. In this paper, we propose an e↵ective approach for XSS attack detection. Our method focuses on balancing the load between client and the server. Our method performs an initial checking in the client side for vulnerability using divergence measure. If the suspicion level exceeds beyond a threshold value, then the request is discarded. Otherwise, it is forwarded to the proxy for further processing. In our approach we introduce an attribute clustering method supported by rank aggregation technique to detect confounded JavaScripts. The approach is validated using real life data.",
"title": ""
},
{
"docid": "c68633905f8bbb759c71388819e9bfa9",
"text": "An additional mechanical mechanism for a passive parallelogram-based exoskeleton arm-support is presented. It consists of several levers and joints and an attached extension coil spring. The additional mechanism has two favourable features. On the one hand it exhibits an almost iso-elastic behaviour whereby the lifting force of the mechanism is constant for a wide working range. Secondly, the value of the supporting force can be varied by a simple linear movement of a supporting joint. Furthermore a standard tension spring can be used to gain the desired behavior. The additional mechanism is a 4-link mechanism affixed to one end of the spring within the parallelogram arm-support. It has several geometrical parameters which influence the overall behaviour. A standard optimisation routine with constraints on the parameters is used to find an optimal set of geometrical parameters. Based on the optimized geometrical parameters a prototype was constructed and tested. It is a lightweight wearable system, with a weight of 1.9 kg. Detailed experiments reveal a difference between measured and calculated forces. These variations can be explained by a 60 % higher pre load force of the tension spring and a geometrical offset in the construction.",
"title": ""
},
{
"docid": "a964f8aeb9d48c739716445adc58e98c",
"text": "A passive aeration composting study was undertaken to investigate the effects of aeration pipe orientation (PO) and perforation size (PS) on some physico-chemical properties of chicken litter (chicken manure + sawdust) during composting. The experimental set up was a two-factor completely randomised block design with two pipe orientations: horizontal (Ho) and vertical (Ve), and three perforation sizes: 15, 25 and 35 mm diameter. The properties monitored during composting were pile temperature, moisture content (MC), pH, electrical conductivity (EC), total carbon (C(T)), total nitrogen (N(T)) and total phosphorus (P(T)). Moisture level in the piles was periodically replenished to 60% for efficient microbial activities. The results of the study showed that optimum composting conditions (thermophilic temperatures and sanitation requirements) were attained in all the piles. During composting, both PO and PS significantly affected pile temperature, moisture level, pH, C(T) loss and P(T) gain. EC was only affected by PO while N(T) was affected by PS. Neither PO nor PS had a significant effect on the C:N ratio. A vertical pipe was effective for uniform air distribution, hence, uniform composting rate within the composting pile. The final values showed that PO of Ve and PS of 35 mm diameter resulted in the least loss in N(T). The PO of Ho was as effective as Ve in the conservation of C(T) and P(T). Similarly, the three PSs were equally effective in the conservation of C(T) and P(T). In conclusion, the combined effects of PO and PS showed that treatments Ve35 and Ve15 were the most effective in minimizing N(T) loss.",
"title": ""
},
{
"docid": "f2ad701c00cf7cff75ddb8eba073a408",
"text": "One of the high efficiency motors that were introduced to the industry in recent times is Line Start Permanent Magnet Synchronous Motor (LS-PMSM). Fault detection of LS-PMSM is one of interesting issues. This article presents a new technique for broken rotor bar detection based on the values of Mean and RMS features obtained from captured start-up current in the time domain. The extracted features were analyzed using analysis of variance method to predict the motor condition. Starting load condition and its interaction on detection of broken rotor bar were also investigated. The statistical evaluation of means for each feature at different conditions was performed using Tukey's method as post-hoc procedure. The result showed that the applied features could able to detect the broken rotor bar fault in LS-PMSMs.",
"title": ""
}
] | scidocsrr |
534d8debd1364fafb2acd2fe01e62619 | Cost-Efficient Strategies for Restraining Rumor Spreading in Mobile Social Networks | [
{
"docid": "d056e5ea017eb3e5609dcc978e589158",
"text": "In this paper we study and evaluate rumor-like methods for combating the spread of rumors on a social network. We model rumor spread as a diffusion process on a network and suggest the use of an \"anti-rumor\" process similar to the rumor process. We study two natural models by which these anti-rumors may arise. The main metrics we study are the belief time, i.e., the duration for which a person believes the rumor to be true and point of decline, i.e., point after which anti-rumor process dominates the rumor process. We evaluate our methods by simulating rumor spread and anti-rumor spread on a data set derived from the social networking site Twitter and on a synthetic network generated according to the Watts and Strogatz model. We find that the lifetime of a rumor increases if the delay in detecting it increases, and the relationship is at least linear. Further our findings show that coupling the detection and anti-rumor strategy by embedding agents in the network, we call them beacons, is an effective means of fighting the spread of rumor, even if these beacons do not share information.",
"title": ""
}
] | [
{
"docid": "37e644b7b2d47e6830e30ae191bc453c",
"text": "Technological forecasting is now poised to respond to the emerging needs of private and public sector organizations in the highly competitive global environment. The history of the subject and its variant forms, including impact assessment, national foresight studies, roadmapping, and competitive technological intelligence, shows how it has responded to changing institutional motivations. Renewed focus on innovation, attention to science-based opportunities, and broad social and political factors will bring renewed attention to technological forecasting in industry, government, and academia. Promising new tools are anticipated, borrowing variously from fields such as political science, computer science, scientometrics, innovation management, and complexity science. 2001 Elsevier Science Inc. Introduction Technological forecasting—its purpose, methods, terminology, and uses—will be shaped in the future, as in the past, by the needs of corporations and government agencies.1 These have a continual pressing need to anticipate and cope with the direction and rate of technological change. The future of technological forecasting will also depend on the views of the public and their elected representatives about technological progress, economic competition, and the government’s role in technological development. In the context of this article, “technological forecasting” (TF) includes several new forms—for example, national foresight studies, roadmapping, and competitive technological intelligence—that have evolved to meet the changing demands of user institutions. It also encompasses technology assessment (TA) or social impact analysis, which emphasizes the downstream effects of technology’s invention, innovation, and evolution. VARY COATES is associated with the Institute for Technology Assessment, Washington, DC. MAHMUD FAROQUE is with George Mason University, Fairfax, VA. RICHARD KLAVANS is with CRP, Philadelphia, PA. KOTY LAPID is with Softblock, Beer Sheba, Israel. HAROLD LINSTONE is with Portland State University. CARL PISTORIUS is with the University of Pretoria, South Africa. ALAN PORTER is with the Georgia Institute of Technology, Atlanta, GA. We also thank Joseph Coates and Joseph Martino for helpful critiques. 1 The term “technological forecasting” is used in this article to apply to all purposeful and systematic attempts to anticipate and understand the potential direction, rate, characteristics, and effects of technological change, especially invention, innovation, adoption, and use. No distinction is intended between “technological forecasting” “technology forecasting,” or “technology foresight,” except as specifically described in the text. Technological Forecasting and Social Change 67, 1–17 (2001) 2001 Elsevier Science Inc. All rights reserved. 0040-1625/01/$–see front matter 655 Avenue of the Americas, New York, NY 10010 PII S0040-1625(00)00122-0",
"title": ""
},
{
"docid": "fdbb5f67eb2f9b651c0d2e1cf8077923",
"text": "The periodical maintenance of railway systems is very important in terms of maintaining safe and comfortable transportation. In particular, the monitoring and diagnosis of faults in the pantograph catenary system are required to provide a transmission from the catenary line to the electric energy locomotive. Surface wear that is caused by the interaction between the pantograph and catenary and nonuniform distribution on the surface of a pantograph of the contact points can cause serious accidents. In this paper, a novel approach is proposed for image processing-based monitoring and fault diagnosis in terms of the interaction and contact points between the pantograph and catenary in a moving train. For this purpose, the proposed method consists of two stages. In the first stage, the pantograph catenary interaction has been modeled; the simulation results were given a failure analysis with a variety of scenarios. In the second stage, the contact points between the pantograph and catenary were detected and implemented in real time with image processing algorithms using actual video images. The pantograph surface for a fault analysis was divided into three regions: safe, dangerous, and fault. The fault analysis of the system was presented using the number of contact points in each region. The experimental results demonstrate the effectiveness, applicability, and performance of the proposed approach.",
"title": ""
},
{
"docid": "7c5ce3005c4529e0c34220c538412a26",
"text": "Six studies investigate whether and how distant future time perspective facilitates abstract thinking and impedes concrete thinking by altering the level at which mental representations are construed. In Experiments 1-3, participants who envisioned their lives and imagined themselves engaging in a task 1 year later as opposed to the next day subsequently performed better on a series of insight tasks. In Experiments 4 and 5 a distal perspective was found to improve creative generation of abstract solutions. Moreover, Experiment 5 demonstrated a similar effect with temporal distance manipulated indirectly, by making participants imagine their lives in general a year from now versus tomorrow prior to performance. In Experiment 6, distant time perspective undermined rather than enhanced analytical problem solving.",
"title": ""
},
{
"docid": "062d366387e6161ba6faadc32c53e820",
"text": "Image processing has been proved to be effective tool for analysis in various fields and applications. Agriculture sector where the parameters like canopy, yield, quality of product were the important measures from the farmers' point of view. Many times expert advice may not be affordable, majority times the availability of expert and their services may consume time. Image processing along with availability of communication network can change the situation of getting the expert advice well within time and at affordable cost since image processing was the effective tool for analysis of parameters. This paper intends to focus on the survey of application of image processing in agriculture field such as imaging techniques, weed detection and fruit grading. The analysis of the parameters has proved to be accurate and less time consuming as compared to traditional methods. Application of image processing can improve decision making for vegetation measurement, irrigation, fruit sorting, etc.",
"title": ""
},
{
"docid": "7dfbb5e01383b5f50dbeb87d55ceb719",
"text": "In recent years, a number of network forensics techniques have been proposed to investigate the increasing number of cybercrimes. Network forensics techniques assist in tracking internal and external network attacks by focusing on inherent network vulnerabilities and communication mechanisms. However, investigation of cybercrime becomes more challenging when cyber criminals erase the traces in order to avoid detection. Therefore, network forensics techniques employ mechanisms to facilitate investigation by recording every single packet and event that is disseminated into the network. As a result, it allows identification of the origin of the attack through reconstruction of the recorded data. In the current literature, network forensics techniques are studied on the basis of forensic tools, process models and framework implementations. However, a comprehensive study of cybercrime investigation using network forensics frameworks along with a critical review of present network forensics techniques is lacking. In other words, our study is motivated by the diversity of digital evidence and the difficulty of addressing numerous attacks in the network using network forensics techniques. Therefore, this paper reviews the fundamental mechanism of network forensics techniques to determine how network attacks are identified in the network. Through an extensive review of related literature, a thematic taxonomy is proposed for the classification of current network forensics techniques based on its implementation as well as target data sets involved in the conducting of forensic investigations. The critical aspects and significant features of the current network forensics techniques are investigated using qualitative analysis technique. We derive significant parameters from the literature for discussing the similarities and differences in existing network forensics techniques. The parameters include framework nature, mechanism, target dataset, target instance, forensic processing, time of investigation, execution definition, and objective function. Finally, open research challenges are discussed in network forensics to assist researchers in selecting the appropriate domains for further research and obtain ideas for exploring optimal techniques for investigating cyber-crimes. & 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "fcf8649ff7c2972e6ef73f837a3d3f4d",
"text": "The kitchen environment is one of the scenarios in the home where users can benefit from Ambient Assisted Living (AAL) applications. Moreover, it is the place where old people suffer from most domestic injuries. This paper presents a novel design, implementation and assessment of a Smart Kitchen which provides Ambient Assisted Living services; a smart environment that increases elderly and disabled people's autonomy in their kitchen-related activities through context and user awareness, appropriate user interaction and artificial intelligence. It is based on a modular architecture which integrates a wide variety of home technology (household appliances, sensors, user interfaces, etc.) and associated communication standards and media (power line, radio frequency, infrared and cabled). Its software architecture is based on the Open Services Gateway initiative (OSGi), which allows building a complex system composed of small modules, each one providing the specific functionalities required, and can be easily scaled to meet our needs. The system has been evaluated by a large number of real users (63) and carers (31) in two living labs in Spain and UK. Results show a large potential of system functionalities combined with good usability and physical, sensory and cognitive accessibility.",
"title": ""
},
{
"docid": "2dfad4f4b0d69085341dfb64d6b37d54",
"text": "Modern applications and progress in deep learning research have created renewed interest for generative models of text and of images. However, even today it is unclear what objective functions one should use to train and evaluate these models. In this paper we present two contributions. Firstly, we present a critique of scheduled sampling, a state-of-the-art training method that contributed to the winning entry to the MSCOCO image captioning benchmark in 2015. Here we show that despite this impressive empirical performance, the objective function underlying scheduled sampling is improper and leads to an inconsistent learning algorithm. Secondly, we revisit the problems that scheduled sampling was meant to address, and present an alternative interpretation. We argue that maximum likelihood is an inappropriate training objective when the end-goal is to generate natural-looking samples. We go on to derive an ideal objective function to use in this situation instead. We introduce a generalisation of adversarial training, and show how such method can interpolate between maximum likelihood training and our ideal training objective. To our knowledge this is the first theoretical analysis that explains why adversarial training tends to produce samples with higher perceived quality.",
"title": ""
},
{
"docid": "3654827519075eac6bfe5ee442c6d4b2",
"text": "We examined the relations among phonological awareness, music perception skills, and early reading skills in a population of 100 4- and 5-year-old children. Music skills were found to correlate significantly with both phonological awareness and reading development. Regression analyses indicated that music perception skills contributed unique variance in predicting reading ability, even when variance due to phonological awareness and other cognitive abilities (math, digit span, and vocabulary) had been accounted for. Thus, music perception appears to tap auditory mechanisms related to reading that only partially overlap with those related to phonological awareness, suggesting that both linguistic and nonlinguistic general auditory mechanisms are involved in reading.",
"title": ""
},
{
"docid": "7843fb4bbf2e94a30c18b359076899ab",
"text": "In the area of magnetic resonance imaging (MRI), an extensive range of non-linear reconstruction algorithms has been proposed which can be used with general Fourier subsampling patterns. However, the design of these subsampling patterns has typically been considered in isolation from the reconstruction rule and the anatomy under consideration. In this paper, we propose a learning-based framework for optimizing MRI subsampling patterns for a specific reconstruction rule and anatomy, considering both the noiseless and noisy settings. Our learning algorithm has access to a representative set of training signals, and searches for a sampling pattern that performs well on average for the signals in this set. We present a novel parameter-free greedy mask selection method and show it to be effective for a variety of reconstruction rules and performance metrics. Moreover, we also support our numerical findings by providing a rigorous justification of our framework via statistical learning theory.",
"title": ""
},
{
"docid": "7e127a6f25e932a67f333679b0d99567",
"text": "This paper presents a novel manipulator for human-robot interaction that has low mass and inertia without losing stiffness and payload performance. A lightweight tension amplifying mechanism that increases the joint stiffness in quadratic order is proposed. High stiffness is essential for precise and rapid manipulation, and low mass and inertia are important factors for safety due to low stored kinetic energy. The proposed tension amplifying mechanism was applied to a 1-DOF elbow joint and then extended to a 3-DOF wrist joint. The developed manipulator was analyzed in terms of inertia, stiffness, and strength properties. Its moving part weighs 3.37 kg, and its inertia is 0.57 kg·m2, which is similar to that of a human arm. The stiffness of the developed elbow joint is 1440Nm/rad, which is comparable to that of the joints with rigid components in industrial manipulators. A detailed description of the design is provided, and thorough analysis verifies the performance of the proposed mechanism.",
"title": ""
},
{
"docid": "627aee14031293785224efdb7bac69f0",
"text": "Data on characteristics of metal-oxide surge arresters indicates that for fast front surges, those with rise times less than 8μs, the peak of the voltage wave occurs before the peak of the current wave and the residual voltage across the arrester increases as the time to crest of the arrester discharge current decreases. Several models have been proposed to simulate this frequency-dependent characteristic. These models differ in the calculation and adjustment of their parameters. In the present paper, a simulation of metal oxide surge arrester (MOSA) dynamic behavior during fast electromagnetic transients on power systems is done. Some models proposed in the literature are used. The simulations are performed with the Alternative Transients Program (ATP) version of Electromagnetic Transient Program (EMTP) to evaluate some metal oxide surge arrester models and verify their accuracy.",
"title": ""
},
{
"docid": "94b84ed0bb69b6c4fc7a268176146eea",
"text": "We consider the problem of representing image matrices with a set of basis functions. One common solution for that problem is to first transform the 2D image matrices into 1D image vectors and then to represent those 1D image vectors with eigenvectors, as done in classical principal component analysis. In this paper, we adopt a natural representation for the 2D image matrices using eigenimages, which are 2D matrices with the same size of original images and can be directly computed from original 2D image matrices. We discuss how to compute those eigenimages effectively. Experimental result on ORL image database shows the advantages of eigenimages method in representing the 2D images.",
"title": ""
},
{
"docid": "2e5ce96ba3c503704a9152ae667c24ec",
"text": "We use methods of classical and quantum mechanics for mathematical modeling of price dynamics at the financial market. The Hamiltonian formalism on the price/price-change phase space is used to describe the classical-like evolution of prices. This classical dynamics of prices is determined by ”hard” conditions (natural resources, industrial production, services and so on). These conditions as well as ”hard” relations between traders at the financial market are mathematically described by the classical financial potential. At the real financial market ”hard” conditions are not the only source of price changes. The information exchange and market psychology play important (and sometimes determining) role in price dynamics. We propose to describe this ”soft” financial factors by using the pilot wave (Bohmian) model of quantum mechanics. The theory of financial mental (or psychological) waves is used to take into account market psychology. The real trajectories of prices are determined (by the financial analogue of the second Newton law) by two financial potentials: classical-like (”hard” market conditions) and quantum-like (”soft” market conditions).",
"title": ""
},
{
"docid": "fa42192f3ffd08332e35b98019e622ff",
"text": "Human immunodeficiency virus 1 (HIV-1) and other retroviruses synthesize a DNA copy of their genome after entry into the host cell. Integration of this DNA into the host cell's genome is an essential step in the viral replication cycle. The viral DNA is synthesized in the cytoplasm and is associated with viral and cellular proteins in a large nucleoprotein complex. Before integration into the host genome can occur, this complex must be transported to the nucleus and must cross the nuclear envelope. This Review summarizes our current knowledge of how this journey is accomplished.",
"title": ""
},
{
"docid": "9b17dd1fc2c7082fa8daecd850fab91c",
"text": "This paper presents all the stages of development of a solar tracker for a photovoltaic panel. The system was made with a microcontroller which was design as an embedded control. It has a data base of the angles of orientation horizontal axle, therefore it has no sensor inlet signal and it function as an open loop control system. Combined of above mention characteristics in one the tracker system is a new technique of the active type. It is also a rotational robot of 1 degree of freedom.",
"title": ""
},
{
"docid": "a757624e5fd2d4a364f484d55a430702",
"text": "The main challenge in P2P computing is to design and implement a robust and scalable distributed system composed of inexpensive, individually unreliable computers in unrelated administrative domains. The participants in a typical P2P system might include computers at homes, schools, and businesses, and can grow to several million concurrent participants.",
"title": ""
},
{
"docid": "6149a6aaa9c39a1e02ab8fbe64fcb62b",
"text": "The thoracic diaphragm is a dome-shaped septum, composed of muscle surrounding a central tendon, which separates the thoracic and abdominal cavities. The function of the diaphragm is to expand the chest cavity during inspiration and to promote occlusion of the gastroesophageal junction. This article provides an overview of the normal anatomy of the diaphragm.",
"title": ""
},
{
"docid": "6524efda795834105bae7d65caf15c53",
"text": "PURPOSE\nThis paper examines respondents' relationship with work following a stroke and explores their experiences including the perceived barriers to and facilitators of a return to employment.\n\n\nMETHOD\nOur qualitative study explored the experiences and recovery of 43 individuals under 60 years who had survived a stroke. Participants, who had experienced a first stroke less than three months before and who could engage in in-depth interviews, were recruited through three stroke services in South East England. Each participant was invited to take part in four interviews over an 18-month period and to complete a diary for one week each month during this period.\n\n\nRESULTS\nAt the time of their stroke a minority of our sample (12, 28% of the original sample) were not actively involved in the labour market and did not return to the work during the period that they were involved in the study. Of the 31 participants working at the time of the stroke, 13 had not returned to work during the period that they were involved in the study, six returned to work after three months and nine returned in under three months and in some cases virtually immediately after their stroke. The participants in our study all valued work and felt that working, especially in paid employment, was more desirable than not working. The participants who were not working at the time of their stroke or who had not returned to work during the period of the study also endorsed these views. However they felt that there were a variety of barriers and practical problems that prevented them working and in some cases had adjusted to a life without paid employment. Participants' relationship with work was influenced by barriers and facilitators. The positive valuations of work were modified by the specific context of stroke, for some participants work was a cause of stress and therefore potentially risky, for others it was a way of demonstrating recovery from stroke. The value and meaning varied between participants and this variation was related to past experience and biography. Participants who wanted to work indicated that their ability to work was influenced by the nature and extent of their residual disabilities. A small group of participants had such severe residual disabilities that managing everyday life was a challenge and that working was not a realistic prospect unless their situation changed radically. The remaining participants all reported residual disabilities. The extent to which these disabilities formed a barrier to work depended on an additional range of factors that acted as either barriers or facilitator to return to work. A flexible working environment and supportive social networks were cited as facilitators of return to paid employment.\n\n\nCONCLUSION\nParticipants in our study viewed return to work as an important indicator of recovery following a stroke. Individuals who had not returned to work felt that paid employment was desirable but they could not overcome the barriers. Individuals who returned to work recognized the barriers but had found ways of managing them.",
"title": ""
},
{
"docid": "1168c9e6ce258851b15b7e689f60e218",
"text": "Modern deep learning architectures produce highly accurate results on many challenging semantic segmentation datasets. State-of-the-art methods are, however, not directly transferable to real-time applications or embedded devices, since naïve adaptation of such systems to reduce computational cost (speed, memory and energy) causes a significant drop in accuracy. We propose ContextNet, a new deep neural network architecture which builds on factorized convolution, network compression and pyramid representation to produce competitive semantic segmentation in real-time with low memory requirement. ContextNet combines a deep network branch at low resolution that captures global context information efficiently with a shallow branch that focuses on highresolution segmentation details. We analyse our network in a thorough ablation study and present results on the Cityscapes dataset, achieving 66.1% accuracy at 18.3 frames per second at full (1024× 2048) resolution (23.2 fps with pipelined computations for streamed data).",
"title": ""
},
{
"docid": "71efff25f494a8b7a83099e7bdd9d9a8",
"text": "Background: Problems with intubation of the ampulla Vateri during diagnostic and therapeutic endoscopic maneuvers are a well-known feature. The ampulla Vateri was analyzed three-dimensionally to determine whether these difficulties have a structural background. Methods: Thirty-five human greater duodenal papillae were examined by light and scanning electron microscopy as well as immunohistochemically. Results: Histologically, highly vascularized finger-like mucosal folds project far into the lumen of the ampulla Vateri. The excretory ducts of seromucous glands containing many lysozyme-secreting Paneth cells open close to the base of the mucosal folds. Scanning electron microscopy revealed large mucosal folds inside the ampulla that continued into the pancreatic and bile duct, comparable to valves arranged in a row. Conclusions: Mucosal folds form pocket-like valves in the lumen of the ampulla Vateri. They allow a unidirectional flow of secretions into the duodenum and prevent reflux from the duodenum into the ampulla Vateri. Subepithelial mucous gland secretions functionally clean the valvular crypts and protect the epithelium. The arrangement of pocket-like mucosal folds may explain endoscopic difficulties experienced when attempting to penetrate the papilla of Vater during endoscopic retrograde cholangiopancreaticographic procedures.",
"title": ""
}
] | scidocsrr |
551c90304020aee22c4aff6a9ae6cf02 | Interpretable Representation Learning for Healthcare via Capturing Disease Progression through Time | [
{
"docid": "e7659e2c20e85f99996e4394fdc37a5c",
"text": "Gaining knowledge and actionable insights from complex, high-dimensional and heterogeneous biomedical data remains a key challenge in transforming health care. Various types of data have been emerging in modern biomedical research, including electronic health records, imaging, -omics, sensor data and text, which are complex, heterogeneous, poorly annotated and generally unstructured. Traditional data mining and statistical learning approaches typically need to first perform feature engineering to obtain effective and more robust features from those data, and then build prediction or clustering models on top of them. There are lots of challenges on both steps in a scenario of complicated data and lacking of sufficient domain knowledge. The latest advances in deep learning technologies provide new effective paradigms to obtain end-to-end learning models from complex data. In this article, we review the recent literature on applying deep learning technologies to advance the health care domain. Based on the analyzed work, we suggest that deep learning approaches could be the vehicle for translating big biomedical data into improved human health. However, we also note limitations and needs for improved methods development and applications, especially in terms of ease-of-understanding for domain experts and citizen scientists. We discuss such challenges and suggest developing holistic and meaningful interpretable architectures to bridge deep learning models and human interpretability.",
"title": ""
}
] | [
{
"docid": "76049ed267e9327412d709014e8e9ed4",
"text": "A wireless massive MIMO system entails a large number (tens or hundreds) of base station antennas serving a much smaller number of users, with large gains in spectralefficiency and energy-efficiency compared with conventional MIMO technology. Until recently it was believed that in multicellular massive MIMO system, even in the asymptotic regime, as the number of service antennas tends to infinity, the performance is limited by directed inter-cellular interference. This interference results from unavoidable re-use of reverse-link training sequences (pilot contamination) by users in different cells. We devise a new concept that leads to the effective elimination of inter-cell interference in massive MIMO systems. This is achieved by outer multi-cellular precoding, which we call LargeScale Fading Precoding (LSFP). The main idea of LSFP is that each base station linearly combines messages aimed to users from different cells that re-use the same training sequence. Crucially, the combining coefficients depend only on the slowfading coefficients between the users and the base stations. Each base station independently transmits its LSFP-combined symbols using conventional linear precoding that is based on estimated fast-fading coefficients. Further, we derive estimates for downlink and uplink SINRs and capacity lower bounds for the case of massive MIMO systems with LSFP and a finite number of base station antennas.",
"title": ""
},
{
"docid": "4ab644ac13d8753aa6e747c4070e95e9",
"text": "This paper presents a framework for modeling the phase noise in complementary metal–oxide–semiconductor (CMOS) ring oscillators. The analysis considers both linear and nonlinear operations, and it includes both device noise and digital switching noise coupled through the power supply and substrate. In this paper, we show that fast rail-to-rail switching is required in order to achieve low phase noise. Further, flicker noise from the bias circuit can potentially dominate the phase noise at low offset frequencies. We define the effective factor for ring oscillators with large and nonlinear voltage swings and predict its increase for CMOS processes with smaller feature sizes. Our phase-noise analysis is validated via simulation and measurement results for ring oscillators fabricated in a number of CMOS processes.",
"title": ""
},
{
"docid": "b5d7c6a4d9551bf9b47b4e3754fb5911",
"text": "Discovering significant types of relations from the web is challenging because of its open nature. Unsupervised algorithms are developed to extract relations from a corpus without knowing the relations in advance, but most of them rely on tagging arguments of predefined types. Recently, a new algorithm was proposed to jointly extract relations and their argument semantic classes, taking a set of relation instances extracted by an open IE algorithm as input. However, it cannot handle polysemy of relation phrases and fails to group many similar (“synonymous”) relation instances because of the sparseness of features. In this paper, we present a novel unsupervised algorithm that provides a more general treatment of the polysemy and synonymy problems. The algorithm incorporates various knowledge sources which we will show to be very effective for unsupervised extraction. Moreover, it explicitly disambiguates polysemous relation phrases and groups synonymous ones. While maintaining approximately the same precision, the algorithm achieves significant improvement on recall compared to the previous method. It is also very efficient. Experiments on a realworld dataset show that it can handle 14.7 million relation instances and extract a very large set of relations from the web.",
"title": ""
},
{
"docid": "27745116e5c05802bda2bc6dc548cce6",
"text": "Recently, many researchers have attempted to classify Facial Attributes (FAs) by representing characteristics of FAs such as attractiveness, age, smiling and so on. In this context, recent studies have demonstrated that visual FAs are a strong background for many applications such as face verification, face search and so on. However, Facial Attribute Classification (FAC) in a wide range of attributes based on the regression representation -predicting of FAs as real-valued labelsis still a significant challenge in computer vision and psychology. In this paper, a regression model formulation is proposed for FAC in a wide range of FAs (e.g. 73 FAs). The proposed method accommodates real-valued scores to the probability of what percentage of the given FAs is present in the input image. To this end, two simultaneous dictionary learning methods are proposed to learn the regression and identity feature dictionaries simultaneously. Accordingly, a multi-level feature extraction is proposed for FAC. Then, four regression classification methods are proposed using a regression model formulated based on dictionary learning, SRC and CRC. Convincing results are",
"title": ""
},
{
"docid": "35bc2da7f6a3e18f831b4560fba7f94d",
"text": "findings All countries—developing and developed alike—find it difficult to stay competitive without inflows of foreign direct investment (FDI). FDI brings to host countries not only capital, productive facilities, and technology transfers, but also employment, new job skills and management expertise. These ingredients are particularly important in the case of Russia today, where the pressure for firms to compete with each other remains low. With blunted incentives to become efficient, due to interregional barriers to trade, weak exercise of creditor rights and administrative barriers to new entrants—including foreign invested firms—Russian enterprises are still in the early stages of restructuring. This paper argues that the policy regime governing FDI in the Russian Federation is still characterized by the old paradigm of FDI, established before the Second World War and seen all over the world during the 1950s and 1960s. In this paradigm there are essentially only two motivations for foreign direct investment: access to inputs for production, and access to markets for outputs. These kinds of FDI are useful, but often based either on exports that exploit cheap labor or natural resources, or else aimed at protected local markets and not necessarily at world standards for price and quality. The fact is that Russia is getting relatively small amounts of these types of FDI, and almost none of the newer, more efficient kind—characterized by state-of-the-art technology and world-class competitive production linked to dynamic global (or regional) markets. The paper notes that Russia should phase out the three core pillars of the current FDI policy regime-(i) all existing high tariffs and non-tariff protection for the domestic market; (ii) tax preferences for foreign investors (including those offered in Special Economic Zones), which bring few benefits (in terms of increased FDI) but engender costs (in terms of foregone fiscal revenue); and (iii) the substantial number of existing restrictions on FDI (make them applicable only to a limited number of sectors and activities). This set of reforms would allow Russia to switch to a modern approach towards FDI. The paper suggests the following specific policy recommendations: (i) amend the newly enacted FDI law so as to give \" national treatment \" for both right of establishment and for post-establishment operations; abolish conditions that are inconsistent with the agreement on trade-related investment measures (TRIMs) of the WTO (such as local content restrictions); and make investor-State dispute resolution mechanisms more efficient, including giving foreign investors the opportunity to …",
"title": ""
},
{
"docid": "00547f45936c7cea4b7de95ec1e0fbcd",
"text": "With the emergence of the Internet of Things (IoT) and Big Data era, many applications are expected to assimilate a large amount of data collected from environment to extract useful information. However, how heterogeneous computing devices of IoT ecosystems can execute the data processing procedures has not been clearly explored. In this paper, we propose a framework which characterizes energy and performance requirements of the data processing applications across heterogeneous devices, from a server in the cloud and a resource-constrained gateway at edge. We focus on diverse machine learning algorithms which are key procedures for handling the large amount of IoT data. We build analytic models which automatically identify the relationship between requirements and data in a statistical way. The proposed framework also considers network communication cost and increasing processing demand. We evaluate the proposed framework on two heterogenous devices, a Raspberry Pi and a commercial Intel server. We show that the identified models can accurately estimate performance and energy requirements with less than error of 4.8% for both platforms. Based on the models, we also evaluate whether the resource-constrained gateway can process the data more efficiently than the server in the cloud. The results present that the less-powerful device can achieve better energy and performance efficiency for more than 50% of machine learning algorithms.",
"title": ""
},
{
"docid": "620642c5437dc26cac546080c4465707",
"text": "One of the most distinctive linguistic characteristics of modern academic writing is its reliance on nominalized structures. These include nouns that have been morphologically derived from verbs (e.g., development, progression) as well as verbs that have been ‘converted’ to nouns (e.g., increase, use). Almost any sentence taken from an academic research article will illustrate the use of such structures. For example, consider the opening sentences from three education research articles; derived nominalizations are underlined and converted nouns given in italics: 1",
"title": ""
},
{
"docid": "e85e66b6ad6324a07ca299bf4f3cd447",
"text": "To date, the majority of ad hoc routing protocol research has been done using simulation only. One of the most motivating reasons to use simulation is the difficulty of creating a real implementation. In a simulator, the code is contained within a single logical component, which is clearly defined and accessible. On the other hand, creating an implementation requires use of a system with many components, including many that have little or no documentation. The implementation developer must understand not only the routing protocol, but all the system components and their complex interactions. Further, since ad hoc routing protocols are significantly different from traditional routing protocols, a new set of features must be introduced to support the routing protocol. In this paper we describe the event triggers required for AODV operation, the design possibilities and the decisions for our ad hoc on-demand distance vector (AODV) routing protocol implementation, AODV-UCSB. This paper is meant to aid researchers in developing their own on-demand ad hoc routing protocols and assist users in determining the implementation design that best fits their needs.",
"title": ""
},
{
"docid": "65849cfb115918dd264445e91698e868",
"text": "Handwritten character recognition is always a frontier area of research in the field of pattern recognition. There is a large demand for OCR on hand written documents in Image processing. Even though, sufficient studies have performed in foreign scripts like Arabic, Chinese and Japanese, only a very few work can be traced for handwritten character recognition mainly for the south Indian scripts. OCR system development for Indian script has many application areas like preserving manuscripts and ancient literatures written in different Indian scripts and making digital libraries for the documents. Feature extraction and classification are essential steps of character recognition process affecting the overall accuracy of the recognition system. This paper presents a brief overview of digital image processing techniques such as Feature Extraction, Image Restoration and Image Enhancement. A brief history of OCR and various approaches to character recognition is also discussed in this paper.",
"title": ""
},
{
"docid": "0b1a8b80b4414fa34d6cbb5ad1342ad7",
"text": "OBJECTIVE\nThe aim of the study was to evaluate the efficacy of topical 2% lidocaine gel in reducing pain and discomfort associated with nasogastric tube insertion (NGTI) and compare lidocaine to ordinary lubricant gel in the ease in carrying out the procedure.\n\n\nMETHODS\nThis prospective, randomized, double-blind, placebo-controlled, convenience sample trial was conducted in the emergency department of our tertiary care university-affiliated hospital. Five milliliters of 2% lidocaine gel or placebo lubricant gel were administered nasally to alert hemodynamically stable adult patients 5 minutes before undergoing a required NGTI. The main outcome measures were overall pain, nasal pain, discomfort (eg, choking, gagging, nausea, vomiting), and difficulty in performing the procedure. Standard comparative statistical analyses were used.\n\n\nRESULTS\nThe study cohort included 62 patients (65% males). Thirty-one patients were randomized to either lidocaine or placebo groups. Patients who received lidocaine reported significantly less intense overall pain associated with NGTI compared to those who received placebo (37 ± 28 mm vs 51 ± 26 mm on 100-mm visual analog scale; P < .05). The patients receiving lidocaine also had significantly reduced nasal pain (33 ± 29 mm vs 48 ± 27 mm; P < .05) and significantly reduced sensation of gagging (25 ± 30 mm vs 39 ± 24 mm; P < .05). However, conducting the procedure was significantly more difficult in the lidocaine group (2.1 ± 0.9 vs 1.4 ± 0.7 on 5-point Likert scale; P < .05).\n\n\nCONCLUSION\nLidocaine gel administered nasally 5 minutes before NGTI significantly reduces pain and gagging sensations associated with the procedure but is associated with more difficult tube insertion compared to the use of lubricant gel.",
"title": ""
},
{
"docid": "696320f53bb91db9a59a803ec5356727",
"text": "Ransomware is a type of malware that encrypts data or locks a device to extort a ransom. Recently, a variety of high-profile ransomware attacks have been reported, and many ransomware defense systems have been proposed. However, none specializes in resisting untargeted attacks such as those by remote desktop protocol (RDP) attack ransomware. To resolve this problem, this paper proposes a way to combat RDP ransomware attacks by trapping and tracing. It discovers and ensnares the attacker through a network deception environment and uses an auxiliary tracing technology to find the attacker, finally achieving the goal of deterring the ransomware attacker and countering the RDP attack ransomware. Based on cyber deception, an auxiliary ransomware traceable system called RansomTracer is introduced in this paper. RansomTracer collects clues about the attacker by deploying monitors in the deception environment. Then, it automatically extracts and analyzes the traceable clues. Experiments and evaluations show that RansomTracer ensnares the adversary in the deception environment and improves the efficiency of clue analysis significantly. In addition, it is able to recognize the clues that identify the attacker and the screening rate reaches 98.34%.",
"title": ""
},
{
"docid": "7c6708511e8a19c7a984ccc4b5c5926e",
"text": "INTRODUCTION\nOtoplasty or correction of prominent ears, is one of most commonly performed surgeries in plastic surgery both in children and adults. Until nowadays, there have been more than 150 techniques described, but all with certain percentage of recurrence which varies from just a few up to 24.4%.\n\n\nOBJECTIVE\nThe authors present an otoplasty technique, a combination of Mustardé's original procedure with other techniques, which they have been using successfully in their everyday surgical practice for the last 9 years. The technique is based on posterior antihelical and conchal approach.\n\n\nMETHODS\nThe study included 102 patients (60 males and 42 females) operated on between 1999 and 2008. The age varied between 6 and 49 years. Each procedure was tailored to the aberrant anatomy which was analysed after examination. Indications and the operative procedure are described in step-by-step detail accompanied by drawings and photos taken during the surgery.\n\n\nRESULTS\nAll patients had bilateral ear deformity. In all cases was performed a posterior antihelical approach. The conchal reduction was done only when necessary and also through the same incision. The follow-up was from 1 to 5 years. There were no recurrent cases. A few minor complications were presented. Postoperative care, complications and advantages compared to other techniques are discussed extensively.\n\n\nCONCLUSION\nAll patients showed a high satisfaction rate with the final result and there was no necessity for further surgeries. The technique described in this paper is easy to reproduce even for young surgeons.",
"title": ""
},
{
"docid": "c9ea36d15ec23b678c23ad1ae8d976a9",
"text": "Privacy-preserving distributed machine learning has become more important than ever due to the high demand of large-scale data processing. This paper focuses on a class of machine learning problems that can be formulated as regularized empirical risk minimization, and develops a privacy-preserving learning approach to such problems. We use Alternating Direction Method of Multipliers (ADMM) to decentralize the learning algorithm, and apply Gaussian mechanisms to provide differential privacy guarantee. However, simply combining ADMM and local randomization mechanisms would result in a nonconvergent algorithm with poor performance even under moderate privacy guarantees. Besides, this intuitive approach requires a strong assumption that the objective functions of the learning problems should be differentiable and strongly convex. To address these concerns, we propose an improved ADMMbased Differentially Private distributed learning algorithm, DPADMM, where an approximate augmented Lagrangian function and Gaussian mechanisms with time-varying variance are utilized. We also apply the moments accountant method to bound the total privacy loss. Our theoretical analysis shows that DPADMM can be applied to a general class of convex learning problems, provides differential privacy guarantee, and achieves a convergence rate of O(1/ √ t), where t is the number of iterations. Our evaluations demonstrate that our approach can achieve good convergence and accuracy with moderate privacy guarantee.",
"title": ""
},
{
"docid": "2819e5fd171e76a6ed90b5f576259f39",
"text": "Moving obstacle avoidance is a fundamental requirement for any robot operating in real environments, where pedestrians, bicycles and cars are present. In this work, we design and validate a new approach that takes explicitly into account obstacle velocities, to achieve safe visual navigation in outdoor scenarios. A wheeled vehicle, equipped with an actuated pinhole camera and with a lidar, must follow a path represented by key images, without colliding with the obstacles. To estimate the obstacle velocities, we design a Kalman-based observer. Then, we adapt the tentacles designed in [1], to take into account the predicted obstacle positions. Finally, we validate our approach in a series of simulated and real experiments, showing that when the obstacle velocities are considered, the robot behaviour is safer, smoother, and faster than when it is not.",
"title": ""
},
{
"docid": "9f25bc7a2dadb2b8c0d54ac6e70e92e5",
"text": "Our research suggests that ML technologies will indeed grow more pervasive, but within job categories, what we define as the “suitability for machine learning” (SML) of work tasks varies greatly. We further propose that our SML rubric, illustrating the variability in task-level SML, can serve as an indicator for the potential reorganization of a job or an occupation because the set of tasks that form a job can be separated and re-bundled to redefine the job. Evaluating worker activities using our rubric, in fact, has the benefit of focusing on what ML can do instead of grouping all forms of automation together.",
"title": ""
},
{
"docid": "75a1832a5fdd9c48f565eb17e8477b4b",
"text": "We introduce a new interactive system: a game that is fun and can be used to create valuable output. When people play the game they help determine the contents of images by providing meaningful labels for them. If the game is played as much as popular online games, we estimate that most images on the Web can be labeled in a few months. Having proper labels associated with each image on the Web would allow for more accurate image search, improve the accessibility of sites (by providing descriptions of images to visually impaired individuals), and help users block inappropriate images. Our system makes a significant contribution because of its valuable output and because of the way it addresses the image-labeling problem. Rather than using computer vision techniques, which don't work well enough, we encourage people to do the work by taking advantage of their desire to be entertained.",
"title": ""
},
{
"docid": "9b9cff2b6d1313844b88bad5a2724c52",
"text": "A robot is usually an electro-mechanical machine that is guided by computer and electronic programming. Many robots have been built for manufacturing purpose and can be found in factories around the world. Designing of the latest inverted ROBOT which can be controlling using an APP for android mobile. We are developing the remote buttons in the android app by which we can control the robot motion with them. And in which we use Bluetooth communication to interface controller and android. Controller can be interfaced to the Bluetooth module though UART protocol. According to commands received from android the robot motion can be controlled. The consistent output of a robotic system along with quality and repeatability are unmatched. Pick and Place robots can be reprogrammable and tooling can be interchanged to provide for multiple applications.",
"title": ""
},
{
"docid": "bd9f584e7dbc715327b791e20cd20aa9",
"text": "We discuss learning a profile of user interests for recommending information sources such as Web pages or news articles. We describe the types of information available to determine whether to recommend a particular page to a particular user. This information includes the content of the page, the ratings of the user on other pages and the contents of these pages, the ratings given to that page by other users and the ratings of these other users on other pages and demographic information about users. We describe how each type of information may be used individually and then discuss an approach to combining recommendations from multiple sources. We illustrate each approach and the combined approach in the context of recommending restaurants.",
"title": ""
},
{
"docid": "07179377e99a40beffcb50ac039ca503",
"text": "RF-powered computers are small devices that compute and communicate using only the power that they harvest from RF signals. While existing technologies have harvested power from ambient RF sources (e.g., TV broadcasts), they require a dedicated gateway (like an RFID reader) for Internet connectivity. We present Wi-Fi Backscatter, a novel communication system that bridges RF-powered devices with the Internet. Specifically, we show that it is possible to reuse existing Wi-Fi infrastructure to provide Internet connectivity to RF-powered devices. To show Wi-Fi Backscatter's feasibility, we build a hardware prototype and demonstrate the first communication link between an RF-powered device and commodity Wi-Fi devices. We use off-the-shelf Wi-Fi devices including Intel Wi-Fi cards, Linksys Routers, and our organization's Wi-Fi infrastructure, and achieve communication rates of up to 1 kbps and ranges of up to 2.1 meters. We believe that this new capability can pave the way for the rapid deployment and adoption of RF-powered devices and achieve ubiquitous connectivity via nearby mobile devices that are Wi-Fi enabled.",
"title": ""
}
] | scidocsrr |
08783703748f4805351206e24d216c29 | Development of extensible open information extraction | [
{
"docid": "5f2818d3a560aa34cc6b3dbfd6b8f2cc",
"text": "Open Information Extraction (IE) systems extract relational tuples from text, without requiring a pre-specified vocabulary, by identifying relation phrases and associated arguments in arbitrary sentences. However, stateof-the-art Open IE systems such as REVERB and WOE share two important weaknesses – (1) they extract only relations that are mediated by verbs, and (2) they ignore context, thus extracting tuples that are not asserted as factual. This paper presents OLLIE, a substantially improved Open IE system that addresses both these limitations. First, OLLIE achieves high yield by extracting relations mediated by nouns, adjectives, and more. Second, a context-analysis step increases precision by including contextual information from the sentence in the extractions. OLLIE obtains 2.7 times the area under precision-yield curve (AUC) compared to REVERB and 1.9 times the AUC of WOE.",
"title": ""
},
{
"docid": "ebaeacf1c0eeb4a4818b4ac050e60b0c",
"text": "Open information extraction (Open IE) systems aim to obtain relation tuples with highly scalable extraction in portable across domain by identifying a variety of relation phrases and their arguments in arbitrary sentences. The first generation of Open IE learns linear chain models based on unlexicalized features such as Part-of-Speech (POS) or shallow tags to label the intermediate words between pair of potential arguments for identifying extractable relations. Open IE currently is developed in the second generation that is able to extract instances of the most frequently observed relation types such as Verb, Noun and Prep, Verb and Prep, and Infinitive with deep linguistic analysis. They expose simple yet principled ways in which verbs express relationships in linguistics such as verb phrase-based extraction or clause-based extraction. They obtain a significantly higher performance over previous systems in the first generation. In this paper, we describe an overview of two Open IE generations including strengths, weaknesses and application areas.",
"title": ""
}
] | [
{
"docid": "f271596a45a3104554bfe975ac8b4d6c",
"text": "In many regions of the visual system, the activity of a neuron is normalized by the activity of other neurons in the same region. Here we show that a similar normalization occurs during olfactory processing in the Drosophila antennal lobe. We exploit the orderly anatomy of this circuit to independently manipulate feedforward and lateral input to second-order projection neurons (PNs). Lateral inhibition increases the level of feedforward input needed to drive PNs to saturation, and this normalization scales with the total activity of the olfactory receptor neuron (ORN) population. Increasing total ORN activity also makes PN responses more transient. Strikingly, a model with just two variables (feedforward and total ORN activity) accurately predicts PN odor responses. Finally, we show that discrimination by a linear decoder is facilitated by two complementary transformations: the saturating transformation intrinsic to each processing channel boosts weak signals, while normalization helps equalize responses to different stimuli.",
"title": ""
},
{
"docid": "4538c5874872a0081593407d09e4c6fa",
"text": "PatternAttribution is a recent method, introduced in the vision domain, that explains classifications of deep neural networks. We demonstrate that it also generates meaningful interpretations in the language domain.",
"title": ""
},
{
"docid": "d4793c300bca8137d0da7ffdde75a72b",
"text": "The expectation-maximization (EM) method can facilitate maximizing likelihood functions that arise in statistical estimation problems. In the classical EM paradigm, one iteratively maximizes the conditional log-likelihood of a single unobservable complete data space, rather than maximizing the intractable likelihood function for the measured or incomplete data. EM algorithms update all parameters simultaneously, which has two drawbacks: 1) slow convergence, and 2) difficult maximization steps due to coupling when smoothness penalties are used. This paper describes the space-alternating generalized EM (SAGE) method, which updates the parameters sequentially by alternating between several small hidden-data spaces defined by the algorithm designer. We prove that the sequence of estimates monotonically increases the penalized-likelihood objective, we derive asymptotic convergence rates, and we provide sufficient conditions for monotone convergence in norm. Two signal processing applications illustrate the method: estimation of superimposed signals in Gaussian noise, and image reconstruction from Poisson measurements. In both applications, our SAGE algorithms easily accommodate smoothness penalties and converge faster than the EM algorithms.",
"title": ""
},
{
"docid": "3b54f22dd95670f618650f2d71e58068",
"text": "This paper proposes a novel multi-view human action recognition method by discovering and sharing common knowledge among different video sets captured in multiple viewpoints. To our knowledge, we are the first to treat a specific view as target domain and the others as source domains and consequently formulate the multi-view action recognition into the cross-domain learning framework. First, the classic bag-of-visual word framework is implemented for visual feature extraction in individual viewpoints. Then, we propose a cross-domain learning method with block-wise weighted kernel function matrix to highlight the saliency components and consequently augment the discriminative ability of the model. Extensive experiments are implemented on IXMAS, the popular multi-view action dataset. The experimental results demonstrate that the proposed method can consistently outperform the state of the arts.",
"title": ""
},
{
"docid": "8ad20ab4523e4cc617142a2de299dd4a",
"text": "OBJECTIVE\nTo determine the reliability and internal validity of the Hypospadias Objective Penile Evaluation (HOPE)-score, a newly developed scoring system assessing the cosmetic outcome in hypospadias.\n\n\nPATIENTS AND METHODS\nThe HOPE scoring system incorporates all surgically-correctable items: position of meatus, shape of meatus, shape of glans, shape of penile skin and penile axis. Objectivity was established with standardized photographs, anonymously coded patients, independent assessment by a panel, standards for a \"normal\" penile appearance, reference pictures and assessment of the degree of abnormality. A panel of 13 pediatric urologists completed 2 questionnaires, each consisting of 45 series of photographs, at an interval of at least 1 week. The inter-observer reliability, intra-observer reliability and internal validity were analyzed.\n\n\nRESULTS\nThe correlation coefficients for the HOPE-score were as follows: intra-observer reliability 0.817, inter-observer reliability 0.790, \"non-parametric\" internal validity 0.849 and \"parametric\" internal validity 0.842. These values reflect good reproducibility, sufficient agreement among observers and a valid measurement of differences and similarities in cosmetic appearance.\n\n\nCONCLUSIONS\nThe HOPE-score is the first scoring system that fulfills the criteria of a valid measurement tool: objectivity, reliability and validity. These favorable properties support its use as an objective outcome measure of the cosmetic result after hypospadias surgery.",
"title": ""
},
{
"docid": "5fa860515f72bca0667134bb61d2f695",
"text": "In the broad field of evaluation, the importance of stakeholders is often acknowledged and different categories of stakeholders are identified. Far less frequent is careful attention to analysis of stakeholders' interests, needs, concerns, power, priorities, and perspectives and subsequent application of that knowledge to the design of evaluations. This article is meant to help readers understand and apply stakeholder identification and analysis techniques in the design of credible evaluations that enhance primary intended use by primary intended users. While presented using a utilization-focused-evaluation (UFE) lens, the techniques are not UFE-dependent. The article presents a range of the most relevant techniques to identify and analyze evaluation stakeholders. The techniques are arranged according to their ability to inform the process of developing and implementing an evaluation design and of making use of the evaluation's findings.",
"title": ""
},
{
"docid": "f19f6c8caec01e3ca9c14981c0ea05fa",
"text": "Non-invasive cuff-less Blood Pressure (BP) estimation from Photoplethysmogram (PPG) is a well known challenge in the field of affordable healthcare. This paper presents a set of improvements over an existing method that estimates BP using 2-element Windkessel model from PPG signal. A noisy PPG corpus is collected using fingertip pulse oximeter, from two different locations in India. Exhaustive pre-processing techniques, such as filtering, baseline and topline correction are performed on the noisy PPG signals, followed by the selection of consistent cycles. Subsequently, the most relevant PPG features and demographic features are selected through Maximal Information Coefficient (MIC) score for learning the latent parameters controlling BP. Experimental results reveal that overall error in estimating BP lies within 10% of a commercially available digital BP monitoring device. Also, use of alternative latent parameters that incorporate the variation in cardiac output, shows a better trend following for abnormally low and high BP.",
"title": ""
},
{
"docid": "bd42bffcbb76d4aadde3df502326655a",
"text": "We present a novel class of actor-critic algorithms for actors consisting of sets of interacting modules. We present, analyze theoretically, and empirically evaluate an update rule for each module, which requires only local information: the module’s input, output, and the TD error broadcast by a critic. Such updates are necessary when computation of compatible features becomes prohibitively difficult and are also desirable to increase the biological plausibility of reinforcement learning methods.",
"title": ""
},
{
"docid": "eee5ffff364575afad1dcebbf169777b",
"text": "In this paper, we proposed the multiclass support vector machine (SVM) with the error-correcting output codes for the multiclass electroencephalogram (EEG) signals classification problem. The probabilistic neural network (PNN) and multilayer perceptron neural network were also tested and benchmarked for their performance on the classification of the EEG signals. Decision making was performed in two stages: feature extraction by computing the wavelet coefficients and the Lyapunov exponents and classification using the classifiers trained on the extracted features. The purpose was to determine an optimum classification scheme for this problem and also to infer clues about the extracted features. Our research demonstrated that the wavelet coefficients and the Lyapunov exponents are the features which well represent the EEG signals and the multiclass SVM and PNN trained on these features achieved high classification accuracies",
"title": ""
},
{
"docid": "7456ceee02f50c9e92a665d362a9a419",
"text": "Visualization of dynamically changing networks (graphs) is a significant challenge for researchers. Previous work has experimentally compared animation, small multiples, and other techniques, and found trade-offs between these. One potential way to avoid such trade-offs is to combine previous techniques in a hybrid visualization. We present two taxonomies of visualizations of dynamic graphs: one of non-hybrid techniques, and one of hybrid techniques. We also describe a prototype, called DiffAni, that allows a graph to be visualized as a sequence of three kinds of tiles: diff tiles that show difference maps over some time interval, animation tiles that show the evolution of the graph over some time interval, and small multiple tiles that show the graph state at an individual time slice. This sequence of tiles is ordered by time and covers all time slices in the data. An experimental evaluation of DiffAni shows that our hybrid approach has advantages over non-hybrid techniques in certain cases.",
"title": ""
},
{
"docid": "e680f8b83e7a2137321cc644724827de",
"text": "A dual-band antenna is developed on a flexible Liquid Crystal Polymer (LCP) substrate for simultaneous operation at 2.45 and 5.8 GHz in high frequency Radio Frequency IDentification (RFID) systems. The response of the low profile double T-shaped slot antenna is preserved when the antenna is placed on platforms such as wood and cardboard, and when bent to conform to a cylindrical plastic box. Furthermore, experiments show that the antenna is still operational when placed at a distance of around 5cm from a metallic surface.",
"title": ""
},
{
"docid": "fd0cfef7be75a9aa98229c25ffaea864",
"text": "A capsule is a group of neurons whose activity vector represents the instantiation parameters of a specific type of entity such as an object or an object part. We use the length of the activity vector to represent the probability that the entity exists and its orientation to represent the instantiation parameters. Active capsules at one level make predictions, via transformation matrices, for the instantiation parameters of higher-level capsules. When multiple predictions agree, a higher level capsule becomes active. We show that a discrimininatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits. To achieve these results we use an iterative routing-by-agreement mechanism: A lower-level capsule prefers to send its output to higher level capsules whose activity vectors have a big scalar product with the prediction coming from the lower-level capsule.",
"title": ""
},
{
"docid": "5f78f4f492b45eb5efd50d2cda340413",
"text": "This study examined the anatomy of the infrapatellar fat pad (IFP) in relation to knee pathology and surgical approaches. Eight embalmed knees were dissected via semicircular parapatellar incisions and each IFP was examined. Their volume, shape and constituent features were recorded. They were found in all knees and were constant in shape, consisting of a central body with medial and lateral extensions. The ligamentum mucosum was found inferior to the central body in all eight knees, while a fat tag was located superior to the central body in seven cases. Two clefts were consistently found on the posterior aspect of the IFP, a horizontal cleft below the ligamentum mucosum in six knees and a vertical cleft above, in seven cases. Our study found that the IFP is a constant structure in the knee joint, which may play a number of roles in knee joint function and pathology. Its significance in knee surgery is discussed.",
"title": ""
},
{
"docid": "fed23432144a6929c4f3442b10157771",
"text": "Knowledge has widely been acknowledged as one of the most important factors for corporate competitiveness, and we have witnessed an explosion of IS/IT solutions claiming to provide support for knowledge management (KM). A relevant question to ask, though, is how systems and technology intended for information such as the intranet can be able to assist in the managing of knowledge. To understand this, we must examine the relationship between information and knowledge. Building on Polanyi’s theories, I argue that all knowledge is tacit, and what can be articulated and made tangible outside the human mind is merely information. However, information and knowledge affect one another. By adopting a multi-perspective of the intranet where information, awareness, and communication are all considered, this interaction can best be supported and the intranet can become a useful and people-inclusive KM environment. 1. From philosophy to IT Ever since the ancient Greek period, philosophers have discussed what knowledge is. Early thinkers such as Plato and Aristotle where followed by Hobbes and Locke, Kant and Hegel, and into the 20th century by the likes of Wittgenstein, Popper, and Kuhn, to name but a few of the more prominent western philosophers. In recent years, we have witnessed a booming interest in knowledge also from other disciplines; organisation theorists, information system developers, and economists have all been swept away by the knowledge management avalanche. It seems, though, that the interest is particularly strong within the IS/IT community, where new opportunities to develop computer systems are welcomed. A plausible question to ask then is how knowledge relates to information technology (IT). Can IT at all be used to handle 0-7695-1435-9/02 $ knowledge, and if so, what sort of knowledge? What sorts of knowledge are there? What is knowledge? It seems we have little choice but to return to these eternal questions, but belonging to the IS/IT community, we should not approach knowledge from a philosophical perspective. As observed by Alavi and Leidner, the knowledge-based theory of the firm was never built on a universal truth of what knowledge really is but on a pragmatic interest in being able to manage organisational knowledge [2]. The discussion in this paper shall therefore be aimed at addressing knowledge from an IS/IT perspective, trying to answer two overarching questions: “What does the relationship between information and knowledge look like?” and “What role does an intranet have in this relationship?” The purpose is to critically review the contemporary KM literature in order to clarify the relationships between information and knowledge that commonly and implicitly are assumed within the IS/IT community. Epistemologically, this paper shall address the difference between tacit and explicit knowledge by accounting for some of the views more commonly found in the KM literature. Some of these views shall also be questioned, and the prevailing assump tion that tacit and explicit are two forms of knowledge shall be criticised by returning to Polanyi’s original work. My interest in the tacit side of knowledge, i.e. the aspects of knowledge that is omnipresent, taken for granted, and affecting our understanding without us being aware of it, has strongly influenced the content of this paper. Ontologywise, knowledge may be seen to exist on different levels, i.e. individual, group, organisation and inter-organisational [23]. 
Here, my primary interest is on the group and organisational levels. However, these two levels are obviously made up of individuals and we are thus bound to examine the personal aspects of knowledge as well, though be it from a macro perspective. 17.00 (c) 2002 IEEE 1 Proceedings of the 35th Hawaii International Conference on System Sciences 2002 2. Opposite traditions – and a middle way? When examining the knowledge literature, two separate tracks can be identified: the commodity view and the community view [35]. The commodity view of or the objective approach to knowledge as some absolute and universal truth has since long been the dominating view within science. Rooted in the positivism of the mid-19th century, the commodity view is still especially strong in the natural sciences. Disciples of this tradition understand knowledge as an artefact that can be handled in discrete units and that people may possess. Knowledge is a thing for which we can gain evidence, and knowledge as such is separated from the knower [33]. Metaphors such as drilling, mining, and harvesting are used to describe how knowledge is being managed. There is also another tradition that can be labelled the community view or the constructivist approach. This tradition can be traced back to Locke and Hume but is in its modern form rooted in the critique of the established quantitative approach to science that emerged primarily amongst social scientists during the 1960’s, and resulted in the publication of books by Garfinkel, Bourdieu, Habermas, Berger and Luckmann, and Glaser and Strauss. These authors argued that reality (and hence also knowledge) should be understood as socially constructed. According to this tradition, it is impossible to define knowledge universally; it can only be defined in practice, in the activities of and interactions between individuals. Thus, some understand knowledge to be universal and context-independent while others conceive it as situated and based on individual experiences. Maybe it is a little bit Author(s) Data Informa",
"title": ""
},
{
"docid": "e76afdc4a867789e6bcc92876a6b52af",
"text": "An Optimal fuzzy logic guidance (OFLG) law for a surface to air homing missile is introduced. The introduced approach is based on the well-known proportional navigation guidance (PNG) law. Particle Swarm Optimization (PSO) is used to optimize the of the membership functions' (MFs) parameters of the proposed design. The distribution of the MFs is obtained by minimizing a nonlinear constrained multi-objective optimization problem where; control effort and miss distance are treated as competing objectives. The performance of the introduced guidance law is compared with classical fuzzy logic guidance (FLG) law as well as PNG one. The simulation results show that OFLG performs better than other guidance laws. Moreover, the introduced design is shown to perform well with the existence of noisy measurements.",
"title": ""
},
{
"docid": "15fd626d5a6eb1258b8846137c62f97d",
"text": "Since leadership plays a vital role in democratic movements, understanding the nature of democratic leadership is essential. However, the definition of democratic leadership is unclear (Gastil, 1994). Also, little research has defined democratic leadership in the context of democratic movements. The leadership literature has paid no attention to democratic leadership in such movements, focusing on democratic leadership within small groups and organizations. This study proposes a framework of democratic leadership in democratic movements. The framework includes contexts, motivations, characteristics, and outcomes of democratic leadership. The study considers sacrifice, courage, symbolism, citizen participation, and vision as major characteristics in the display of democratic leadership in various political, social, and cultural contexts. Applying the framework to Nelson Mandela, Lech Walesa, and Dae Jung Kim; the study considers them as exemplary models of democratic leadership in democratic movements for achieving democracy. They have showed crucial characteristics of democratic leadership, offering lessons for democratic governance.",
"title": ""
},
{
"docid": "74ecfe68112ba6309ac355ba1f7b9818",
"text": "We present a novel approach to probabilistic time series forecasting that combines state space models with deep learning. By parametrizing a per-time-series linear state space model with a jointly-learned recurrent neural network, our method retains desired properties of state space models such as data efficiency and interpretability, while making use of the ability to learn complex patterns from raw data offered by deep learning approaches. Our method scales gracefully from regimes where little training data is available to regimes where data from large collection of time series can be leveraged to learn accurate models. We provide qualitative as well as quantitative results with the proposed method, showing that it compares favorably to the state-of-the-art.",
"title": ""
},
{
"docid": "7100b0adb93419a50bbaeb1b7e32edf5",
"text": "Fractals have been very successful in quantifying the visual complexity exhibited by many natural patterns, and have captured the imagination of scientists and artists alike. Our research has shown that the poured patterns of the American abstract painter Jackson Pollock are also fractal. This discovery raises an intriguing possibility - are the visual characteristics of fractals responsible for the long-term appeal of Pollock's work? To address this question, we have conducted 10 years of scientific investigation of human response to fractals and here we present, for the first time, a review of this research that examines the inter-relationship between the various results. The investigations include eye tracking, visual preference, skin conductance, and EEG measurement techniques. We discuss the artistic implications of the positive perceptual and physiological responses to fractal patterns.",
"title": ""
},
{
"docid": "2cfc7eeae3259a43a24ef56932d8b27f",
"text": "This paper presents Platener, a system that allows quickly fabricating intermediate design iterations of 3D models, a process also known as low-fidelity fabrication. Platener achieves its speed-up by extracting straight and curved plates from the 3D model and substituting them with laser cut parts of the same size and thickness. Only the regions that are of relevance to the current design iteration are executed as full-detail 3D prints. Platener connects the parts it has created by automatically inserting joints. To help fast assembly it engraves instructions. Platener allows users to customize substitution results by (1) specifying fidelity-speed tradeoffs, (2) choosing whether or not to convert curved surfaces to plates bent using heat, and (3) specifying the conversion of individual plates and joints interactively. Platener is designed to best preserve the fidelity of func-tional objects, such as casings and mechanical tools, all of which contain a large percentage of straight/rectilinear elements. Compared to other low-fab systems, such as faBrickator and WirePrint, Platener better preserves the stability and functionality of such objects: the resulting assemblies have fewer parts and the parts have the same size and thickness as in the 3D model. To validate our system, we converted 2.250 3D models downloaded from a 3D model site (Thingiverse). Platener achieves a speed-up of 10 or more for 39.5% of all objects.",
"title": ""
}
] | scidocsrr |
0e9bebb749f36ccfc7349c86c70ce298 | Performance Modeling and Evaluation of Distributed Deep Learning Frameworks on GPUs | [
{
"docid": "92008a84a80924ec8c0ad1538da2e893",
"text": "Large-scale deep learning requires huge computational resources to train a multi-layer neural network. Recent systems propose using 100s to 1000s of machines to train networks with tens of layers and billions of connections. While the computation involved can be done more efficiently on GPUs than on more traditional CPU cores, training such networks on a single GPU is too slow and training on distributed GPUs can be inefficient, due to data movement overheads, GPU stalls, and limited GPU memory. This paper describes a new parameter server, called GeePS, that supports scalable deep learning across GPUs distributed among multiple machines, overcoming these obstacles. We show that GeePS enables a state-of-the-art single-node GPU implementation to scale well, such as to 13 times the number of training images processed per second on 16 machines (relative to the original optimized single-node code). Moreover, GeePS achieves a higher training throughput with just four GPU machines than that a state-of-the-art CPU-only system achieves with 108 machines.",
"title": ""
}
] | [
{
"docid": "1498977b6e68df3eeca6e25c550a5edd",
"text": "The Raven's Progressive Matrices (RPM) test is a commonly used test of intelligence. The literature suggests a variety of problem-solving methods for addressing RPM problems. For a graduate-level artificial intelligence class in Fall 2014, we asked students to develop intelligent agents that could address 123 RPM-inspired problems, essentially crowdsourcing RPM problem solving. The students in the class submitted 224 agents that used a wide variety of problem-solving methods. In this paper, we first report on the aggregate results of those 224 agents on the 123 problems, then focus specifically on four of the most creative, novel, and effective agents in the class. We find that the four agents, using four very different problem-solving methods, were all able to achieve significant success. This suggests the RPM test may be amenable to a wider range of problem-solving methods than previously reported. It also suggests that human computation might be an effective strategy for collecting a wide variety of methods for creative tasks.",
"title": ""
},
{
"docid": "fb4d8685bd880f44b489d7d13f5f36ed",
"text": "With the advancement in digitalization vast amount of Image data is uploaded and used via Internet in today’s world. With this revolution in uses of multimedia data, key problem in the area of Image processing, Computer vision and big data analytics is how to analyze, effectively process and extract useful information from such data. Traditional tactics to process such a data are extremely time and resource intensive. Studies recommend that parallel and distributed computing techniques have much more potential to process such data in efficient manner. To process such a complex task in efficient manner advancement in GPU based processing is also a candidate solution. This paper we introduce Hadoop-Mapreduce (Distributed system) and CUDA (Parallel system) based image processing. In our experiment using satellite images of different dimension we had compared performance or execution speed of canny edge detection algorithm. Performance is compared for CPU and GPU based Time Complexity.",
"title": ""
},
{
"docid": "09e9b51bdd42ec5fae7d332ce7543053",
"text": "This article investigates the cognitive strategies that people use to search computer displays. Several different visual layouts are examined: unlabeled layouts that contain multiple groups of items but no group headings, labeled layouts in which items are grouped and each group has a useful heading, and a target-only layout that contains just one item. A number of plausible strategies were proposed for each layout. Each strategy was programmed into the EPIC cognitive architecture, producing models that simulate the human visual-perceptual, oculomotor, and cognitive processing required for the task. The models generate search time predictions. For unlabeled layouts, the mean layout search times are predicted by a purely random search strategy, and the more detailed positional search times are predicted by a noisy systematic strategy. The labeled layout search times are predicted by a hierarchical strategy in which first the group labels are systematically searched, and then the contents of the target group. The target-only layout search times are predicted by a strategy in which the eyes move directly to the sudden appearance of the target. The models demonstrate that human visual search performance can be explained largely in terms of the cognitive strategy HUMAN–COMPUTER INTERACTION, 2004, Volume 19, pp. 183–223 Copyright © 2004, Lawrence Erlbaum Associates, Inc. Anthony Hornof is a computer scientist with interests in human–computer interaction, cognitive modeling, visual search, and eye tracking; he is an Assistant Professor in the Department of Computer and Information Science at the University of Oregon. that is used to coordinate the relevant perceptual and motor processes, a clear and useful visual hierarchy triggers a fundamentally different visual search strategy and effectively gives the user greater control over the visual navigation, and cognitive strategies will be an important component of a predictive visual search tool. The models provide insights pertaining to the visual-perceptual and oculomotor processes involved in visual search and contribute to the science base needed for predictive interface analysis. 184 HORNOF",
"title": ""
},
{
"docid": "acd93c6b041a975dcf52c7bafaf05b16",
"text": "Patients with carcinoma of the tongue including the base of the tongue who underwent total glossectomy in a period of just over ten years since January 1979 have been reviewed. Total glossectomy may be indicated as salvage surgery or as a primary procedure. The larynx may be preserved or may have to be sacrificed depending upon the site of the lesion. When the larynx is preserved the use of laryngeal suspension facilitates early rehabilitation and preserves the quality of life to a large extent. Cricopharyngeal myotomy seems unnecessary.",
"title": ""
},
{
"docid": "8bcc223389b7cc2ce2ef4e872a029489",
"text": "Issues concerning agriculture, countryside and farmers have been always hindering China’s development. The only solution to these three problems is agricultural modernization. However, China's agriculture is far from modernized. The introduction of cloud computing and internet of things into agricultural modernization will probably solve the problem. Based on major features of cloud computing and key techniques of internet of things, cloud computing, visualization and SOA technologies can build massive data involved in agricultural production. Internet of things and RFID technologies can help build plant factory and realize automatic control production of agriculture. Cloud computing is closely related to internet of things. A perfect combination of them can promote fast development of agricultural modernization, realize smart agriculture and effectively solve the issues concerning agriculture, countryside and farmers.",
"title": ""
},
{
"docid": "9364e07801fc01e50d0598b61ab642aa",
"text": "Online learning represents a family of machine learning methods, where a learner attempts to tackle some predictive (or any type of decision-making) task by learning from a sequence of data instances one by one at each time. The goal of online learning is to maximize the accuracy/correctness for the sequence of predictions/decisions made by the online learner given the knowledge of correct answers to previous prediction/learning tasks and possibly additional information. This is in contrast to traditional batch or offline machine learning methods that are often designed to learn a model from the entire training data set at once. Online learning has become a promising technique for learning from continuous streams of data in many real-world applications. This survey aims to provide a comprehensive survey of the online machine learning literature through a systematic review of basic ideas and key principles and a proper categorization of different algorithms and techniques. Generally speaking, according to the types of learning tasks and the forms of feedback information, the existing online learning works can be classified into three major categories: (i) online supervised learning where full feedback information is always available, (ii) online learning with limited feedback, and (iii) online unsupervised learning where no feedback is available. Due to space limitation, the survey will be mainly focused on the first category, but also briefly cover some basics of the other two categories. Finally, we also discuss some open issues and attempt to shed light on potential future research directions in this field.",
"title": ""
},
{
"docid": "e442b7944062f6201e779aa1e7d6c247",
"text": "We present pigeo, a Python geolocation prediction tool that predicts a location for a given text input or Twitter user. We discuss the design, implementation and application of pigeo, and empirically evaluate it. pigeo is able to geolocate informal text and is a very useful tool for users who require a free and easy-to-use, yet accurate geolocation service based on pre-trained models. Additionally, users can train their own models easily using pigeo’s API.",
"title": ""
},
{
"docid": "82ca6a400bf287dc287df9fa751ddac2",
"text": "Research on ontology is becoming increasingly widespread in the computer science community, and its importance is being recognized in a multiplicity of research fields and application areas, including knowledge engineering, database design and integration, information retrieval and extraction. We shall use the generic term “information systems”, in its broadest sense, to collectively refer to these application perspectives. We argue in this paper that so-called ontologies present their own methodological and architectural peculiarities: on the methodological side, their main peculiarity is the adoption of a highly interdisciplinary approach, while on the architectural side the most interesting aspect is the centrality of the role they can play in an information system, leading to the perspective of ontology-driven information systems.",
"title": ""
},
{
"docid": "45b1cb6c9393128c9a9dcf9dbeb50778",
"text": "Bitcoin, a distributed, cryptographic, digital currency, gained a lot of media attention for being an anonymous e-cash system. But as all transactions in the network are stored publicly in the blockchain, allowing anyone to inspect and analyze them, the system does not provide real anonymity but pseudonymity. There have already been studies showing the possibility to deanonymize bitcoin users based on the transaction graph and publicly available data. Furthermore, users could be tracked by bitcoin exchanges or shops, where they have to provide personal information that can then be linked to their bitcoin addresses. Special bitcoin mixing services claim to obfuscate the origin of transactions and thereby increase the anonymity of its users. In this paper we evaluate three of these services – Bitcoin Fog, BitLaundry, and the Send Shared functionality of Blockchain.info – by analyzing the transaction graph. While Bitcoin Fog and Blockchain.info successfully mix our transaction, we are able to find a direct relation between the input and output transactions in the graph of BitLaundry.",
"title": ""
},
{
"docid": "30d0453033d3951f5b5faf3213eacb89",
"text": "Semantic mapping is the incremental process of “mapping” relevant information of the world (i.e., spatial information, temporal events, agents and actions) to a formal description supported by a reasoning engine. Current research focuses on learning the semantic of environments based on their spatial location, geometry and appearance. Many methods to tackle this problem have been proposed, but the lack of a uniform representation, as well as standard benchmarking suites, prevents their direct comparison. In this paper, we propose a standardization in the representation of semantic maps, by defining an easily extensible formalism to be used on top of metric maps of the environments. Based on this, we describe the procedure to build a dataset (based on real sensor data) for benchmarking semantic mapping techniques, also hypothesizing some possible evaluation metrics. Nevertheless, by providing a tool for the construction of a semantic map ground truth, we aim at the contribution of the scientific community in acquiring data for populating the dataset.",
"title": ""
},
{
"docid": "b7959c06c8057418762e12ef2c0ce2ce",
"text": "According to Bayesian theories in psychology and neuroscience, minds and brains are (near) optimal in solving a wide range of tasks. We challenge this view and argue that more traditional, non-Bayesian approaches are more promising. We make 3 main arguments. First, we show that the empirical evidence for Bayesian theories in psychology is weak. This weakness relates to the many arbitrary ways that priors, likelihoods, and utility functions can be altered in order to account for the data that are obtained, making the models unfalsifiable. It further relates to the fact that Bayesian theories are rarely better at predicting data compared with alternative (and simpler) non-Bayesian theories. Second, we show that the empirical evidence for Bayesian theories in neuroscience is weaker still. There are impressive mathematical analyses showing how populations of neurons could compute in a Bayesian manner but little or no evidence that they do. Third, we challenge the general scientific approach that characterizes Bayesian theorizing in cognitive science. A common premise is that theories in psychology should largely be constrained by a rational analysis of what the mind ought to do. We question this claim and argue that many of the important constraints come from biological, evolutionary, and processing (algorithmic) considerations that have no adaptive relevance to the problem per se. In our view, these factors have contributed to the development of many Bayesian \"just so\" stories in psychology and neuroscience; that is, mathematical analyses of cognition that can be used to explain almost any behavior as optimal.",
"title": ""
},
{
"docid": "f012c0d9fe795a738b3cd82cef94ef19",
"text": "Fraud detection is an industry where incremental gains in predictive accuracy can have large benefits for banks and customers. Banks adapt models to the novel ways in which “fraudsters” commit credit card fraud. They collect data and engineer new features in order to increase predictive power. This research compares the algorithmic impact on the predictive power across three supervised classification models: logistic regression, gradient boosted trees, and deep learning. This research also explores the benefits of creating features using domain expertise and feature engineering using an autoencoder—an unsupervised feature engineering method. These two methods of feature engineering combined with the direct mapping of the original variables create six different feature sets. Across these feature sets this research compares the aforementioned models. This research concludes that creating features using domain expertise offers a notable improvement in predictive power. Additionally, the autoencoder offers a way to reduce the dimensionality of the data and slightly boost predictive power.",
"title": ""
},
{
"docid": "a1cd5424dea527e365f038fce60fd821",
"text": "Producing literature reviews of complex evidence for policymaking questions is a challenging methodological area. There are several established and emerging approaches to such reviews, but unanswered questions remain, especially around how to begin to make sense of large data sets drawn from heterogeneous sources. Drawing on Kuhn's notion of scientific paradigms, we developed a new method-meta-narrative review-for sorting and interpreting the 1024 sources identified in our exploratory searches. We took as our initial unit of analysis the unfolding 'storyline' of a research tradition over time. We mapped these storylines by using both electronic and manual tracking to trace the influence of seminal theoretical and empirical work on subsequent research within a tradition. We then drew variously on the different storylines to build up a rich picture of our field of study. We identified 13 key meta-narratives from literatures as disparate as rural sociology, clinical epidemiology, marketing and organisational studies. Researchers in different traditions had conceptualised, explained and investigated diffusion of innovations differently and had used different criteria for judging the quality of empirical work. Moreover, they told very different over-arching stories of the progress of their research. Within each tradition, accounts of research depicted human characters emplotted in a story of (in the early stages) pioneering endeavour and (later) systematic puzzle-solving, variously embellished with scientific dramas, surprises and 'twists in the plot'. By first separating out, and then drawing together, these different meta-narratives, we produced a synthesis that embraced the many complexities and ambiguities of 'diffusion of innovations' in an organisational setting. We were able to make sense of seemingly contradictory data by systematically exposing and exploring tensions between research paradigms as set out in their over-arching storylines. In some traditions, scientific revolutions were identifiable in which breakaway researchers had abandoned the prevailing paradigm and introduced a new set of concepts, theories and empirical methods. We concluded that meta-narrative review adds value to the synthesis of heterogeneous bodies of literature, in which different groups of scientists have conceptualised and investigated the 'same' problem in different ways and produced seemingly contradictory findings. Its contribution to the mixed economy of methods for the systematic review of complex evidence should be explored further.",
"title": ""
},
{
"docid": "007791833b15bd3367c11bb17b7abf82",
"text": "When speakers talk, they gesture. The goal of this review is to investigate the contribution that these gestures make to how we communicate and think. Gesture can play a role in communication and thought at many timespans. We explore, in turn, gesture's contribution to how language is produced and understood in the moment; its contribution to how we learn language and other cognitive skills; and its contribution to how language is created over generations, over childhood, and on the spot. We find that the gestures speakers produce when they talk are integral to communication and can be harnessed in a number of ways. (a) Gesture reflects speakers' thoughts, often their unspoken thoughts, and thus can serve as a window onto cognition. Encouraging speakers to gesture can thus provide another route for teachers, clinicians, interviewers, etc., to better understand their communication partners. (b) Gesture can change speakers' thoughts. Encouraging gesture thus has the potential to change how students, patients, witnesses, etc., think about a problem and, as a result, alter the course of learning, therapy, or an interchange. (c) Gesture provides building blocks that can be used to construct a language. By watching how children and adults who do not already have a language put those blocks together, we can observe the process of language creation. Our hands are with us at all times and thus provide researchers and learners with an ever-present tool for understanding how we talk and think.",
"title": ""
},
{
"docid": "f442354c5a99ece9571168648285f763",
"text": "A general closed-form subharmonic stability condition is derived for the buck converter with ripple-based constant on-time control and a feedback filter. The turn-on delay is included in the analysis. Three types of filters are considered: low-pass filter (LPF), phase-boost filter (PBF), and inductor current feedback (ICF) which changes the feedback loop frequency response like a filter. With the LPF, the stability region is reduced. With the PBF or ICF, the stability region is enlarged. Stability conditions are determined both for the case of a single output capacitor and for the case of two parallel-connected output capacitors having widely different time constants. The past research results related to the feedback filters become special cases. All theoretical predictions are verified by experiments.",
"title": ""
},
{
"docid": "3b5b3802d4863a6569071b346b65600d",
"text": "In vector space model (VSM), text representation is the task of transforming the content of a textual document into a vector in the term space so that the document could be recognized and classified by a computer or a classifier. Different terms (i.e. words, phrases, or any other indexing units used to identify the contents of a text) have different importance in a text. The term weighting methods assign appropriate weights to the terms to improve the performance of text categorization. In this study, we investigate several widely-used unsupervised (traditional) and supervised term weighting methods on benchmark data collections in combination with SVM and kNN algorithms. In consideration of the distribution of relevant documents in the collection, we propose a new simple supervised term weighting method, i.e. tf.rf, to improve the terms' discriminating power for text categorization task. From the controlled experimental results, these supervised term weighting methods have mixed performance. Specifically, our proposed supervised term weighting method, tf.rf, has a consistently better performance than other term weighting methods while other supervised term weighting methods based on information theory or statistical metric perform the worst in all experiments. On the other hand, the popularly used tf.idf method has not shown a uniformly good performance in terms of different data sets.",
"title": ""
},
{
"docid": "794bba509b6c609e4f9204d96bf5fe9c",
"text": "Power law distributions are an increasingly common model for computer science applications; for example, they have been used to describe file size distributions and inand out-degree distributions for the Web and Internet graphs. Recently, the similar lognormal distribution has also been suggested as an appropriate alternative model for file size distributions. In this paper, we briefly survey some of the history of these distributions, focusing on work in other fields. We find that several recently proposed models have antecedents in work from decades ago. We also find that lognormal and power law distributions connect quite naturally, and hence it is not surprising that lognormal distributions arise as a possible alternative to power law distributions.",
"title": ""
},
{
"docid": "f74a0c176352b8378d9f27fdf93763c9",
"text": "The future of user interfaces will be dominated by hand gestures. In this paper, we explore an intuitive hand gesture based interaction for smartphones having a limited computational capability. To this end, we present an efficient algorithm for gesture recognition with First Person View (FPV), which focuses on recognizing a four swipe model (Left, Right, Up and Down) for smartphones through single monocular camera vision. This can be used with frugal AR/VR devices such as Google Cardboard1 andWearality2 in building AR/VR based automation systems for large scale deployments, by providing a touch-less interface and real-time performance. We take into account multiple cues including palm color, hand contour segmentation, and motion tracking, which effectively deals with FPV constraints put forward by a wearable. We also provide comparisons of swipe detection with the existing methods under the same limitations. We demonstrate that our method outperforms both in terms of gesture recognition accuracy and computational time.",
"title": ""
},
{
"docid": "6cfdad2bb361713616dd2971026758a7",
"text": "We consider the problem of controlling a system with unknown, stochastic dynamics to achieve a complex, time-sensitive task. An example of this problem is controlling a noisy aerial vehicle with partially known dynamics to visit a pre-specified set of regions in any order while avoiding hazardous areas. In particular, we are interested in tasks which can be described by signal temporal logic (STL) specifications. STL is a rich logic that can be used to describe tasks involving bounds on physical parameters, continuous time bounds, and logical relationships over time and states. STL is equipped with a continuous measure called the robustness degree that measures how strongly a given sample path exhibits an STL property [4, 3]. This measure enables the use of continuous optimization problems to solve learning [7, 6] or formal synthesis problems [9] involving STL.",
"title": ""
},
{
"docid": "3b49747ef98ebcfa515fb10a22f08017",
"text": "This paper reports a qualitative study of thriving older people and illustrates the findings with design fiction. Design research has been criticized as \"solutionist\" i.e. solving problems that don't exist or providing \"quick fixes\" for complex social, political and environmental problems. We respond to this critique by presenting a \"solutionist\" board game used to generate design concepts. Players are given data cards and technology dice, they move around the board by pitching concepts that would support positive aging. We argue that framing concept design as a solutionist game explicitly foregrounds play, irony and the limitations of technological intervention. Three of the game concepts are presented as design fictions in the form of advertisements for products and services that do not exist. The paper argues that design fiction can help create a space for design beyond solutionism.",
"title": ""
}
] | scidocsrr |
c205d05981a16dc9ba2c9e74a009d8db | Neural Cryptanalysis of Classical Ciphers | [
{
"docid": "ff10bbde3ed18eea73375540135f99f4",
"text": "Recurrent neural networks (RNNs) represent the state of the art in translation, image captioning, and speech recognition. They are also capable of learning algorithmic tasks such as long addition, copying, and sorting from a set of training examples. We demonstrate that RNNs can learn decryption algorithms – the mappings from plaintext to ciphertext – for three polyalphabetic ciphers (Vigenere, Autokey, and Enigma). Most notably, we demonstrate that an RNN with a 3000-unit Long Short-Term Memory (LSTM) cell can learn the decryption function of the Enigma machine. We argue that our model learns efficient internal representations of these ciphers 1) by exploring activations of individual memory neurons and 2) by comparing memory usage across the three ciphers. To be clear, our work is not aimed at ’cracking’ the Enigma cipher. However, we do show that our model can perform elementary cryptanalysis by running known-plaintext attacks on the Vigenere and Autokey ciphers. Our results indicate that RNNs can learn algorithmic representations of black box polyalphabetic ciphers and that these representations are useful for cryptanalysis.",
"title": ""
},
{
"docid": "f8f1e4f03c6416e9d9500472f5e00dbe",
"text": "Template attack is the most common and powerful profiled side channel attack. It relies on a realistic assumption regarding the noise of the device under attack: the probability density function of the data is a multivariate Gaussian distribution. To relax this assumption, a recent line of research has investigated new profiling approaches mainly by applying machine learning techniques. The obtained results are commensurate, and in some particular cases better, compared to template attack. In this work, we propose to continue this recent line of research by applying more sophisticated profiling techniques based on deep learning. Our experimental results confirm the overwhelming advantages of the resulting new attacks when targeting both unprotected and protected cryptographic implementations.",
"title": ""
}
] | [
{
"docid": "2679d251d413adf208cb8b764ce55468",
"text": "We compare variations of string comparators based on the Jaro-Winkler comparator and edit distance comparator. We apply the comparators to Census data to see which are better classifiers for matches and nonmatches, first by comparing their classification abilities using a ROC curve based analysis, then by considering a direct comparison between two candidate comparators in record linkage results.",
"title": ""
},
{
"docid": "e0ec22fcdc92abe141aeb3fa67e9e55a",
"text": "A mobile wireless infrastructure-less network is a collection of wireless mobile nodes dynamically forming a temporary network without the use of any preexisting network infrastructure or centralized administration. However, the battery life of these nodes is very limited, if their battery power is depleted fully, then this result in network partition, so these nodes becomes a critical spot in the network. These critical nodes can deplete their battery power earlier because of excessive load and processing for data forwarding. These unbalanced loads turn to increase the chances of nodes failure, network partition and reduce the route lifetime and route reliability of the MANETs. Due to this, energy consumption issue becomes a vital research topic in wireless infrastructure -less networks. The energy efficient routing is a most important design criterion for MANETs. This paper focuses of the routing approaches are based on the minimization of energy consum ption of individual nodes and many other ways. This paper surveys and classifies numerous energy-efficient routing mechanisms proposed for wireless infrastructure-less networks. Also presents detailed comparative study of lager number of energy efficient/power aware routing protocol in MANETs. Aim of this paper to helps the new researchers and application developers to explore an innovative idea for designing more efficient routing protocols. Keywords— Ad hoc Network Routing, Load Distribution, Energy Eff icient, Power Aware, Protocol Stack",
"title": ""
},
{
"docid": "1ee1adcfd73e9685eab4e2abd28183c7",
"text": "We describe an algorithm for generating spherical mosaics from a collection of images acquired from a common optical center. The algorithm takes as input an arbitrary number of partially overlapping images, an adjacency map relating the images, initial estimates of the rotations relating each image to a specified base image, and approximate internal calibration information for the camera. The algorithm's output is a rotation relating each image to the base image, and revised estimates of the camera's internal parameters. Our algorithm is novel in the following respects. First, it requires no user input. (Our image capture instrumentation provides both an adjacency map for the mosaic, and an initial rotation estimate for each image.) Second, it optimizes an objective function based on a global correlation of overlapping image regions. Third, our representation of rotations significantly increases the accuracy of the optimization. Finally, our representation and use of adjacency information guarantees globally consistent rotation estimates. The algorithm has proved effective on a collection of nearly four thousand images acquired from more than eighty distinct optical centers. The experimental results demonstrate that the described global optimization strategy is superior to non-global aggregation of pair-wise correlation terms, and that it successfully generates high-quality mosaics despite significant error in initial rotation estimates.",
"title": ""
},
{
"docid": "1e31afb6d28b0489e67bb63d4dd60204",
"text": "An educational use of Pepper, a personal robot that was developed by SoftBank Robotics Corp. and Aldebaran Robotics SAS, is described. Applying the two concepts of care-receiving robot (CRR) and total physical response (TPR) into the design of an educational application using Pepper, we offer a scenario in which children learn together with Pepper at their home environments from a human teacher who gives a lesson from a remote classroom. This paper is a case report that explains the developmental process of the application that contains three educational programs that children can select in interacting with Pepper. Feedbacks and knowledge obtained from test trials are also described.",
"title": ""
},
{
"docid": "a112a01246256e38b563f616baf02cef",
"text": "This is the second of two papers describing a procedure for the three dimensional nonlinear timehistory analysis of steel framed buildings. An overview of the procedure and the theory for the panel zone element and the plastic hinge beam element are presented in Part I. In this paper, the theory for an efficient new element for modeling beams and columns in steel frames called the elastofiber element is presented, along with four illustrative examples. The elastofiber beam element is divided into three segments two end nonlinear segments and an interior elastic segment. The cross-sections of the end segments are subdivided into fibers. Associated with each fiber is a nonlinear hysteretic stress-strain law for axial stress and strain. This accounts for coupling of nonlinear material behavior between bending about the major and minor axes of the cross-section and axial deformation. Examples presented include large deflection of an elastic cantilever, cyclic loading of a cantilever beam, pushover analysis of a 20-story steel moment-frame building to collapse, and strong ground motion analysis of a 2-story unsymmetric steel moment-frame building. 1Post-Doctoral Scholar, Seismological Laboratory, MC 252-21, California Institute of Technology, Pasadena, CA91125. Email: [email protected] 2Professor, Civil Engineering and Applied Mechanics, MC 104-44, California Institute of Technology, Pasadena, CA-91125",
"title": ""
},
{
"docid": "429c6591223007b40ef7bffc5d9ac4db",
"text": "A compact dual-polarized double E-shaped patch antenna with high isolation for pico base station applications is presented in this communication. The proposed antenna employs a stacked configuration composed of two layers of substrate. Two modified E-shaped patches are printed orthogonally on both sides of the upper substrate. Two probes are used to excite the E-shaped patches, and each probe is connected to one patch separately. A circular patch is printed on the lower substrate to broaden the impedance bandwidth. Both simulated and measured results show that the proposed antenna has a port isolation higher than 30 dB over the frequency band of 2.5 GHz - 2.7 GHz, while the return loss is less than - 15 dB within the band. Moreover, stable radiation pattern with a peak gain of 6.8 dBi - 7.4 dBi is obtained within the band.",
"title": ""
},
{
"docid": "7adf46bb0a4ba677e58aee9968d06293",
"text": "BACKGROUND\nWork-family conflict is a type of interrole conflict that occurs as a result of incompatible role pressures from the work and family domains. Work role characteristics that are associated with work demands refer to pressures arising from excessive workload and time pressures. Literature suggests that work demands such as number of hours worked, workload, shift work are positively associated with work-family conflict, which, in turn is related to poor mental health and negative organizational attitudes. The role of social support has been an issue of debate in the literature. This study examined social support both as a moderator and a main effect in the relationship among work demands, work-to-family conflict, and satisfaction with job and life.\n\n\nOBJECTIVES\nThis study examined the extent to which work demands (i.e., work overload, irregular work schedules, long hours of work, and overtime work) were related to work-to-family conflict as well as life and job satisfaction of nurses in Turkey. The role of supervisory support in the relationship among work demands, work-to-family conflict, and satisfaction with job and life was also investigated.\n\n\nDESIGN AND METHODS\nThe sample was comprised of 243 participants: 106 academic nurses (43.6%) and 137 clinical nurses (56.4%). All of the respondents were female. The research instrument was a questionnaire comprising nine parts. The variables were measured under four categories: work demands, work support (i.e., supervisory support), work-to-family conflict and its outcomes (i.e., life and job satisfaction).\n\n\nRESULTS\nThe structural equation modeling results showed that work overload and irregular work schedules were the significant predictors of work-to-family conflict and that work-to-family conflict was associated with lower job and life satisfaction. Moderated multiple regression analyses showed that social support from the supervisor did not moderate the relationships among work demands, work-to-family conflict, and satisfaction with job and life. Exploratory analyses suggested that social support could be best conceptualized as the main effect directly influencing work-to-family conflict and job satisfaction.\n\n\nCONCLUSION\nNurses' psychological well-being and organizational attitudes could be enhanced by rearranging work conditions to reduce excessive workload and irregular work schedule. Also, leadership development programs should be implemented to increase the instrumental and emotional support of the supervisors.",
"title": ""
},
{
"docid": "97f748ee5667ee8c2230e07881574c22",
"text": "The most widely used signal in clinical practice is the ECG. ECG conveys information regarding the electrical function of the heart, by altering the shape of its constituent waves, namely the P, QRS, and T waves. Thus, the required tasks of ECG processing are the reliable recognition of these waves, and the accurate measurement of clinically important parameters measured from the temporal distribution of the ECG constituent waves. In this paper, we shall review some current trends on ECG pattern recognition. In particular, we shall review non-linear transformations of the ECG, the use of principal component analysis (linear and non-linear), ways to map the transformed data into n-dimensional spaces, and the use of neural networks (NN) based techniques for ECG pattern recognition and classification. The problems we shall deal with are the QRS/PVC recognition and classification, the recognition of ischemic beats and episodes, and the detection of atrial fibrillation. Finally, a generalised approach to the classification problems in n-dimensional spaces will be presented using among others NN, radial basis function networks (RBFN) and non-linear principal component analysis (NLPCA) techniques. The performance measures of the sensitivity and specificity of these algorithms will also be presented using as training and testing data sets from the MIT-BIH and the European ST-T databases.",
"title": ""
},
{
"docid": "f9468884fd24ff36b81fc2016a519634",
"text": "We study a new variant of Arikan's successive cancellation decoder (SCD) for polar codes. We first propose a new decoding algorithm on a new decoder graph, where the various stages of the graph are permuted. We then observe that, even though the usage of the permuted graph doesn't affect the encoder, it can significantly affect the decoding performance of a given polar code. The new permuted successive cancellation decoder (PSCD) typically exhibits a performance degradation, since the polar code is optimized for the standard SCD. We then present a new polar code construction rule matched to the PSCD and show their performance in simulations. For all rates we observe that the polar code matched to a given PSCD performs the same as the original polar code with the standard SCD. We also see that a PSCD with a reversal permutation can lead to a natural decoding order, avoiding the standard bit-reversal decoding order in SCD without any loss in performance.",
"title": ""
},
{
"docid": "101af3fab1f8abb4e2b75a067031048a",
"text": "Although research on trust in an organizational context has advanced considerably in recent years, the literature has yet to produce a set of generalizable propositions that inform our understanding of the organization and coordination of work. We propose that conceptualizing trust as an organizing principle is a powerful way of integrating the diverse trust literature and distilling generalizable implications for how trust affects organizing. We develop the notion of trust as an organizing principle by specifying structuring and mobilizing as two sets of causal pathways through which trust influences several important properties of organizations. We further describe specific mechanisms within structuring and mobilizing that influence interaction patterns and organizational processes. The principal aim of the framework is to advance the literature by connecting the psychological and sociological micro-foundations of trust with the macro-bases of organizing. The paper concludes by demonstrating how the framework can be applied to yield novel insights into traditional views of organizations and to stimulate original and innovative avenues of organizational research that consider both the benefits and downsides of trust. (Trust; Organizing Principle; Structuring; Mobilizing) Introduction In the introduction to this special issue we observed that empirical research on trust was not keeping pace with theoretical developments in the field. We viewed this as a significant limitation and surmised that a special issue devoted to empirical research on trust would serve as a valuable vehicle for advancing the literature. In addition to the lack of empirical research, we would also make the observation that theories and evidence accumulating on trust in organizations is not well integrated and that the literature as a whole lacks coherence. At a general level, extant research provides “accumulating evidence that trust has a number of important benefits for organizations and their members” (Kramer 1999, p. 569). More specifically, Dirks and Ferrin’s (2001) review of the literature points to two distinct means through which trust generates these benefits. The dominant approach emphasizes the direct effects that trust has on important organizational phenomena such as: communication, conflict management, negotiation processes, satisfaction, and performance (both individual and unit). A second, less well studied, perspective points to the enabling effects of trust, whereby trust creates or enhances the conditions, such as positive interpretations of another’s behavior, that are conducive to obtaining organizational outcomes like cooperation and higher performance. The identification of these two perspectives provides a useful way of organizing the literature and generating insight into the mechanisms through which trust influences organizational outcomes. However, we are still left with a set of findings that have yet to be integrated on a theoretical level in a way that yields a set of generalizable propositions about the effects of trust on organizing. We believe this is due to the fact that research has, for the most part, embedded trust into existing theories. As a result, trust has been studied in a variety of different ways to address a wide range of organizational questions. This has yielded a diverse and eclectic body of knowledge about the relationship between trust and various organizational outcomes. 
At the same time, this approach has resulted in a somewhat fragmented view of the role of trust in an organizational context as a whole. In the remainder of this paper we begin to address the challenge of integrating the fragmented trust literature. While it is not feasible to develop a comprehensive framework that synthesizes the vast and diverse trust literature in a single paper, we draw together several key strands that relate to the organizational context. In particular, our paper aims to advance the literature by connecting the psychological and sociological microfoundations of trust with the macro-bases of organizing. Specifically, we propose that reconceptualizing trust as an organizing principle is a fruitful way of viewing the role of trust and comprehending how research on trust advances our understanding of the organization and coordination of economic activity. While it is our goal to generate a framework that coalesces our thinking about the processes through which trust, as an organizing principle, affects organizational life, we are not Pollyannish: trust indubitably has a down side, which has been little researched. We begin by elaborating on the notion of an organizing principle and then move on to conceptualize trust from this perspective. Next, we describe a set of generalizable causal pathways through which trust affects organizing. We then use that framework to identify some exemplars of possible research questions and to point to possible downsides of trust. Organizing Principles As Ouchi (1980) discusses, a fundamental purpose of organizations is to attain goals that require coordinated efforts. Interdependence and uncertainty make goal attainment more difficult and create the need for organizational solutions. The subdivision of work implies that actors must exchange information and rely on others to accomplish organizational goals without having complete control over, or being able to fully monitor, others' behaviors. Coordinating actions is further complicated by the fact that actors cannot assume that their interests and goals are perfectly aligned. Consequently, relying on others is difficult when there is uncertainty about their intentions, motives, and competencies. Managing interdependence among individuals, units, and activities in the face of behavioral uncertainty constitutes a key organizational challenge. Organizing principles represent a way of solving the problem of interdependence and uncertainty. An organizing principle is the logic by which work is coordinated and information is gathered, disseminated, and processed within and between organizations (Zander and Kogut 1995). An organizing principle represents a heuristic for how actors interpret and represent information and how they select appropriate behaviors and routines for coordinating actions. Examples of organizing principles include: market, hierarchy, and clan (Ouchi 1980). Others have referred to these organizing principles as authority, price, and norms (Adler 2001, Bradach and Eccles 1989, Powell 1990). Each of these principles operates on the basis of distinct mechanisms that orient, enable, and constrain economic behavior. For instance, authority as an organizing principle solves the problem of coordinating action in the face of interdependence and uncertainty by reallocating decision-making rights (Simon 1957, Coleman 1990). 
Price-based organizing principles revolve around the idea of making coordination advantageous for each party involved by aligning incentives (Hayek 1948, Alchian and Demsetz 1972). Compliance to internalized norms and the resulting self-control of the clan form is another organizing principle that has been identified as a means of achieving coordinated action (Ouchi 1980). We propose that trust is also an organizing principle and that conceptualizing trust in this way provides a powerful means of integrating the disparate research on trust and distilling generalizable implications for how trust affects organizing. We view trust as most closely related to the clan organizing principle. By definition clans rely on trust (Ouchi 1980). However, trust can and does occur in organizational contexts outside of clans. For instance, there are a variety of organizational arrangements where cooperation in mixed-motive situations depends on trust, such as in repeated strategic alliances (Gulati 1995), buyer-supplier relationships (Dyer and Chu this issue), and temporary groups in organizations (Meyerson et al. 1996). More generally, we believe that trust frequently operates in conjunction with other organizing principles. For instance, Dirks (2000) found that while authority is important for behaviors that can be observed or controlled, trust is important when there exists performance ambiguity or behaviors that cannot be observed or controlled. Because most organizations have a combination of behaviors that can and cannot be observed or controlled, authority and trust co-occur. More generally, we believe that mixed or plural forms are the norm, consistent with Bradach and Eccles (1989). In some situations, however, trust may be the primary organizing principle, such as when monitoring and formal controls are difficult and costly to use. In these cases, trust represents an efficient choice. In other situations, trust may be relied upon due to social, rather than efficiency, considerations. For instance, achieving a sense of personal belonging within a collectivity (Podolny and Barron 1997) and the desire to develop and maintain rewarding social attachments (Granovetter 1985) may serve as the impetus for relying on trust as an organizing principle. Trust as an Organizing Principle At a general level trust is the willingness to accept vulnerability based on positive expectations about another's intentions or behaviors (Mayer et al. 1995, Rousseau et al. 1998). Because trust represents a positive assumption about the motives and intentions of another party, it allows people to economize on information processing and safeguarding behaviors. By representing an expectation that others will act in a way that serves, or at least is not inimical to, one's interests (Gambetta 1988), trust as a heuristic is a frame of reference that al",
"title": ""
},
{
"docid": "13897df01d4c03191dd015a04c3a5394",
"text": "Medical or Health related search queries constitute a significant portion of the total number of queries searched everyday on the web. For health queries, the authenticity or authoritativeness of search results is of utmost importance besides relevance. So far, research in automatic detection of authoritative sources on the web has mainly focused on a) link structure based approaches and b) supervised approaches for predicting trustworthiness. However, the aforementioned approaches have some inherent limitations. For example, several content farm and low quality sites artificially boost their link-based authority rankings by forming a syndicate of highly interlinked domains and content which is algorithmically hard to detect. Moreover, the number of positively labeled training samples available for learning trustworthiness is also limited when compared to the size of the web. In this paper, we propose a novel unsupervised approach to detect and promote authoritative domains in health segment using click-through data. We argue that standard IR metrics such as NDCG are relevance-centric and hence are not suitable for evaluating authority. We propose a new authority-centric evaluation metric based on side-by-side judgment of results. Using real world search query sets, we evaluate our approach both quantitatively and qualitatively and show that it succeeds in significantly improving the authoritativeness of results when compared to a standard web ranking baseline. ∗Corresponding Author",
"title": ""
},
{
"docid": "07570935aad8a481ea5e9d422c4f80ca",
"text": "Continuous modification of the protein composition at synapses is a driving force for the plastic changes of synaptic strength, and provides the fundamental molecular mechanism of synaptic plasticity and information storage in the brain. Studying synaptic protein turnover is not only important for understanding learning and memory, but also has direct implication for understanding pathological conditions like aging, neurodegenerative diseases, and psychiatric disorders. Proteins involved in synaptic transmission and synaptic plasticity are typically concentrated at synapses of neurons and thus appear as puncta (clusters) in immunofluorescence microscopy images. Quantitative measurement of the changes in puncta density, intensity, and sizes of specific proteins provide valuable information on their function in synaptic transmission, circuit development, synaptic plasticity, and synaptopathy. Unfortunately, puncta quantification is very labor intensive and time consuming. In this article, we describe a software tool designed for the rapid semi-automatic detection and quantification of synaptic protein puncta from 2D immunofluorescence images generated by confocal laser scanning microscopy. The software, dubbed as SynPAnal (for Synaptic Puncta Analysis), streamlines data quantification for puncta density and average intensity, thereby increases data analysis throughput compared to a manual method. SynPAnal is stand-alone software written using the JAVA programming language, and thus is portable and platform-free.",
"title": ""
},
{
"docid": "b4f82364c5c4900058f50325ccc9e4c4",
"text": "OBJECTIVE\nThis study reports the psychometric properties of the 24-item version of the Diabetes Knowledge Questionnaire (DKQ).\n\n\nRESEARCH DESIGN AND METHODS\nThe original 60-item DKQ was administered to 502 adult Mexican-Americans with type 2 diabetes who are part of the Starr County Diabetes Education Study. The sample was composed of 252 participants and 250 support partners. The subjects were randomly assigned to the educational and social support intervention (n = 250) or to the wait-listed control group (n = 252). A shortened 24-item version of the DKQ was derived from the original instrument after data collection was completed. Reliability was assessed by means of Cronbach's coefficient alpha. To determine validity, differentiation between the experimental and control groups was conducted at baseline and after the educational portion of the intervention.\n\n\nRESULTS\nThe 24-item version of the DKQ (DKQ-24) attained a reliability coefficient of 0.78, indicating internal consistency, and showed sensitivity to the intervention, suggesting construct validation.\n\n\nCONCLUSIONS\nThe DKQ-24 is a reliable and valid measure of diabetes-related knowledge that is relatively easy to administer to either English or Spanish speakers.",
"title": ""
},
{
"docid": "8b2b8eb2d16b28dac8ec8d4572b8db0e",
"text": "Combining meaning, memory, and development, the perennially popular topic of intuition can be approached in a new way. Fuzzy-trace theory integrates these topics by distinguishing between meaning-based gist representations, which support fuzzy (yet advanced) intuition, and superficial verbatim representations of information, which support precise analysis. Here, I review the counterintuitive findings that led to the development of the theory and its most recent extensions to the neuroscience of risky decision making. These findings include memory interference (worse verbatim memory is associated with better reasoning); nonnumerical framing (framing effects increase when numbers are deleted from decision problems); developmental decreases in gray matter and increases in brain connectivity; developmental reversals in memory, judgment, and decision making (heuristics and biases based on gist increase from childhood to adulthood, challenging conceptions of rationality); and selective attention effects that provide critical tests comparing fuzzy-trace theory, expected utility theory, and its variants (e.g., prospect theory). Surprising implications for judgment and decision making in real life are also discussed, notably, that adaptive decision making relies mainly on gist-based intuition in law, medicine, and public health.",
"title": ""
},
{
"docid": "fb58d6fe77092be4bce5dd0926c563de",
"text": "We present the Mind the Gap Model (MGM), an approach for interpretable feature extraction and selection. By placing interpretability criteria directly into the model, we allow for the model to both optimize parameters related to interpretability and to directly report a global set of distinguishable dimensions to assist with further data exploration and hypothesis generation. MGM extracts distinguishing features on real-world datasets of animal features, recipes ingredients, and disease co-occurrence. It also maintains or improves performance when compared to related approaches. We perform a user study with domain experts to show the MGM’s ability to help with dataset exploration.",
"title": ""
},
{
"docid": "6c221c4085c6868640c236b4dd72f777",
"text": "Resilience has been most frequently defined as positive adaptation despite adversity. Over the past 40 years, resilience research has gone through several stages. From an initial focus on the invulnerable or invincible child, psychologists began to recognize that much of what seems to promote resilience originates outside of the individual. This led to a search for resilience factors at the individual, family, community - and, most recently, cultural - levels. In addition to the effects that community and culture have on resilience in individuals, there is growing interest in resilience as a feature of entire communities and cultural groups. Contemporary researchers have found that resilience factors vary in different risk contexts and this has contributed to the notion that resilience is a process. In order to characterize the resilience process in a particular context, it is necessary to identify and measure the risk involved and, in this regard, perceived discrimination and historical trauma are part of the context in many Aboriginal communities. Researchers also seek to understand how particular protective factors interact with risk factors and with other protective factors to support relative resistance. For this purpose they have developed resilience models of three main types: \"compensatory,\" \"protective,\" and \"challenge\" models. Two additional concepts are resilient reintegration, in which a confrontation with adversity leads individuals to a new level of growth, and the notion endorsed by some Aboriginal educators that resilience is an innate quality that needs only to be properly awakened.The review suggests five areas for future research with an emphasis on youth: 1) studies to improve understanding of what makes some Aboriginal youth respond positively to risk and adversity and others not; 2) case studies providing empirical confirmation of the theory of resilient reintegration among Aboriginal youth; 3) more comparative studies on the role of culture as a resource for resilience; 4) studies to improve understanding of how Aboriginal youth, especially urban youth, who do not live in self-governed communities with strong cultural continuity can be helped to become, or remain, resilient; and 5) greater involvement of Aboriginal researchers who can bring a nonlinear world view to resilience research.",
"title": ""
},
{
"docid": "4c4bfcadd71890ccce9e58d88091f6b3",
"text": "With the dramatic growth of the game industry over the past decade, its rapid inclusion in many sectors of today’s society, and the increased complexity of games, game development has reached a point where it is no longer humanly possible to use only manual techniques to create games. Large parts of games need to be designed, built, and tested automatically. In recent years, researchers have delved into artificial intelligence techniques to support, assist, and even drive game development. Such techniques include procedural content generation, automated narration, player modelling and adaptation, and automated game design. This research is still very young, but already the games industry is taking small steps to integrate some of these techniques in their approach to design. The goal of this seminar was to bring together researchers and industry representatives who work at the forefront of artificial intelligence (AI) and computational intelligence (CI) in games, to (1) explore and extend the possibilities of AI-driven game design, (2) to identify the most viable applications of AI-driven game design in the game industry, and (3) to investigate new approaches to AI-driven game design. To this end, the seminar included a wide range of researchers and developers, including specialists in AI/CI for abstract games, commercial video games, and serious games. Thus, it fostered a better understanding of and unified vision on AI-driven game design, using input from both scientists as well as AI specialists from industry. Seminar November 19–24, 2017 – http://www.dagstuhl.de/17471 1998 ACM Subject Classification I.2.1 Artificial Intelligence Games",
"title": ""
},
{
"docid": "da61b8bd6c1951b109399629f47dad16",
"text": "In this paper, we introduce an approach for distributed nonlinear control of multiple hovercraft-type underactuated vehicles with bounded and unidirectional inputs. First, a bounded nonlinear controller is given for stabilization and tracking of a single vehicle, using a cascade backstepping method. Then, this controller is combined with a distributed gradient-based control for multi-vehicle formation stabilization using formation potential functions previously constructed. The vehicles are used in the Caltech Multi-Vehicle Wireless Testbed (MVWT). We provide simulation and experimental results for stabilization and tracking of a single vehicle, and a simulation of stabilization of a six-vehicle formation, demonstrating that in all cases the control bounds and the control objective are satisfied.",
"title": ""
},
{
"docid": "48b88774957a6d30ae9d0a97b9643647",
"text": "The defect detection on manufactures is extremely important in the optimization of industrial processes; particularly, the visual inspection plays a fundamental role. The visual inspection is often carried out by a human expert. However, new technology features have made this inspection unreliable. For this reason, many researchers have been engaged to develop automatic analysis processes of manufactures and automatic optical inspections in the industrial production of printed circuit boards. Among the defects that could arise in this industrial process, those of the solder joints are very important, because they can lead to an incorrect functioning of the board; moreover, the amount of the solder paste can give some information on the quality of the industrial process. In this paper, a neural network-based automatic optical inspection system for the diagnosis of solder joint defects on printed circuit boards assembled in surface mounting technology is presented. The diagnosis is handled as a pattern recognition problem with a neural network approach. Five types of solder joints have been classified in respect to the amount of solder paste in order to perform the diagnosis with a high recognition rate and a detailed classification able to give information on the quality of the manufacturing process. The images of the boards under test are acquired and then preprocessed to extract the region of interest for the diagnosis. Three types of feature vectors are evaluated from each region of interest, which are the images of the solder joints under test, by exploiting the properties of the wavelet transform and the geometrical characteristics of the preprocessed images. The performances of three different classifiers which are a multilayer perceptron, a linear vector quantization, and a K-nearest neighbor classifier are compared. The n-fold cross-validation has been exploited to select the best architecture for the neural classifiers, while a number of experiments have been devoted to estimating the best value of K in the K-NN. The results have proved that the MLP network fed with the GW-features has the best recognition rate. This approach allows to carry out the diagnosis burden on image processing, feature extraction, and classification algorithms, reducing the cost and the complexity of the acquisition system. In fact, the experimental results suggest that the reason for the high recognition rate in the solder joint classification is due to the proper preprocessing steps followed as well as to the information contents of the features",
"title": ""
},
{
"docid": "80a4de6098a4821e52ccc760db2aae18",
"text": "This article presents P-Sense, a participatory sensing application for air pollution monitoring and control. The paper describes in detail the system architecture and individual components of a successfully implemented application. In addition, the paper points out several other research-oriented problems that need to be addressed before these applications can be effectively implemented in practice, in a large-scale deployment. Security, privacy, data visualization and validation, and incentives are part of our work-in-progress activities",
"title": ""
}
] | scidocsrr |
ab640c04dd25df53ae412ac5ce28c102 | Neural Stance Detectors for Fake News Challenge | [
{
"docid": "0201a5f0da2430ec392284938d4c8833",
"text": "Natural language sentence matching is a fundamental technology for a variety of tasks. Previous approaches either match sentences from a single direction or only apply single granular (wordby-word or sentence-by-sentence) matching. In this work, we propose a bilateral multi-perspective matching (BiMPM) model. Given two sentences P and Q, our model first encodes them with a BiLSTM encoder. Next, we match the two encoded sentences in two directions P against Q and Q against P . In each matching direction, each time step of one sentence is matched against all timesteps of the other sentence from multiple perspectives. Then, another BiLSTM layer is utilized to aggregate the matching results into a fixed-length matching vector. Finally, based on the matching vector, a decision is made through a fully connected layer. We evaluate our model on three tasks: paraphrase identification, natural language inference and answer sentence selection. Experimental results on standard benchmark datasets show that our model achieves the state-of-the-art performance on all tasks.",
"title": ""
},
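The BiMPM passage above matches two encoded sentences from multiple perspectives. As a rough illustration only, the sketch below implements the multi-perspective cosine matching operation that underlies such matchers in plain NumPy; the perspective weight matrix W and the toy vectors are stand-ins for learned parameters and encoder outputs, and this is not the full BiMPM model.

```python
# Minimal sketch (assumption-laden): the multi-perspective cosine matching
# m_k = cosine(W_k * v1, W_k * v2) used by BiMPM-style matchers.
# W and the toy vectors below are illustrative only, not learned values.
import numpy as np

def multi_perspective_cosine(v1, v2, W, eps=1e-8):
    """Return an l-dimensional matching vector for two d-dimensional vectors.

    W has shape (l, d): one elementwise weighting per perspective.
    """
    a = W * v1            # shape (l, d): v1 re-weighted per perspective
    b = W * v2            # shape (l, d): v2 re-weighted per perspective
    num = (a * b).sum(axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + eps
    return num / den

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, l = 8, 4                      # hidden size and number of perspectives
    W = rng.random((l, d))           # stands in for learned perspective weights
    h_p = rng.normal(size=d)         # one time step of sentence P's encoding
    h_q = rng.normal(size=d)         # one time step of sentence Q's encoding
    print(multi_perspective_cosine(h_p, h_q, W))
```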
{
"docid": "a0e4080652269445c6e36b76d5c8cd09",
"text": "Enabling a computer to understand a document so that it can answer comprehension questions is a central, yet unsolved goal of NLP. A key factor impeding its solution by machine learned systems is the limited availability of human-annotated data. Hermann et al. (2015) seek to solve this problem by creating over a million training examples by pairing CNN and Daily Mail news articles with their summarized bullet points, and show that a neural network can then be trained to give good performance on this task. In this paper, we conduct a thorough examination of this new reading comprehension task. Our primary aim is to understand what depth of language understanding is required to do well on this task. We approach this from one side by doing a careful hand-analysis of a small subset of the problems and from the other by showing that simple, carefully designed systems can obtain accuracies of 72.4% and 75.8% on these two datasets, exceeding current state-of-the-art results by over 5% and approaching what we believe is the ceiling for performance on this task.1",
"title": ""
}
] | [
{
"docid": "c4f0e371ea3950e601f76f8d34b736e3",
"text": "Discretization is an essential preprocessing technique used in many knowledge discovery and data mining tasks. Its main goal is to transform a set of continuous attributes into discrete ones, by associating categorical values to intervals and thus transforming quantitative data into qualitative data. In this manner, symbolic data mining algorithms can be applied over continuous data and the representation of information is simplified, making it more concise and specific. The literature provides numerous proposals of discretization and some attempts to categorize them into a taxonomy can be found. However, in previous papers, there is a lack of consensus in the definition of the properties and no formal categorization has been established yet, which may be confusing for practitioners. Furthermore, only a small set of discretizers have been widely considered, while many other methods have gone unnoticed. With the intention of alleviating these problems, this paper provides a survey of discretization methods proposed in the literature from a theoretical and empirical perspective. From the theoretical perspective, we develop a taxonomy based on the main properties pointed out in previous research, unifying the notation and including all the known methods up to date. Empirically, we conduct an experimental study in supervised classification involving the most representative and newest discretizers, different types of classifiers, and a large number of data sets. The results of their performances measured in terms of accuracy, number of intervals, and inconsistency have been verified by means of nonparametric statistical tests. Additionally, a set of discretizers are highlighted as the best performing ones.",
"title": ""
},
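The survey above covers many discretizers; as a concrete toy example, the sketch below shows unsupervised equal-frequency binning, one of the simple baselines such surveys include. It is not a method specific to the passage, and the bin count and sample data are assumptions.

```python
# Minimal sketch of unsupervised equal-frequency discretization, a common
# baseline covered by discretization surveys (not a specific method from the
# passage). Quantile cut points map a continuous attribute to interval labels.
import numpy as np

def equal_frequency_bins(values, n_bins=4):
    """Return integer bin labels and the interior cut points."""
    values = np.asarray(values, dtype=float)
    quantiles = np.linspace(0, 1, n_bins + 1)[1:-1]   # interior quantile levels
    cuts = np.quantile(values, quantiles)             # sorted cut points
    labels = np.searchsorted(cuts, values, side="right")
    return labels, cuts

if __name__ == "__main__":
    x = np.array([1.2, 3.4, 2.2, 8.9, 5.5, 0.3, 7.1, 4.8])
    labels, cuts = equal_frequency_bins(x, n_bins=4)
    print("cut points:", cuts)
    print("labels:   ", labels)
```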
{
"docid": "c5122000c9d8736cecb4d24e6f56aab8",
"text": "New credit cards containing Europay, MasterCard and Visa (EMV) chips for enhanced security used in-store purchases rather than online purchases have been adopted considerably. EMV supposedly protects the payment cards in such a way that the computer chip in a card referred to as chip-and-pin cards generate a unique one time code each time the card is used. The one time code is designed such that if it is copied or stolen from the merchant system or from the system terminal cannot be used to create a counterfeit copy of that card or counterfeit chip of the transaction. However, in spite of this design, EMV technology is not entirely foolproof from failure. In this paper we discuss the issues, failures and fraudulent cases associated with EMV Chip-And-Card technology.",
"title": ""
},
{
"docid": "0d8c38444954a0003117e7334195cb00",
"text": "Although mature technologies exist for acquiring images, geometry, and normals of small objects, they remain cumbersome and time-consuming for non-experts to employ on a large scale. In an archaeological setting, a practical acquisition system for routine use on every artifact and fragment would open new possibilities for archiving, analysis, and dissemination. We present an inexpensive system for acquiring all three types of information, and associated metadata, for small objects such as fragments of wall paintings. The acquisition system requires minimal supervision, so that a single, non-expert user can scan at least 10 fragments per hour. To achieve this performance, we introduce new algorithms to robustly and automatically align range scans, register 2-D scans to 3-D geometry, and compute normals from 2-D scans. As an illustrative application, we present a novel 3-D matching algorithm that efficiently searches for matching fragments using the scanned geometry.",
"title": ""
},
{
"docid": "bccb8e4cf7639dbcd3896e356aceec8d",
"text": "Over 50 million people worldwide suffer from epilepsy. Traditional diagnosis of epilepsy relies on tedious visual screening by highly trained clinicians from lengthy EEG recording that contains the presence of seizure (ictal) activities. Nowadays, there are many automatic systems that can recognize seizure-related EEG signals to help the diagnosis. However, it is very costly and inconvenient to obtain long-term EEG data with seizure activities, especially in areas short of medical resources. We demonstrate in this paper that we can use the interictal scalp EEG data, which is much easier to collect than the ictal data, to automatically diagnose whether a person is epileptic. In our automated EEG recognition system, we extract three classes of features from the EEG data and build Probabilistic Neural Networks (PNNs) fed with these features. We optimize the feature extraction parameters and combine these PNNs through a voting mechanism. As a result, our system achieves an impressive 94.07% accuracy.",
"title": ""
},
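The epilepsy passage above trains Probabilistic Neural Networks (PNNs) on EEG-derived features. As a hedged illustration, the sketch below implements a minimal Parzen-window PNN classifier; the Gaussian kernel width and the synthetic feature vectors are assumptions, not the paper's feature extraction or data.

```python
# Minimal sketch of a Probabilistic Neural Network (Parzen-window classifier)
# of the kind the passage trains on EEG features. The kernel width (sigma)
# and the toy feature vectors are illustrative assumptions only.
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    """Classify each test row by the class with the largest average kernel density."""
    X_train = np.asarray(X_train, dtype=float)
    X_test = np.asarray(X_test, dtype=float)
    y_train = np.asarray(y_train)
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        d2 = ((X_train - x) ** 2).sum(axis=1)        # squared Euclidean distances
        k = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian kernel responses
        scores = [k[y_train == c].mean() for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    healthy = rng.normal(0.0, 1.0, size=(20, 3))      # stand-in control features
    epileptic = rng.normal(2.0, 1.0, size=(20, 3))    # stand-in epileptic features
    X = np.vstack([healthy, epileptic])
    y = np.array([0] * 20 + [1] * 20)
    X_new = np.array([[0.1, -0.2, 0.3], [2.1, 1.8, 2.4]])
    print(pnn_predict(X, y, X_new, sigma=0.8))        # expect [0 1]
```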
{
"docid": "129c1b9a723b062a52b821988d124486",
"text": "Modern applications employ text files widely for providing data storage in a readable format for applications ranging from database systems to mobile phones. Traditional text processing tools are built around a byte-at-a-time sequential processing model that introduces significant branch and cache miss penalties. Recent work has explored an alternative, transposed representation of text, Parabix (Parallel Bit Streams), to accelerate scanning and parsing using SIMD facilities. This paper advocates and develops Parabix as a general framework and toolkit, describing the software toolchain and run-time support that allows applications to exploit modern SIMD instructions for high performance text processing. The goal is to generalize the techniques to ensure that they apply across a wide variety of applications and architectures. The toolchain enables the application developer to write constructs assuming unbounded character streams and Parabix's code translator generates code based on machine specifics (e.g., SIMD register widths). The general argument in support of Parabix technology is made by a detailed performance and energy study of XML parsing across a range of processor architectures. Parabix exploits intra-core SIMD hardware and demonstrates 2×-7× speedup and 4× improvement in energy efficiency when compared with two widely used conventional software parsers, Expat and Apache-Xerces. SIMD implementations across three generations of x86 processors are studied including the new SandyBridge. The 256-bit AVX technology in Intel SandyBridge is compared with the well established 128-bit SSE technology to analyze the benefits and challenges of 3-operand instruction formats and wider SIMD hardware. Finally, the XML program is partitioned into pipeline stages to demonstrate that thread-level parallelism enables the application to exploit SIMD units scattered across the different cores, achieving improved performance (2× on 4 cores) while maintaining single-threaded energy levels.",
"title": ""
},
{
"docid": "7fcd8eee5f2dccffd3431114e2b0ed5a",
"text": "Crowdsourcing is becoming more and more important for commercial purposes. With the growth of crowdsourcing platforms like Amazon Mechanical Turk or Microworkers, a huge work force and a large knowledge base can be easily accessed and utilized. But due to the anonymity of the workers, they are encouraged to cheat the employers in order to maximize their income. Thus, this paper we analyze two widely used crowd-based approaches to validate the submitted work. Both approaches are evaluated with regard to their detection quality, their costs and their applicability to different types of typical crowdsourcing tasks.",
"title": ""
},
{
"docid": "ba4121003eb56d3ab6aebe128c219ab7",
"text": "Mediation is said to occur when a causal effect of some variable X on an outcome Y is explained by some intervening variable M. The authors recommend that with small to moderate samples, bootstrap methods (B. Efron & R. Tibshirani, 1993) be used to assess mediation. Bootstrap tests are powerful because they detect that the sampling distribution of the mediated effect is skewed away from 0. They argue that R. M. Baron and D. A. Kenny's (1986) recommendation of first testing the X --> Y association for statistical significance should not be a requirement when there is a priori belief that the effect size is small or suppression is a possibility. Empirical examples and computer setups for bootstrap analyses are provided.",
"title": ""
},
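The mediation passage above recommends bootstrap tests of the mediated (indirect) effect. The sketch below is a minimal percentile-bootstrap version of that idea for a single mediator; the synthetic data, the number of resamples, and the least-squares estimation details are assumptions for illustration, not the authors' setup.

```python
# Minimal sketch of a percentile-bootstrap test of the mediated effect a*b
# (X -> M -> Y), in the spirit of the passage's recommendation. The synthetic
# data and 2,000 resamples are illustrative assumptions.
import numpy as np

def indirect_effect(x, m, y):
    """Estimate a*b: slope of M on X times slope of Y on M controlling for X."""
    a = np.polyfit(x, m, 1)[0]                        # M = a*X + const
    X2 = np.column_stack([np.ones_like(x), x, m])     # Y = const + c'*X + b*M
    b = np.linalg.lstsq(X2, y, rcond=None)[0][2]
    return a * b

def bootstrap_ci(x, m, y, n_boot=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                   # resample cases with replacement
        boots.append(indirect_effect(x[idx], m[idx], y[idx]))
    lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    n = 80
    x = rng.normal(size=n)
    m = 0.5 * x + rng.normal(scale=1.0, size=n)       # true a = 0.5
    y = 0.4 * m + rng.normal(scale=1.0, size=n)       # true b = 0.4
    lo, hi = bootstrap_ci(x, m, y)
    print(f"95% CI for a*b: [{lo:.3f}, {hi:.3f}]")    # mediation if 0 is excluded
```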
{
"docid": "43e630794f1bce27688d2cedbb19f17d",
"text": "The systematic maintenance of mining machinery and equipment is the crucial factor for the proper functioning of a mine without production process interruption. For high-quality maintenance of the technical systems in mining, it is necessary to conduct a thorough analysis of machinery and accompanying elements in order to determine the critical elements in the system which are prone to failures. The risk assessment of the failures of system parts leads to obtaining precise indicators of failures which are also excellent guidelines for maintenance services. This paper presents a model of the risk assessment of technical systems failure based on the fuzzy sets theory, fuzzy logic and min–max composition. The risk indicators, severity, occurrence and detectability are analyzed. The risk indicators are given as linguistic variables. The model presented was applied for assessing the risk level of belt conveyor elements failure which works in severe conditions in a coal mine. Moreover, this paper shows the advantages of this model when compared to a standard procedure of RPN calculating – in the FMEA method of risk",
"title": ""
},
{
"docid": "7533347e8c5daf17eb09e64db0fa4394",
"text": "Android has become the most popular smartphone operating system. This rapidly increasing adoption of Android has resulted in significant increase in the number of malwares when compared with previous years. There exist lots of antimalware programs which are designed to effectively protect the users’ sensitive data in mobile systems from such attacks. In this paper, our contribution is twofold. Firstly, we have analyzed the Android malwares and their penetration techniques used for attacking the systems and antivirus programs that act against malwares to protect Android systems. We categorize many of the most recent antimalware techniques on the basis of their detection methods. We aim to provide an easy and concise view of the malware detection and protection mechanisms and deduce their benefits and limitations. Secondly, we have forecast Android market trends for the year up to 2018 and provide a unique hybrid security solution and take into account both the static and dynamic analysis an android application. Keywords—Android; Permissions; Signature",
"title": ""
},
{
"docid": "4ac26e974e2d3861659323ae2aa7323c",
"text": "Episacral lipoma is a small, tender subcutaneous nodule primarily occurring over the posterior iliac crest. Episacral lipoma is a significant and treatable cause of acute and chronic low back pain. Episacral lipoma occurs as a result of tears in the thoracodorsal fascia and subsequent herniation of a portion of the underlying dorsal fat pad through the tear. This clinical entity is common, and recognition is simple. The presence of a painful nodule with disappearance of pain after injection with anaesthetic, is diagnostic. Medication and physical therapy may not be effective. Local injection of the nodule with a solution of anaesthetic and steroid is effective in treating the episacral lipoma. Here we describe 2 patients with painful nodules over the posterior iliac crest. One patient complained of severe lower back pain radiating to the left lower extremity and this patient subsequently underwent disc operation. The other patient had been treated for greater trochanteric pain syndrome. In both patients, symptoms appeared to be relieved by local injection of anaesthetic and steroid. Episacral lipoma should be considered during diagnostic workup and in differential diagnosis of acute and chronic low back pain.",
"title": ""
},
{
"docid": "4244af4f70e49c3e08e3943a88c79645",
"text": "From a dynamic system point of view, bat locomotion stands out among other forms of flight. During a large part of bat wingbeat cycle the moving body is not in a static equilibrium. This is in sharp contrast to what we observe in other simpler forms of flight such as insects, which stay at their static equilibrium. Encouraged by biological examinations that have revealed bats exhibit periodic and stable limit cycles, this work demonstrates that one effective approach to stabilize articulated flying robots with bat morphology is locating feasible limit cycles for these robots; then, designing controllers that retain the closed-loop system trajectories within a bounded neighborhood of the designed periodic orbits. This control design paradigm has been evaluated in practice on a recently developed bio-inspired robot called Bat Bot (B2).",
"title": ""
},
{
"docid": "79833f074b2e06d5c56898ca3f008c00",
"text": "Regular expressions have served as the dominant workhorse of practical information extraction for several years. However, there has been little work on reducing the manual effort involved in building high-quality, complex regular expressions for information extraction tasks. In this paper, we propose ReLIE, a novel transformation-based algorithm for learning such complex regular expressions. We evaluate the performance of our algorithm on multiple datasets and compare it against the CRF algorithm. We show that ReLIE, in addition to being an order of magnitude faster, outperforms CRF under conditions of limited training data and cross-domain data. Finally, we show how the accuracy of CRF can be improved by using features extracted by ReLIE.",
"title": ""
},
{
"docid": "13ae9c0f1c802de86b80906558b27713",
"text": "Anaerobic saccharolytic bacteria thriving at high pH values were studied in a cellulose-degrading enrichment culture originating from the alkaline lake, Verkhneye Beloye (Central Asia). In situ hybridization of the enrichment culture with 16S rRNA-targeted probes revealed that abundant, long, thin, rod-shaped cells were related to Cytophaga. Bacteria of this type were isolated with cellobiose and five isolates were characterized. Isolates were thin, flexible, gliding rods. They formed a spherical cyst-like structure at one cell end during the late growth phase. The pH range for growth was 7.5–10.2, with an optimum around pH 8.5. Cultures produced a pinkish pigment tentatively identified as a carotenoid. Isolates did not degrade cellulose, indicating that they utilized soluble products formed by so far uncultured hydrolytic cellulose degraders. Besides cellobiose, the isolates utilized other carbohydrates, including xylose, maltose, xylan, starch, and pectin. The main organic fermentation products were propionate, acetate, and succinate. Oxygen, which was not used as electron acceptor, impaired growth. A representative isolate, strain Z-7010, with Marinilabilia salmonicolor as the closest relative, is described as a new genus and species, Alkaliflexus imshenetskii. This is the first cultivated alkaliphilic anaerobic member of the Cytophaga/Flavobacterium/Bacteroides phylum.",
"title": ""
},
{
"docid": "804322502b82ad321a0f97d6f83858ee",
"text": "Cheating is a real problem in the Internet of Things. The fundamental question that needs to be answered is how we can trust the validity of the data being generated in the first place. The problem, however, isnt inherent in whether or not to embrace the idea of an open platform and open-source software, but to establish a methodology to verify the trustworthiness and control any access. This paper focuses on building an access control model and system based on trust computing. This is a new field of access control techniques which includes Access Control, Trust Computing, Internet of Things, network attacks, and cheating technologies. Nevertheless, the target access control systems can be very complex to manage. This paper presents an overview of the existing work on trust computing, access control models and systems in IoT. It not only summarizes the latest research progress, but also provides an understanding of the limitations and open issues of the existing work. It is expected to provide useful guidelines for future research. Access Control, Trust Management, Internet of Things Today, our world is characterized by increasing connectivity. Things in this world are increasingly being connected. Smart phones have started an era of global proliferation and rapid consumerization of smart devices. It is predicted that the next disruptive transformation will be the concept of ‘Internet of Things’ [2]. From networked computers to smart devices, and to connected people, we are now moving towards connected ‘things’. Items of daily use are being turned into smart devices as various sensors are embedded in consumer and enterprise equipment, industrial and household appliances and personal devices. Pervasive connectivity mechanisms build bridges between our clothing and vehicles. Interaction among these things/devices can happen with little or no human intervention, thereby conjuring an enormous network, namely the Internet of Things (IoT). One of the primary goals behind IoT is to sense and send data over remote locations to enable detection of significant events, and take relevant actions sooner rather than later [25]. This technological trend is being pursued actively in all areas including the medical and health care fields. IoT provides opportunities to dramatically improve many medical applications, such as glucose level sensing, remote health monitoring (e.g. electrocardiogram, blood pressure, body temperature, and oxygen saturation monitoring, etc), rehabilitation systems, medication management, and ambient assisted living systems. The connectivity offered by IoT extends from humanto-machine to machine-to-machine communications. The interconnected devices collect all kinds of data about patients. Intelligent and ubiquitous services can then be built upon the useful information extracted from the data. During the data aggregation, fusion, and analysis processes, user ar X iv :1 61 0. 01 06 5v 1 [ cs .C R ] 4 O ct 2 01 6 2 Z. Yunpeng and X. Wu privacy and information security become major concerns for IoT services and applications. Security breaches will seriously compromise user acceptance and consumption on IoT applications in the medical and health care areas. The large scale of integration of heterogeneous devices in IoT poses a great challenge for the provision of standard security services. 
Many IoT devices are vulnerable to attacks since no high-level intelligence can be enabled on these passive devices [10], and security vulnerabilities in products uncovered by researchers have spread from cars [13] to garage doors [9] and to skateboards [35]. Technological utopianism surrounding IoT was very real until the emergence of the Volkswagen emissions scandal [4]. The German conglomerate admitted installing software in its diesel cars that recognizes when vehicles are being tested for nitrogen oxide emissions and cuts those emissions so that they fall within the limits prescribed by US regulators (004 g/km). Once the test is over, the car returns to its normal state: emitting nitrogen oxides (nitric oxide and nitrogen dioxide) at up to 35 times the US legal limit. The focus of IoT is not the thing itself, but the data generated by the devices and the value therein. What Volkswagen has brought to light goes far beyond protecting data and privacy, preventing intrusion, and keeping the integrity of the data. It casts doubt on the credibility of the IoT industry and its ability to secure data, reach agreement on standards, or indeed guarantee that consumer privacy rights are upheld. All in all, IoT holds tremendous potential to improve our health, make our environment safer, boost productivity and efficiency, and conserve both water and energy. IoT needs to improve its trustworthiness, however, before it can be used to solve challenging economic and environmental problems tied to our social lives. The fundamental question that needs to be answered is how we can trust the validity of the data being generated in the first place. If a node of IoT cheats, how does a system identify the cheating node and prevent a malicious attack from misbehaving nodes? This paper focuses on an access control mechanism that will only grant network access permission to trustworthy nodes. Embedding trust management into access control will improve the system's ability to discover untrustworthy participating nodes and prevent discriminatory attacks. There has been substantial research in this domain, most of which has been related to attacks like self-promotion and ballot stuffing, where a node falsely promotes its importance or boosts the reputation of a malicious node (by providing good recommendations) to engage in a collusion-style attack. The traditional trust computation model is ineffective at singling out a participating object in IoT that is designed to win trust by cheating. In particular, the trust computation model will fail when a malicious node intelligently adjusts its behavior to hide its defect and obtain a higher trust value for its own gain. 1 Access Control Model and System. IoT comprises the following three access control types: (1) Role-based access control (RBAC). (2) Credential-based access control (CBAC): in order to access some resources and data, users require certain certificate information, which falls into two types: Attribute-based access control (ABAC), where a user with certain special attributes may access a particular resource or piece of data; and Capability-based access control (Cap-BAC), where a capability is a communicable, unforgeable rights markup corresponding to a value that uniquely specifies certain access rights to objects owned by subjects. (3) Trust-based access control (TBAC). In addition, there are also combinations of the aforementioned three methods.
In order to improve the security of the system, some of the access control methods include encryption and key management mechanisms.",
"title": ""
},
{
"docid": "3d81867b694a7fa56383583d9ee2637f",
"text": "Elasticity is undoubtedly one of the most striking characteristics of cloud computing. Especially in the area of high performance computing (HPC), elasticity can be used to execute irregular and CPU-intensive applications. However, the on- the-fly increase/decrease in resources is more widespread in Web systems, which have their own IaaS-level load balancer. Considering the HPC area, current approaches usually focus on batch jobs or assumptions such as previous knowledge of application phases, source code rewriting or the stop-reconfigure-and-go approach for elasticity. In this context, this article presents AutoElastic, a PaaS-level elasticity model for HPC in the cloud. Its differential approach consists of providing elasticity for high performance applications without user intervention or source code modification. The scientific contributions of AutoElastic are twofold: (i) an Aging-based approach to resource allocation and deallocation actions to avoid unnecessary virtual machine (VM) reconfigurations (thrashing) and (ii) asynchronism in creating and terminating VMs in such a way that the application does not need to wait for completing these procedures. The prototype evaluation using OpenNebula middleware showed performance gains of up to 26 percent in the execution time of an application with the AutoElastic manager. Moreover, we obtained low intrusiveness for AutoElastic when reconfigurations do not occur.",
"title": ""
},
{
"docid": "f3f70e5ba87399e9d44bda293a231399",
"text": "During natural disasters or crises, users on social media tend to easily believe contents of postings related to the events, and retweet the postings with hoping them to be reached to many other users. Unfortunately, there are malicious users who understand the tendency and post misinformation such as spam and fake messages with expecting wider propagation. To resolve the problem, in this paper we conduct a case study of 2013 Moore Tornado and Hurricane Sandy. Concretely, we (i) understand behaviors of these malicious users, (ii) analyze properties of spam, fake and legitimate messages, (iii) propose flat and hierarchical classification approaches, and (iv) detect both fake and spam messages with even distinguishing between them. Our experimental results show that our proposed approaches identify spam and fake messages with 96.43% accuracy and 0.961 F-measure.",
"title": ""
},
{
"docid": "477769b83e70f1d46062518b1d692664",
"text": "Deep Neural Networks (DNNs) have been demonstrated to perform exceptionally well on most recognition tasks such as image classification and segmentation. However, they have also been shown to be vulnerable to adversarial examples. This phenomenon has recently attracted a lot of attention but it has not been extensively studied on multiple, large-scale datasets and complex tasks such as semantic segmentation which often require more specialised networks with additional components such as CRFs, dilated convolutions, skip-connections and multiscale processing. In this paper, we present what to our knowledge is the first rigorous evaluation of adversarial attacks on modern semantic segmentation models, using two large-scale datasets. We analyse the effect of different network architectures, model capacity and multiscale processing, and show that many observations made on the task of classification do not always transfer to this more complex task. Furthermore, we show how mean-field inference in deep structured models and multiscale processing naturally implement recently proposed adversarial defenses. Our observations will aid future efforts in understanding and defending against adversarial examples. Moreover, in the shorter term, we show which segmentation models should currently be preferred in safety-critical applications due to their inherent robustness.",
"title": ""
},
{
"docid": "1107a5b766a3d471ae00c9e02d8592da",
"text": "In this paper, a wideband dual polarized self-complementary connected array antenna with low radar cross section (RCS) under normal and oblique incidence is presented. First, an analytical model of the multilayer structure is proposed in order to obtain a fast and reliable predimensioning tool providing an optimized design of the infinite array. The accuracy of this model is demonstrated thanks to comparative simulations with a full wave analysis software. RCS reduction compared to a perfectly conducting flat plate of at least 10 dB has been obtained over an ultrawide bandwidth of nearly 7:1 at normal incidence and 5:1 (3.8 to 19 GHz) at 60° in both polarizations. These performances are confirmed by finite element tearing and interconnecting computations of finite arrays of different sizes. Finally, the realization of a $28 \\times 28$ cell prototype and measurement results are detailed.",
"title": ""
},
{
"docid": "1e4a86dcc05ff3d593a4bf7b88f8b23a",
"text": "Fog/edge computing has been proposed to be integrated with Internet of Things (IoT) to enable computing services devices deployed at network edge, aiming to improve the user’s experience and resilience of the services in case of failures. With the advantage of distributed architecture and close to end-users, fog/edge computing can provide faster response and greater quality of service for IoT applications. Thus, fog/edge computing-based IoT becomes future infrastructure on IoT development. To develop fog/edge computing-based IoT infrastructure, the architecture, enabling techniques, and issues related to IoT should be investigated first, and then the integration of fog/edge computing and IoT should be explored. To this end, this paper conducts a comprehensive overview of IoT with respect to system architecture, enabling technologies, security and privacy issues, and present the integration of fog/edge computing and IoT, and applications. Particularly, this paper first explores the relationship between cyber-physical systems and IoT, both of which play important roles in realizing an intelligent cyber-physical world. Then, existing architectures, enabling technologies, and security and privacy issues in IoT are presented to enhance the understanding of the state of the art IoT development. To investigate the fog/edge computing-based IoT, this paper also investigate the relationship between IoT and fog/edge computing, and discuss issues in fog/edge computing-based IoT. Finally, several applications, including the smart grid, smart transportation, and smart cities, are presented to demonstrate how fog/edge computing-based IoT to be implemented in real-world applications.",
"title": ""
},
{
"docid": "889747dbf541583475cbce74c42dc616",
"text": "This paper presents an analysis of FastSLAM - a Rao-Blackwellised particle filter formulation of simultaneous localisation and mapping. It shows that the algorithm degenerates with time, regardless of the number of particles used or the density of landmarks within the environment, and would always produce optimistic estimates of uncertainty in the long-term. In essence, FastSLAM behaves like a non-optimal local search algorithm; in the short-term it may produce consistent uncertainty estimates but, in the long-term, it is unable to adequately explore the state-space to be a reasonable Bayesian estimator. However, the number of particles and landmarks does affect the accuracy of the estimated mean and, given sufficient particles, FastSLAM can produce good non-stochastic estimates in practice. FastSLAM also has several practical advantages, particularly with regard to data association, and would probably work well in combination with other versions of stochastic SLAM, such as EKF-based SLAM",
"title": ""
}
] | scidocsrr |
15a2ffc1ca94feb12059ba5d4285a66c | Learning Decision Trees Using the Area Under the ROC Curve | [
{
"docid": "e9017607252973b36f9d4c3c659fe858",
"text": "In this paper, we address the problem of retrospectively pruning decision trees induced from data, according to a topdown approach. This problem has received considerable attention in the areas of pattern recognition and machine learning, and many distinct methods have been proposed in literature. We make a comparative study of six well-known pruning methods with the aim of understanding their theoretical foundations, their computational complexity, and the strengths and weaknesses of their formulation. Comments on the characteristics of each method are empirically supported. In particular, a wide experimentation performed on several data sets leads us to opposite conclusions on the predictive accuracy of simplified trees from some drawn in the literature. We attribute this divergence to differences in experimental designs. Finally, we prove and make use of a property of the reduced error pruning method to obtain an objective evaluation of the tendency to overprune/underprune observed in each method. Index Terms —Decision trees, top-down induction of decision trees, simplification of decision trees, pruning and grafting operators, optimal pruning, comparative studies. —————————— ✦ ——————————",
"title": ""
},
{
"docid": "a9bc9d9098fe852d13c3355ab6f81edb",
"text": "The area under the ROC curve, or the equivalent Gini index, is a widely used measure of performance of supervised classification rules. It has the attractive property that it side-steps the need to specify the costs of the different kinds of misclassification. However, the simple form is only applicable to the case of two classes. We extend the definition to the case of more than two classes by averaging pairwise comparisons. This measure reduces to the standard form in the two class case. We compare its properties with the standard measure of proportion correct and an alternative definition of proportion correct based on pairwise comparison of classes for a simple artificial case and illustrate its application on eight data sets. On the data sets we examined, the measures produced similar, but not identical results, reflecting the different aspects of performance that they were measuring. Like the area under the ROC curve, the measure we propose is useful in those many situations where it is impossible to give costs for the different kinds of misclassification.",
"title": ""
}
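The AUC passage above extends the two-class AUC to the multi-class case by averaging pairwise class comparisons. The sketch below is a rough illustration of that idea: a rank-based AUC is computed for each ordered pair of classes from one class's predicted score, then averaged. Averaging over both orderings of each pair and ignoring ties are simplifying assumptions here; the paper's exact definition may differ in detail.

```python
# Minimal sketch of a multi-class AUC in the spirit of the passage: average a
# two-class, rank-based AUC over all ordered class pairs. Ties are ignored and
# the toy data below is illustrative only.
import numpy as np

def auc_binary(scores_pos, scores_neg):
    """Rank-based (Mann-Whitney) AUC of positive versus negative scores."""
    all_scores = np.concatenate([scores_pos, scores_neg])
    ranks = all_scores.argsort().argsort() + 1.0      # 1-based ranks (no tie handling)
    n_pos, n_neg = len(scores_pos), len(scores_neg)
    return (ranks[:n_pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def multiclass_auc(y_true, proba, classes):
    """Average the pairwise AUCs over all ordered class pairs (i, j), i != j."""
    total, count = 0.0, 0
    for i, ci in enumerate(classes):
        for j, cj in enumerate(classes):
            if i == j:
                continue
            si = proba[y_true == ci, i]               # class-i scores of true class-i cases
            sj = proba[y_true == cj, i]               # class-i scores of true class-j cases
            total += auc_binary(si, sj)
            count += 1
    return total / count

if __name__ == "__main__":
    classes = np.array([0, 1, 2])
    y = np.array([0, 0, 1, 1, 2, 2])
    p = np.array([[0.7, 0.2, 0.1], [0.6, 0.3, 0.1], [0.2, 0.6, 0.2],
                  [0.3, 0.5, 0.2], [0.1, 0.2, 0.7], [0.2, 0.2, 0.6]])
    print(multiclass_auc(y, p, classes))              # 1.0 for this separable toy example
```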
] | [
{
"docid": "9a48e31b5911e68b11c846d543f897be",
"text": "Today’s smartphone users face a security dilemma: many apps they install operate on privacy-sensitive data, although they might originate from developers whose trustworthiness is hard to judge. Researchers have addressed the problem with more and more sophisticated static and dynamic analysis tools as an aid to assess how apps use private user data. Those tools, however, rely on the manual configuration of lists of sources of sensitive data as well as sinks which might leak data to untrusted observers. Such lists are hard to come by. We thus propose SUSI, a novel machine-learning guided approach for identifying sources and sinks directly from the code of any Android API. Given a training set of hand-annotated sources and sinks, SUSI identifies other sources and sinks in the entire API. To provide more fine-grained information, SUSI further categorizes the sources (e.g., unique identifier, location information, etc.) and sinks (e.g., network, file, etc.). For Android 4.2, SUSI identifies hundreds of sources and sinks with over 92% accuracy, many of which are missed by current information-flow tracking tools. An evaluation of about 11,000 malware samples confirms that many of these sources and sinks are indeed used. We furthermore show that SUSI can reliably classify sources and sinks even in new, previously unseen Android versions and components like Google Glass or",
"title": ""
},
{
"docid": "b15dcda2b395d02a2df18f6d8bfa3b19",
"text": "We present a method for human pose tracking that learns explicitly about the dynamic effects of human motion on joint appearance. In contrast to previous techniques which employ generic tools such as dense optical flow or spatiotemporal smoothness constraints to pass pose inference cues between frames, our system instead learns to predict joint displacements from the previous frame to the current frame based on the possibly changing appearance of relevant pixels surrounding the corresponding joints in the previous frame. This explicit learning of pose deformations is formulated by incorporating concepts from human pose estimation into an optical flow-like framework. With this approach, state-of-the-art performance is achieved on standard benchmarks for various pose tracking tasks including 3D body pose tracking in RGB video, 3D hand pose tracking in depth sequences, and 3D hand gesture tracking in RGB video.",
"title": ""
},
{
"docid": "8025825afec9258d9a0a3da1f609f4ef",
"text": "The task of measuring sentence similarity is defined as determining how similar the meanings of two sentences are. Computing sentence similarity is not a trivial task, due to the variability of natural language expressions. Measuring semantic similarity of sentences is closely related to semantic similarity between words. It makes a relationship between a word and the sentence through their meanings. The intention is to enhance the concepts of semantics over the syntactic measures that are able to categorize the pair of sentences effectively. Semantic similarity plays a vital role in Natural language processing, Informational Retrieval, Text Mining, Q & A systems, text-related research and application area. Traditional similarity measures are based on the syntactic features and other path based measures. In this project, we evaluated and tested three different semantic similarity approaches like cosine similarity, path based approach (wu – palmer and shortest path based), and feature based approach. Our proposed approaches exploits preprocessing of pair of sentences which identifies the bag of words and then applying the similarity measures like cosine similarity, path based similarity measures. In our approach the main contributions are comparison of existing similarity measures and feature based measure based on Wordnet. In feature based approach we perform the tagging and lemmatization and generates the similarity score based on the nouns and verbs. We evaluate our project output by comparing the existing measures based on different thresholds and comparison between three approaches. Finally we conclude that feature based measure generates better semantic score.",
"title": ""
},
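The sentence-similarity passage above compares cosine, path-based, and feature-based measures. As a hedged illustration of the simplest of these, the sketch below computes bag-of-words cosine similarity; lowercase whitespace tokenisation is an assumption, and no WordNet or path-based component is included.

```python
# Minimal sketch of the bag-of-words cosine similarity baseline mentioned in
# the passage. Tokenisation by lowercase whitespace split is an assumption.
import math
from collections import Counter

def cosine_similarity(sent_a, sent_b):
    """Cosine of the term-frequency vectors of two sentences (between 0.0 and 1.0)."""
    va, vb = Counter(sent_a.lower().split()), Counter(sent_b.lower().split())
    common = set(va) & set(vb)
    dot = sum(va[w] * vb[w] for w in common)
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

if __name__ == "__main__":
    s1 = "the cat sat on the mat"
    s2 = "a cat was sitting on the mat"
    print(round(cosine_similarity(s1, s2), 3))
```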
{
"docid": "cf1720877ddc4400bdce2a149b5ec8b4",
"text": "How do we find patterns in author-keyword associations, evolving over time? Or in data cubes (tensors), with product-branchcustomer sales information? And more generally, how to summarize high-order data cubes (tensors)? How to incrementally update these patterns over time? Matrix decompositions, like principal component analysis (PCA) and variants, are invaluable tools for mining, dimensionality reduction, feature selection, rule identification in numerous settings like streaming data, text, graphs, social networks, and many more settings. However, they have only two orders (i.e., matrices, like author and keyword in the previous example).\n We propose to envision such higher-order data as tensors, and tap the vast literature on the topic. However, these methods do not necessarily scale up, let alone operate on semi-infinite streams. Thus, we introduce a general framework, incremental tensor analysis (ITA), which efficiently computes a compact summary for high-order and high-dimensional data, and also reveals the hidden correlations. Three variants of ITA are presented: (1) dynamic tensor analysis (DTA); (2) streaming tensor analysis (STA); and (3) window-based tensor analysis (WTA). In paricular, we explore several fundamental design trade-offs such as space efficiency, computational cost, approximation accuracy, time dependency, and model complexity.\n We implement all our methods and apply them in several real settings, such as network anomaly detection, multiway latent semantic indexing on citation networks, and correlation study on sensor measurements. Our empirical studies show that the proposed methods are fast and accurate and that they find interesting patterns and outliers on the real datasets.",
"title": ""
},
{
"docid": "fe7668dd82775cf02116faacd1dd945f",
"text": "In the last years, the advent of unmanned aerial vehicles (UAVs) for civilian remote sensing purposes has generated a lot of interest because of the various new applications they can offer. One of them is represented by the automatic detection and counting of cars. In this paper, we propose a novel car detection method. It starts with a feature extraction process based on scalar invariant feature transform (SIFT) thanks to which a set of keypoints is identified in the considered image and opportunely described. Successively, the process discriminates between keypoints assigned to cars and those associated with all remaining objects by means of a support vector machine (SVM) classifier. Experimental results have been conducted on a real UAV scene. They show how the proposed method allows providing interesting detection performances.",
"title": ""
},
{
"docid": "5a063c2373aa849b59e20e6115a4df54",
"text": "A GUI skeleton is the starting point for implementing a UI design image. To obtain a GUI skeleton from a UI design image, developers have to visually understand UI elements and their spatial layout in the image, and then translate this understanding into proper GUI components and their compositions. Automating this visual understanding and translation would be beneficial for bootstraping mobile GUI implementation, but it is a challenging task due to the diversity of UI designs and the complexity of GUI skeletons to generate. Existing tools are rigid as they depend on heuristically-designed visual understanding and GUI generation rules. In this paper, we present a neural machine translator that combines recent advances in computer vision and machine translation for translating a UI design image into a GUI skeleton. Our translator learns to extract visual features in UI images, encode these features' spatial layouts, and generate GUI skeletons in a unified neural network framework, without requiring manual rule development. For training our translator, we develop an automated GUI exploration method to automatically collect large-scale UI data from real-world applications. We carry out extensive experiments to evaluate the accuracy, generality and usefulness of our approach.",
"title": ""
},
{
"docid": "3d862e488798629d633f78260a569468",
"text": "Training workshops and professional meetings are important tools for capacity building and professional development. These social events provide professionals and educators a platform where they can discuss and exchange constructive ideas, and receive feedback. In particular, competition-based training workshops where participants compete on solving similar and common challenging problems are effective tools for stimulating students’ learning and aspirations. This paper reports the results of a two-day training workshop where memory and disk forensics were taught using a competition-based security educational tool. The workshop included training sessions for professionals, educators, and students to learn features of Tracer FIRE, a competition-based digital forensics and assessment tool, developed by Sandia National Laboratories. The results indicate that competitionbased training can be very effective in stimulating students’ motivation to learn. However, extra caution should be taken into account when delivering these types of training workshops. Keywords-component; cyber security, digital forenciscs, partcipatory training workshop, competition-based learning,",
"title": ""
},
{
"docid": "95d1a35068e7de3293f8029e8b8694f9",
"text": "Botnet is one of the major threats on the Internet for committing cybercrimes, such as DDoS attacks, stealing sensitive information, spreading spams, etc. It is a challenging issue to detect modern botnets that are continuously improving for evading detection. In this paper, we propose a machine learning based botnet detection system that is shown to be effective in identifying P2P botnets. Our approach extracts convolutional version of effective flow-based features, and trains a classification model by using a feed-forward artificial neural network. The experimental results show that the accuracy of detection using the convolutional features is better than the ones using the traditional features. It can achieve 94.7% of detection accuracy and 2.2% of false positive rate on the known P2P botnet datasets. Furthermore, our system provides an additional confidence testing for enhancing performance of botnet detection. It further classifies the network traffic of insufficient confidence in the neural network. The experiment shows that this stage can increase the detection accuracy up to 98.6% and decrease the false positive rate up to 0.5%.",
"title": ""
},
{
"docid": "802b1bf3a263d9641dc7dc689f7eab10",
"text": "Type I membrane oscillators such as the Connor model (Connor et al. 1977) and the Morris-Lecar model (Morris and Lecar 1981) admit very low frequency oscillations near the critical applied current. Hansel et al. (1995) have numerically shown that synchrony is difficult to achieve with these models and that the phase resetting curve is strictly positive. We use singular perturbation methods and averaging to show that this is a general property of Type I membrane models. We show in a limited sense that so called Type II resetting occurs with models that obtain rhythmicity via a Hopf bifurcation. We also show the differences between synapses that act rapidly and those that act slowly and derive a canonical form for the phase interactions.",
"title": ""
},
{
"docid": "2ceedf1be1770938c94892c80ae956e4",
"text": "Although there is interest in the educational potential of online multiplayer games and virtual worlds, there is still little evidence to explain specifically what and how people learn from these environments. This paper addresses this issue by exploring the experiences of couples that play World of Warcraft together. Learning outcomes were identified (involving the management of ludic, social and material resources) along with learning processes, which followed Wenger’s model of participation in Communities of Practice. Comparing this with existing literature suggests that productive comparisons can be drawn with the experiences of distance education students and the social pressures that affect their participation. Introduction Although there is great interest in the potential that computer games have in educational settings (eg, McFarlane, Sparrowhawk & Heald, 2002), and their relevance to learning more generally (eg, Gee, 2003), there has been relatively little in the way of detailed accounts of what is actually learnt when people play (Squire, 2002), and still less that relates such learning to formal education. In this paper, we describe a study that explores how people learn when they play the massively multiplayer online role-playing game (MMORPG), World of Warcraft. Detailed, qualitative research was undertaken with couples to explore their play, adopting a social perspective on learning. The paper concludes with a discussion that relates this to formal curricula and considers the implications for distance learning. British Journal of Educational Technology Vol 40 No 3 2009 444–457 doi:10.1111/j.1467-8535.2009.00948.x © 2009 The Authors. Journal compilation © 2009 Becta. Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA. Background Researchers have long been interested in games and learning. There is, for example, a tradition of work within psychology exploring what makes games motivating, and relating this to learning (eg, Malone & Lepper, 1987). Games have been recently featured in mainstream educational policy (eg, DfES, 2005), and it has been suggested (eg, Gee, 2003) that they provide a model that should inform educational practice more generally. However, research exploring how games can be used in formal education suggests that the potential value of games to support learning is not so easy to realise. McFarlane et al (2002, p. 16), for example, argued that ‘the greatest obstacle to integrating games into the curriculum is the mismatch between the skills and knowledge developed in games, and those recognised explicitly within the school system’. Mitchell and Savill-Smith (2004) noted that although games have been used to support various kinds of learning (eg, recall of content, computer literacy, strategic skills), such uses were often problematic, being complicated by the need to integrate games into existing educational contexts. Furthermore, games specifically designed to be educational were ‘typically disliked’ (p. 44) as well as being expensive to produce. Until recently, research on the use of games in education tended to focus on ‘stand alone’ or single player games. Such games can, to some extent, be assessed in terms of their content coverage or instructional design processes, and evaluated for their ‘fit’ with a given curriculum (eg, Kirriemuir, 2002). 
Gaming, however, is generally a social activity, and this is even more apparent when we move from a consideration of single player games to a focus on multiplayer, online games. Viewing games from a social perspective opens the possibility of understanding learning as a social achievement, not just a process of content acquisition or skills development (Squire, 2002). In this study, we focus on a particular genre of online, multiplayer game: an MMORPG. MMORPGs incorporate structural elements drawn from table-top role-playing games (Dungeons & Dragons being the classic example). Play takes place in an expansive and persistent graphically rendered world. Players form teams and guilds, undertake group missions, meet in banks and auction houses, chat, congregate in virtual cities and engage in different modes of play, which involve various forms of collaboration and competition. As Squire noted (2002), socially situated accounts of actual learning in games (as opposed to what they might, potentially, help people to learn) have been lacking, partly because the topic is so complex. How, indeed, should the ‘game’ be understood—is it limited to the rules, or the player’s interactions with these rules? Does it include other players, and all possible interactions, and extend to out-of-game related activities and associated materials such as fan forums? Such questions have methodological implications, and hint at the ambiguities that educators working with virtual worlds might face (Carr, Oliver & Burn, 2008). Learning in virtual worlds 445 © 2009 The Authors. Journal compilation © 2009 Becta. Work in this area is beginning to emerge, particularly in relation to the learning and mentoring that takes place within player ‘guilds’ and online clans (see Galarneau, 2005; Steinkuehler, 2005). However, it is interesting to note that the research emerging from a digital game studies perspective, including much of the work cited thus far, is rarely utilised by educators researching the pedagogic potentials of virtual worlds such as Second Life. This study is informed by and attempts to speak to both of these communities. Methodology The purpose of this study was to explore how people learn in such virtual worlds in general. It was decided that focusing on a MMORPG such as World of Warcraft would be practical and offer a rich opportunity to study learning. MMORPGs are games; they have rules and goals, and particular forms of progression. Expertise in a virtual world such as Second Life is more dispersed, because the range of activities is that much greater (encompassing building, playing, scripting, creating machinima or socialising, for instance). Each of these activities would involve particular forms of expertise. The ‘curriculum’ proposed by World of Warcraft is more specified. It was important to approach learning practices in this game without divorcing such phenomena from the real-world contexts in which play takes place. In order to study players’ accounts of learning and the links between their play and other aspects of their social lives, we sought participants who would interact with each other both in the context of the game and outside of it. To this end, we recruited couples that play together in the virtual environment of World of Warcraft, while sharing real space. 
This decision was taken to manage the potential complexity of studying social settings: couples were the simplest stable social formation that we could identify who would interact both in the context of the game and outside of this too. Interviews were conducted with five couples. These were theoretically sampled, to maximise diversity in players’ accounts (as with any theoretically sampled study, this means that no claims can be made about prevalence or typicality). Players were recruited through online guilds and real-world social networks. The first two sets of participants were sampled for convenience (two heterosexual couples); the rest were invited to participate in order to broaden this sample (one couple was chosen because they shared a single account, one where a partner had chosen to stop playing and one mother–son pairing). All participants were adults, and conventional ethical procedures to ensure informed consent were followed, as specified in the British Educational Research Association guidelines. The couples were interviewed in the game world at a location of their choosing. The interviews, which were semi-structured, were chat-logged and each lasted 60–90 minutes. The resulting transcripts were split into self-contained units (typically a single statement, or a question and answer, or a short exchange) and each was categorised 446 British Journal of Educational Technology Vol 40 No 3 2009 © 2009 The Authors. Journal compilation © 2009 Becta. thematically. The initial categories were then jointly reviewed in order to consolidate and refine them, cross-checking them with the source transcripts to ensure their relevance and coherence. At this stage, the categories included references to topics such as who started first, self-assessments of competence, forms of help, guilds, affect, domestic space and assets, ‘alts’ (multiple characters) and so on. These were then reviewed to develop a single category that might provide an overview or explanation of the process. It should be noted that although this approach was informed by ‘grounded theory’ processes as described in Glaser and Strauss (1967), it does not share their positivistic stance on the status of the model that has been developed. Instead, it accords more closely with the position taken by Charmaz (2000), who recognises the central role of the researcher in shaping the data collected and making sense of it. What is produced therefore is seen as a socially constructed model, based on personal narratives, rather than an objective account of an independent reality. Reviewing the categories that emerged in this case led to ‘management of resources’ being selected as a general marker of learning. As players moved towards greater competence, they identified and leveraged an increasingly complex array of in-game resources, while also negotiating real-world resources and demands. To consider this framework in greater detail, ‘management of resources’ was subdivided into three categories: ludic (concerning the skills, knowledge and practices of game play), social and material (concerning physical resources such as the embodied setting for play) (see Carr & Oliver, 2008). Using this explanation of learning, the transcripts were re-reviewed in order to ",
"title": ""
},
{
"docid": "39673b789ee8d8c898c93b7627b31f0a",
"text": "In this position paper, we initiate a systematic treatment of reaching consensus in a permissionless network. We prove several simple but hopefully insightful lower bounds that demonstrate exactly why reaching consensus in a permission-less setting is fundamentally more difficult than the classical, permissioned setting. We then present a simplified proof of Nakamoto's blockchain which we recommend for pedagogical purposes. Finally, we survey recent results including how to avoid well-known painpoints in permissionless consensus, and how to apply core ideas behind blockchains to solve consensus in the classical, permissioned setting and meanwhile achieve new properties that are not attained by classical approaches.",
"title": ""
},
{
"docid": "590ad5ce089e824d5e9ec43c54fa3098",
"text": "The abstraction of a shared memory is of growing importance in distributed computing systems. Traditional memory consistency ensures that all processes agree on a common order of all operations on memory. Unfortunately, providing these guarantees entails access latencies that prevent scaling to large systems. This paper weakens such guarantees by definingcausal memory, an abstraction that ensures that processes in a system agree on the relative ordering of operations that arecausally related. Because causal memory isweakly consistent, it admits more executions, and hence more concurrency, than either atomic or sequentially consistent memories. This paper provides a formal definition of causal memory and gives an implementation for message-passing systems. In addition, it describes a practical class of programs that, if developed for a strongly consistent memory, run correctly with causal memory.",
"title": ""
},
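The causal-memory passage above describes an implementation for message-passing systems only abstractly. The sketch below is not that implementation; it is a minimal toy illustration (the `Process`, `Write` and `Network` classes and the synchronous broadcast are assumptions made for the example) of the usual vector-clock idea: a replicated write is applied only once every write that causally precedes it has already been applied.

```python
from dataclasses import dataclass

@dataclass
class Write:
    sender: int      # id of the writing process
    clock: tuple     # vector clock attached to the write
    key: str
    value: object

class Process:
    def __init__(self, pid, n):
        self.pid, self.n = pid, n
        self.vc = [0] * n      # number of writes applied so far, per process
        self.store = {}        # local replica of the shared memory
        self.pending = []      # received writes not yet causally ready

    def write(self, key, value, network):
        self.vc[self.pid] += 1
        self.store[key] = value                    # apply locally right away
        network.broadcast(Write(self.pid, tuple(self.vc), key, value), self.pid)

    def read(self, key):
        return self.store.get(key)

    def deliver(self, w):
        self.pending.append(w)
        self._apply_ready()

    def _ready(self, w):
        # next write from its sender, and no causally earlier write is missing
        return (w.clock[w.sender] == self.vc[w.sender] + 1 and
                all(w.clock[k] <= self.vc[k]
                    for k in range(self.n) if k != w.sender))

    def _apply_ready(self):
        progress = True
        while progress:
            progress = False
            for w in list(self.pending):
                if self._ready(w):
                    self.store[w.key] = w.value
                    self.vc[w.sender] = w.clock[w.sender]
                    self.pending.remove(w)
                    progress = True

class Network:
    def __init__(self, procs):
        self.procs = procs
    def broadcast(self, w, sender):
        for p in self.procs:
            if p.pid != sender:
                p.deliver(w)

procs = [Process(i, 2) for i in range(2)]
net = Network(procs)
procs[0].write("x", 1, net)                      # causally precedes ...
procs[1].write("y", procs[1].read("x"), net)     # ... this dependent write
print(procs[0].read("x"), procs[0].read("y"))    # 1 1
```

Because application is delayed until the readiness test passes, two replicas may disagree on the order of concurrent writes but never on causally related ones, which is exactly the weaker guarantee the abstraction trades for extra concurrency.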
{
"docid": "01800567648367a34aa80a3161a21871",
"text": "Single-image haze-removal is challenging due to limited information contained in one single image. Previous solutions largely rely on handcrafted priors to compensate for this deficiency. Recent convolutional neural network (CNN) models have been used to learn haze-related priors but they ultimately work as advanced image filters. In this paper we propose a novel semantic approach towards single image haze removal. Unlike existing methods, we infer color priors based on extracted semantic features. We argue that semantic context can be exploited to give informative cues for (a) learning color prior on clean image and (b) estimating ambient illumination. This design allowed our model to recover clean images from challenging cases with strong ambiguity, e.g. saturated illumination color and sky regions in image. In experiments, we validate our approach upon synthetic and real hazy images, where our method showed superior performance over state-of-the-art approaches, suggesting semantic information facilitates the haze removal task.",
"title": ""
},
{
"docid": "3afa5356d956e2a525836b873442aa6b",
"text": "The problem of secure data processing by means of a neural network (NN) is addressed. Secure processing refers to the possibility that the NN owner does not get any knowledge about the processed data since they are provided to him in encrypted format. At the same time, the NN itself is protected, given that its owner may not be willing to disclose the knowledge embedded within it. The considered level of protection ensures that the data provided to the network and the network weights and activation functions are kept secret. Particular attention is given to prevent any disclosure of information that could bring a malevolent user to get access to the NN secrets by properly inputting fake data to any point of the proposed protocol. With respect to previous works in this field, the interaction between the user and the NN owner is kept to a minimum with no resort to multiparty computation protocols.",
"title": ""
},
{
"docid": "8fb37cad9ad964598ed718f0c32eaff1",
"text": "A planar W-band monopulse antenna array is designed based on the substrate integrated waveguide (SIW) technology. The sum-difference comparator, 16-way divider and 32 × 32 slot array antenna are all integrated on a single dielectric substrate in the compact layout through the low-cost PCB process. Such a substrate integrated monopulse array is able to operate over 93 ~ 96 GHz with narrow-beam and high-gain. The maximal gain is measured to be 25.8 dBi, while the maximal null-depth is measured to be - 43.7 dB. This SIW monopulse antenna not only has advantages of low-cost, light, easy-fabrication, etc., but also has good performance validated by measurements. It presents an excellent candidate for W-band directional-finding systems.",
"title": ""
},
{
"docid": "7f27e9b29e6ed2800ef850e6022d29ba",
"text": "In 2004, the US Center for Disease Control (CDC) published a paper showing that there is no link between the age at which a child is vaccinated with MMR and the vaccinated children's risk of a subsequent diagnosis of autism. One of the authors, William Thompson, has now revealed that statistically significant information was deliberately omitted from the paper. Thompson first told Dr S Hooker, a researcher on autism, about the manipulation of the data. Hooker analysed the raw data from the CDC study afresh. He confirmed that the risk of autism among African American children vaccinated before the age of 2 years was 340% that of those vaccinated later.",
"title": ""
},
{
"docid": "7e325afeaaf3cc548bca023e35fbd203",
"text": "The short length of the estrous cycle of rats makes them ideal for investigation of changes occurring during the reproductive cycle. The estrous cycle lasts four days and is characterized as: proestrus, estrus, metestrus and diestrus, which may be determined according to the cell types observed in the vaginal smear. Since the collection of vaginal secretion and the use of stained material generally takes some time, the aim of the present work was to provide researchers with some helpful considerations about the determination of the rat estrous cycle phases in a fast and practical way. Vaginal secretion of thirty female rats was collected every morning during a month and unstained native material was observed using the microscope without the aid of the condenser lens. Using the 10 x objective lens, it was easier to analyze the proportion among the three cellular types, which are present in the vaginal smear. Using the 40 x objective lens, it is easier to recognize each one of these cellular types. The collection of vaginal lavage from the animals, the observation of the material, in the microscope, and the determination of the estrous cycle phase of all the thirty female rats took 15-20 minutes.",
"title": ""
},
{
"docid": "5df22a15a1bd768782214647b1b87ebe",
"text": "Hierarchical temporal memory (HTM) provides a theoretical framework that models several key computational principles of the neocortex. In this paper, we analyze an important component of HTM, the HTM spatial pooler (SP). The SP models how neurons learn feedforward connections and form efficient representations of the input. It converts arbitrary binary input patterns into sparse distributed representations (SDRs) using a combination of competitive Hebbian learning rules and homeostatic excitability control. We describe a number of key properties of the SP, including fast adaptation to changing input statistics, improved noise robustness through learning, efficient use of cells, and robustness to cell death. In order to quantify these properties we develop a set of metrics that can be directly computed from the SP outputs. We show how the properties are met using these metrics and targeted artificial simulations. We then demonstrate the value of the SP in a complete end-to-end real-world HTM system. We discuss the relationship with neuroscience and previous studies of sparse coding. The HTM spatial pooler represents a neurally inspired algorithm for learning sparse representations from noisy data streams in an online fashion.",
"title": ""
},
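As an illustration of the mechanism the spatial-pooler passage describes -- overlap scoring, k-winners-take-all sparsification and a Hebbian permanence update -- here is a deliberately minimal NumPy sketch, not the reference implementation. The sizes, thresholds and learning rates are invented for the example, and the homeostatic boosting discussed in the passage is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_columns, k_active = 100, 256, 10          # ~4% output sparsity
perm = rng.uniform(0.0, 1.0, (n_columns, n_inputs))   # synapse permanences
threshold, p_inc, p_dec = 0.5, 0.03, 0.015            # connection / learning constants

def spatial_pool(x, learn=True):
    """Map a binary input vector x to the indices of the winning columns."""
    connected = perm >= threshold               # connected synapses only
    overlap = connected @ x                     # feedforward overlap per column
    winners = np.argsort(overlap)[-k_active:]   # k-winners-take-all
    if learn:
        # Hebbian rule on winning columns: strengthen synapses to active
        # inputs, weaken synapses to inactive ones (boosting omitted here).
        perm[winners] += np.where(x > 0, p_inc, -p_dec)
        np.clip(perm, 0.0, 1.0, out=perm)
    return np.sort(winners)

x = (rng.random(n_inputs) < 0.2).astype(int)    # a random binary input
print(spatial_pool(x))                          # sparse distributed representation
```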
{
"docid": "acdd0043b764fe8bb9904ea6ca71e5cf",
"text": "We investigate the task of 2D articulated human pose estimation in unconstrained still images. This is extremely challenging because of variation in pose, anatomy, clothing, and imaging conditions. Current methods use simple models of body part appearance and plausible configurations due to limitations of available training data and constraints on computational expense. We show that such models severely limit accuracy. Building on the successful pictorial structure model (PSM) we propose richer models of both appearance and pose, using state-of-the-art discriminative classifiers without introducing unacceptable computational expense. We introduce a new annotated database of challenging consumer images, an order of magnitude larger than currently available datasets, and demonstrate over 50% relative improvement in pose estimation accuracy over a stateof-the-art method.",
"title": ""
}
] | scidocsrr |
8c75c8b4274533b14d267aed457d651c | Building Neuromorphic Circuits with Memristive Devices | [
{
"docid": "5deaf3ef06be439ad0715355d3592cff",
"text": "Hybrid reconfigurable logic circuits were fabricated by integrating memristor-based crossbars onto a foundry-built CMOS (complementary metal-oxide-semiconductor) platform using nanoimprint lithography, as well as materials and processes that were compatible with the CMOS. Titanium dioxide thin-film memristors served as the configuration bits and switches in a data routing network and were connected to gate-level CMOS components that acted as logic elements, in a manner similar to a field programmable gate array. We analyzed the chips using a purpose-built testing system, and demonstrated the ability to configure individual devices, use them to wire up various logic gates and a flip-flop, and then reconfigure devices.",
"title": ""
}
] | [
{
"docid": "0ab14a40df6fe28785262d27a4f5b8ce",
"text": "State-of-the-art 3D shape classification and retrieval algorithms, hereinafter referred to as shape analysis, are often based on comparing signatures or descriptors that capture the main geometric and topological properties of 3D objects. None of the existing descriptors, however, achieve best performance on all shape classes. In this article, we explore, for the first time, the usage of covariance matrices of descriptors, instead of the descriptors themselves, in 3D shape analysis. Unlike histogram -based techniques, covariance-based 3D shape analysis enables the fusion and encoding of different types of features and modalities into a compact representation. Covariance matrices, however, are elements of the non-linear manifold of symmetric positive definite (SPD) matrices and thus \\BBL2 metrics are not suitable for their comparison and clustering. In this article, we study geodesic distances on the Riemannian manifold of SPD matrices and use them as metrics for 3D shape matching and recognition. We then: (1) introduce the concepts of bag of covariance (BoC) matrices and spatially-sensitive BoC as a generalization to the Riemannian manifold of SPD matrices of the traditional bag of features framework, and (2) generalize the standard kernel methods for supervised classification of 3D shapes to the space of covariance matrices. We evaluate the performance of the proposed BoC matrices framework and covariance -based kernel methods and demonstrate their superiority compared to their descriptor-based counterparts in various 3D shape matching, retrieval, and classification setups.",
"title": ""
},
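A small, hypothetical helper (not taken from the article above) showing the affine-invariant geodesic distance on the SPD manifold that this line of work uses in place of L2 metrics: d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F, computed here via generalized eigenvalues. The toy descriptors and sizes are assumptions for the example.

```python
import numpy as np
from scipy.linalg import eigvalsh

def spd_geodesic_distance(A, B):
    # eigenvalues of A^{-1} B equal those of A^{-1/2} B A^{-1/2}
    lam = eigvalsh(B, A)                  # generalized eigenvalues, > 0 for SPD A, B
    return np.sqrt(np.sum(np.log(lam) ** 2))

# Toy covariance descriptors built from random feature matrices.
rng = np.random.default_rng(1)
A = np.cov(rng.normal(size=(50, 5)), rowvar=False)
B = np.cov(rng.normal(size=(50, 5)), rowvar=False)
print(spd_geodesic_distance(A, B))
```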
{
"docid": "d5ac5e10fc2cc61e625feb28fc9095b5",
"text": "Article history: Received 8 July 2016 Received in revised form 15 November 2016 Accepted 29 December 2016 Available online 25 January 2017 As part of the post-2015 United Nations sustainable development agenda, the world has its first urban sustainable development goal (USDG) “to make cities and human settlements inclusive, safe, resilient and sustainable”. This paper provides an overview of the USDG and explores some of the difficulties around using this goal as a tool for improving cities. We argue that challenges emerge around selecting the indicators in the first place and also around the practical use of these indicators once selected. Three main practical problems of indicator use include 1) the poor availability of standardized, open and comparable data 2) the lack of strong data collection institutions at the city scale to support monitoring for the USDG and 3) “localization” the uptake and context specific application of the goal by diverse actors in widely different cities. Adding to the complexity, the USDG conversation is taking place at the same time as the proliferation of a bewildering array of indicator systems at different scales. Prompted by technological change, debates on the “data revolution” and “smart city” also have direct bearing on the USDG. We argue that despite these many complexities and challenges, the USDG framework has the potential to encourage and guide needed reforms in our cities but only if anchored in local institutions and initiatives informed by open, inclusive and contextually sensitive data collection and monitoring. © 2017 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "f86e3894a6c61c3734e1aabda3500ef0",
"text": "We perform sensitivity analyses on a mathematical model of malaria transmission to determine the relative importance of model parameters to disease transmission and prevalence. We compile two sets of baseline parameter values: one for areas of high transmission and one for low transmission. We compute sensitivity indices of the reproductive number (which measures initial disease transmission) and the endemic equilibrium point (which measures disease prevalence) to the parameters at the baseline values. We find that in areas of low transmission, the reproductive number and the equilibrium proportion of infectious humans are most sensitive to the mosquito biting rate. In areas of high transmission, the reproductive number is again most sensitive to the mosquito biting rate, but the equilibrium proportion of infectious humans is most sensitive to the human recovery rate. This suggests strategies that target the mosquito biting rate (such as the use of insecticide-treated bed nets and indoor residual spraying) and those that target the human recovery rate (such as the prompt diagnosis and treatment of infectious individuals) can be successful in controlling malaria.",
"title": ""
},
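The malaria passage reports normalized sensitivity indices without stating the model. As a stand-in, the sketch below computes the index Gamma_p = (dR0/dp)(p/R0) by central finite differences on the classic Ross-Macdonald expression for R0; this is not the paper's model, and all parameter values are illustrative.

```python
import math

params = dict(m=10.0, a=0.3, b=0.5, c=0.5, mu=0.1, tau=10.0, r=0.01)

def R0(p):
    # Ross-Macdonald form: R0 = m a^2 b c e^{-mu tau} / (r mu)
    return (p["m"] * p["a"] ** 2 * p["b"] * p["c"]
            * math.exp(-p["mu"] * p["tau"]) / (p["r"] * p["mu"]))

def sensitivity(name, h=1e-6):
    hi, lo = dict(params), dict(params)
    hi[name] *= 1 + h
    lo[name] *= 1 - h
    dR0_dp = (R0(hi) - R0(lo)) / (2 * h * params[name])   # central difference
    return dR0_dp * params[name] / R0(params)             # normalized index

for name in params:
    print(f"{name:>3}: {sensitivity(name):+.2f}")   # e.g. a -> +2.00, r -> -1.00
```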
{
"docid": "15da6453d3580a9f26ecb79f9bc8e270",
"text": "In 2005 the Commission for Africa noted that ‘Tackling HIV and AIDS requires a holistic response that recognises the wider cultural and social context’ (p. 197). Cultural factors that range from beliefs and values regarding courtship, sexual networking, contraceptive use, perspectives on sexual orientation, explanatory models for disease and misfortune and norms for gender and marital relations have all been shown to be factors in the various ways that HIV/AIDS has impacted on African societies (UNESCO, 2002). Increasingly the centrality of culture is being recognised as important to HIV/AIDS prevention, treatment, care and support. With culture having both positive and negative influences on health behaviour, international donors and policy makers are beginning to acknowledge the need for cultural approaches to the AIDS crisis (Nguyen et al., 2008). The development of cultural approaches to HIV/AIDS presents two major challenges for South Africa. First, the multi-cultural nature of the country means that there is no single sociocultural context in which the HIV/AIDS epidemic is occurring. South Africa is home to a rich tapestry of racial, ethnic, religious and linguistic groups. As a result of colonial history and more recent migration, indigenous Africans have come to live alongside large populations of people with European, Asian and mixed descent, all of whom could lay claim to distinctive cultural practices and spiritual beliefs. Whilst all South Africans are affected by the spread of HIV, the burden of the disease lies with the majority black African population (see Shisana et al., 2005; UNAIDS, 2007). Therefore, this chapter will focus on some sociocultural aspects of life within the majority black African population of South Africa, most of whom speak languages that are classified within the broad linguistic grouping of Bantu languages. This large family of linguistically related ethnic groups span across southern Africa and comprise the bulk of the African people who reside in South Africa today (Hammond-Tooke, 1974). A second challenge involves the legitimacy of the culture concept. Whilst race was used in apartheid as the rationale for discrimination, notions of culture and cultural differences were legitimised by segregating the country into various ‘homelands’. Within the homelands, the majority black South Africans could presumably",
"title": ""
},
{
"docid": "bc6be8b5fd426e7f8d88645a2b21ff6a",
"text": "irtually everyone would agree that a primary, yet insufficiently met, goal of schooling is to enable students to think critically. In layperson’s terms, critical thinking consists of seeing both sides of an issue, being open to new evidence that disconfirms your ideas, reasoning dispassionately, demanding that claims be backed by evidence, deducing and inferring conclusions from available facts, solving problems, and so forth. Then too, there are specific types of critical thinking that are characteristic of different subject matter: That’s what we mean when we refer to “thinking like a scientist” or “thinking like a historian.” This proper and commonsensical goal has very often been translated into calls to teach “critical thinking skills” and “higher-order thinking skills”—and into generic calls for teaching students to make better judgments, reason more logically, and so forth. In a recent survey of human resource officials and in testimony delivered just a few months ago before the Senate Finance Committee, business leaders have repeatedly exhorted schools to do a better job of teaching students to think critically. And they are not alone. Organizations and initiatives involved in education reform, such as the National Center on Education and the Economy, the American Diploma Project, and the Aspen Institute, have pointed out the need for students to think and/or reason critically. The College Board recently revamped the SAT to better assess students’ critical thinking. And ACT, Inc. offers a test of critical thinking for college students. These calls are not new. In 1983, A Nation At Risk, a report by the National Commission on Excellence in Education, found that many 17-year-olds did not possess the “‘higher-order’ intellectual skills” this country needed. It claimed that nearly 40 percent could not draw inferences from written material and only onefifth could write a persuasive essay. Following the release of A Nation At Risk, programs designed to teach students to think critically across the curriculum became extremely popular. By 1990, most states had initiatives designed to encourage educators to teach critical thinking, and one of the most widely used programs, Tactics for Thinking, sold 70,000 teacher guides. But, for reasons I’ll explain, the programs were not very effective—and today we still lament students’ lack of critical thinking. After more than 20 years of lamentation, exhortation, and little improvement, maybe it’s time to ask a fundamental question: Can critical thinking actually be taught? Decades of cognitive research point to a disappointing answer: not really. People who have sought to teach critical thinking have assumed that it is a skill, like riding a bicycle, and that, like other skills, once you learn it, you can apply it in any situation. Research from cognitive science shows that thinking is not that sort of skill. The processes of thinking are intertwined with the content of thought (that is, domain knowledge). Thus, if you remind a student to “look at an issue from multiple perspectives” often enough, he will learn that he ought to do so, but if he doesn’t know much about Critical Thinking",
"title": ""
},
{
"docid": "bcd81794f9e1fc6f6b92fd36ccaa8dac",
"text": "Reliable detection and avoidance of obstacles is a crucial prerequisite for autonomously navigating robots as both guarantee safety and mobility. To ensure safe mobility, the obstacle detection needs to run online, thereby taking limited resources of autonomous systems into account. At the same time, robust obstacle detection is highly important. Here, a too conservative approach might restrict the mobility of the robot, while a more reckless one might harm the robot or the environment it is operating in. In this paper, we present a terrain-adaptive approach to obstacle detection that relies on 3D-Lidar data and combines computationally cheap and fast geometric features, like step height and steepness, which are updated with the frequency of the lidar sensor, with semantic terrain information, which is updated with at lower frequency. We provide experiments in which we evaluate our approach on a real robot on an autonomous run over several kilometers containing different terrain types. The experiments demonstrate that our approach is suitable for autonomous systems that have to navigate reliable on different terrain types including concrete, dirt roads and grass.",
"title": ""
},
{
"docid": "fc5f80f0554d248524f2aa67ad628773",
"text": "Personality plays an important role in the way people manage the images they convey in self-presentations and employment interviews, trying to affect the other\"s first impressions and increase effectiveness. This paper addresses the automatically detection of the Big Five personality traits from short (30-120 seconds) self-presentations, by investigating the effectiveness of 29 simple acoustic and visual non-verbal features. Our results show that Conscientiousness and Emotional Stability/Neuroticism are the best recognizable traits. The lower accuracy levels for Extraversion and Agreeableness are explained through the interaction between situational characteristics and the differential activation of the behavioral dispositions underlying those traits.",
"title": ""
},
{
"docid": "969ba9848fa6d02f74dabbce2f1fe3ab",
"text": "With the rapid growth of social media, massive misinformation is also spreading widely on social media, e.g., Weibo and Twitter, and brings negative effects to human life. Today, automatic misinformation identification has drawn attention from academic and industrial communities. Whereas an event on social media usually consists of multiple microblogs, current methods are mainly constructed based on global statistical features. However, information on social media is full of noise, which should be alleviated. Moreover, most of the microblogs about an event have little contribution to the identification of misinformation, where useful information can be easily overwhelmed by useless information. Thus, it is important to mine significant microblogs for constructing a reliable misinformation identification method. In this article, we propose an attention-based approach for identification of misinformation (AIM). Based on the attention mechanism, AIM can select microblogs with the largest attention values for misinformation identification. The attention mechanism in AIM contains two parts: content attention and dynamic attention. Content attention is the calculated-based textual features of each microblog. Dynamic attention is related to the time interval between the posting time of a microblog and the beginning of the event. To evaluate AIM, we conduct a series of experiments on the Weibo and Twitter datasets, and the experimental results show that the proposed AIM model outperforms the state-of-the-art methods.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "08ab7142ae035c3594d3f3ae339d3e27",
"text": "Sudoku is a very popular puzzle which consists of placing several numbers in a squared grid according to some simple rules. In this paper, we present a Sudoku solving technique named Boolean Sudoku Solver (BSS) using only simple Boolean algebras. Use of Boolean algebra increases the execution speed of the Sudoku solver. Simulation results show that our method returns the solution of the Sudoku in minimum number of iterations and outperforms the existing popular approaches.",
"title": ""
},
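The abstract above does not spell out the rules BSS uses, so the following sketch only illustrates the general flavour of solving Sudoku with Boolean operations: a 9-bit candidate mask per cell and AND/NOT elimination of solved digits from peers. It handles naked singles only; harder puzzles would need search on top, and the function name and structure are assumptions for the example.

```python
def propagate_singles(grid):
    """grid: 9x9 list of ints, 0 = empty. Returns the grid after elimination."""
    # a given digit d becomes the single bit 1 << (d - 1); empty cells get all 9 bits
    masks = [[(1 << (v - 1)) if v else 0x1FF for v in row] for row in grid]
    changed = True
    while changed:
        changed = False
        for r in range(9):
            for c in range(9):
                m = masks[r][c]
                if bin(m).count("1") != 1:
                    continue                        # cell not solved yet
                for rr in range(9):                 # clear this digit from all peers
                    for cc in range(9):
                        same_box = rr // 3 == r // 3 and cc // 3 == c // 3
                        peer = (rr, cc) != (r, c) and (rr == r or cc == c or same_box)
                        if peer and masks[rr][cc] & m:
                            masks[rr][cc] &= ~m     # Boolean elimination
                            changed = True
    return [[m.bit_length() if bin(m).count("1") == 1 else 0 for m in row]
            for row in masks]
```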
{
"docid": "51a859f71bd2ec82188826af18204f02",
"text": "This study examines the accuracy of 54 online dating photographs posted by heterosexual daters. We report data on (a1) online daters’ self-reported accuracy, (b) independent judges’ perceptions of accuracy, and (c) inconsistencies in the profile photograph identified by trained coders. While online daters rated their photos as relatively accurate, independent judges rated approximately 1/3 of the photographs as not accurate. Female photographs were judged as less accurate than male photographs, and were more likely to be older, to be retouched or taken by a professional photographer, and to contain inconsistencies, including changes in hair style and skin quality. The findings are discussed in terms of the tensions experienced by online daters to (a) enhance their physical attractiveness and (b) present a photograph that would not be judged deceptive in subsequent face-to-face meetings. The paper extends the theoretical concept of selective self-presentation to online photographs, and discusses issues of self-deception and social desirability bias.",
"title": ""
},
{
"docid": "ac4c1d903e20b90da555b11ef2edd2f5",
"text": "Program translation is an important tool to migrate legacy code in one language into an ecosystem built in a different language. In this work, we are the first to employ deep neural networks toward tackling this problem. We observe that program translation is a modular procedure, in which a sub-tree of the source tree is translated into the corresponding target sub-tree at each step. To capture this intuition, we design a tree-to-tree neural network to translate a source tree into a target one. Meanwhile, we develop an attention mechanism for the tree-to-tree model, so that when the decoder expands one non-terminal in the target tree, the attention mechanism locates the corresponding sub-tree in the source tree to guide the expansion of the decoder. We evaluate the program translation capability of our tree-to-tree model against several state-of-the-art approaches. Compared against other neural translation models, we observe that our approach is consistently better than the baselines with a margin of up to 15 points. Further, our approach can improve the previous state-of-the-art program translation approaches by a margin of 20 points on the translation of real-world projects.",
"title": ""
},
{
"docid": "194156892cbdb0161e9aae6a01f78703",
"text": "Model repositories play a central role in the model driven development of complex software-intensive systems by offering means to persist and manipulate models obtained from heterogeneous languages and tools. Complex models can be assembled by interconnecting model fragments by hard links, i.e., regular references, where the target end points to external resources using storage-specific identifiers. This approach, in certain application scenarios, may prove to be a too rigid and error prone way of interlinking models. As a flexible alternative, we propose to combine derived features with advanced incremental model queries as means for soft interlinking of model elements residing in different model resources. These soft links can be calculated on-demand with graceful handling for temporarily unresolved references. In the background, the links are maintained efficiently and flexibly by using incremental model query evaluation. The approach is applicable to modeling environments or even property graphs for representing query results as first-class relations, which also allows the chaining of soft links that is useful for modular applications. The approach is evaluated using the Eclipse Modeling Framework (EMF) and EMF-IncQuery in two complex industrial case studies. The first case study is motivated by a knowledge management project from the financial domain, involving a complex interlinked structure of concept and business process models. The second case study is set in the avionics domain with strict traceability requirements enforced by certification standards (DO-178b). It consists of multiple domain models describing the allocation scenario of software functions to hardware components.",
"title": ""
},
{
"docid": "16c56a9ca685cb1100d175268b6e8ba6",
"text": "In this paper, we study the stochastic gradient descent method in analyzing nonconvex statistical optimization problems from a diffusion approximation point of view. Using the theory of large deviation of random dynamical system, we prove in the small stepsize regime and the presence of omnidirectional noise the following: starting from a local minimizer (resp. saddle point) the SGD iteration escapes in a number of iteration that is exponentially (resp. linearly) dependent on the inverse stepsize. We take the deep neural network as an example to study this phenomenon. Based on a new analysis of the mixing rate of multidimensional Ornstein-Uhlenbeck processes, our theory substantiate a very recent empirical results by Keskar et al. (2016), suggesting that large batch sizes in training deep learning for synchronous optimization leads to poor generalization error.",
"title": ""
},
{
"docid": "727a97b993098aa1386e5bfb11a99d4b",
"text": "Inevitably, reading is one of the requirements to be undergone. To improve the performance and quality, someone needs to have something new every day. It will suggest you to have more inspirations, then. However, the needs of inspirations will make you searching for some sources. Even from the other people experience, internet, and many books. Books and internet are the recommended media to help you improving your quality and performance.",
"title": ""
},
{
"docid": "e0fc6fc1425bb5786847c3769c1ec943",
"text": "Developing manufacturing simulation models usually requires experts with knowledge of multiple areas including manufacturing, modeling, and simulation software. The expertise requirements increase for virtual factory models that include representations of manufacturing at multiple resolution levels. This paper reports on an initial effort to automatically generate virtual factory models using manufacturing configuration data in standard formats as the primary input. The execution of the virtual factory generates time series data in standard formats mimicking a real factory. Steps are described for auto-generation of model components in a software environment primarily oriented for model development via a graphic user interface. Advantages and limitations of the approach and the software environment used are discussed. The paper concludes with a discussion of challenges in verification and validation of the virtual factory prototype model with its multiple hierarchical models and future directions.",
"title": ""
},
{
"docid": "0a5e2cc403ba9a4397d04c084b25f43e",
"text": "Ebola virus disease (EVD) distinguishes its feature as high infectivity and mortality. Thus, it is urgent for governments to draw up emergency plans against Ebola. However, it is hard to predict the possible epidemic situations in practice. Luckily, in recent years, computational experiments based on artificial society appeared, providing a new approach to study the propagation of EVD and analyze the corresponding interventions. Therefore, the rationality of artificial society is the key to the accuracy and reliability of experiment results. Individuals' behaviors along with travel mode directly affect the propagation among individuals. Firstly, artificial Beijing is reconstructed based on geodemographics and machine learning is involved to optimize individuals' behaviors. Meanwhile, Ebola course model and propagation model are built, according to the parameters in West Africa. Subsequently, propagation mechanism of EVD is analyzed, epidemic scenario is predicted, and corresponding interventions are presented. Finally, by simulating the emergency responses of Chinese government, the conclusion is finally drawn that Ebola is impossible to outbreak in large scale in the city of Beijing.",
"title": ""
},
{
"docid": "58c4c9bd2033645ece7db895d368cda6",
"text": "Nanorobotics is the technology of creating machines or robots of the size of few hundred nanometres and below consisting of components of nanoscale or molecular size. There is an all around development in nanotechnology towards realization of nanorobots in the last two decades. In the present work, the compilation of advancement in nanotechnology in context to nanorobots is done. The challenges and issues in movement of a nanorobot and innovations present in nature to overcome the difficulties in moving at nano-size regimes are discussed. The efficiency aspect in context to artificial nanorobot is also presented.",
"title": ""
},
{
"docid": "bb01b5e24d7472ab52079dcb8a65358d",
"text": "There are plenty of classification methods that perform well when training and testing data are drawn from the same distribution. However, in real applications, this condition may be violated, which causes degradation of classification accuracy. Domain adaptation is an effective approach to address this problem. In this paper, we propose a general domain adaptation framework from the perspective of prediction reweighting, from which a novel approach is derived. Different from the major domain adaptation methods, our idea is to reweight predictions of the training classifier on testing data according to their signed distance to the domain separator, which is a classifier that distinguishes training data (from source domain) and testing data (from target domain). We then propagate the labels of target instances with larger weights to ones with smaller weights by introducing a manifold regularization method. It can be proved that our reweighting scheme effectively brings the source and target domains closer to each other in an appropriate sense, such that classification in target domain becomes easier. The proposed method can be implemented efficiently by a simple two-stage algorithm, and the target classifier has a closed-form solution. The effectiveness of our approach is verified by the experiments on artificial datasets and two standard benchmarks, a visual object recognition task and a cross-domain sentiment analysis of text. Experimental results demonstrate that our method is competitive with the state-of-the-art domain adaptation algorithms.",
"title": ""
}
] | scidocsrr |
d902b33a1bb72273c6bbe7750eeac7dd | How to Measure Motivation : A Guide for the Experimental Social Psychologist | [
{
"docid": "3efb43150881649d020a0c721dc39ae5",
"text": "Six studies explore the role of goal shielding in self-regulation by examining how the activation of focal goals to which the individual is committed inhibits the accessibility of alternative goals. Consistent evidence was found for such goal shielding, and a number of its moderators were identified: Individuals' level of commitment to the focal goal, their degree of anxiety and depression, their need for cognitive closure, and differences in their goal-related tenacity. Moreover, inhibition of alternative goals was found to be more pronounced when they serve the same overarching purpose as the focal goal, but lessened when the alternative goals facilitate focal goal attainment. Finally, goal shielding was shown to have beneficial consequences for goal pursuit and attainment.",
"title": ""
}
] | [
{
"docid": "6cbdb95791cc214a1b977e92e69904bb",
"text": "We study reinforcement learning of chat-bots with recurrent neural network architectures when the rewards are noisy and expensive to obtain. For instance, a chat-bot used in automated customer service support can be scored by quality assurance agents, but this process can be expensive, time consuming and noisy. Previous reinforcement learning work for natural language processing uses onpolicy updates and/or is designed for on-line learning settings. We demonstrate empirically that such strategies are not appropriate for this setting and develop an off-policy batch policy gradient method (BPG). We demonstrate the efficacy of our method via a series of synthetic experiments and an Amazon Mechanical Turk experiment on a restaurant recommendations dataset.",
"title": ""
},
{
"docid": "553b72da13c28e56822ccc900ff114fa",
"text": "This paper presents some of the unique verification, validation, and certification challenges that must be addressed during the development of adaptive system software for use in safety-critical aerospace applications. The paper first discusses the challenges imposed by the current regulatory guidelines for aviation software. Next, a number of individual technologies being researched by NASA and others are discussed that focus on various aspects of the software challenges. These technologies include the formal methods of model checking, compositional verification, static analysis, program synthesis, and runtime analysis. Then the paper presents some validation challenges for adaptive control, including proving convergence over long durations, guaranteeing controller stability, using new tools to compute statistical error bounds, identifying problems in fault-tolerant software, and testing in the presence of adaptation. These specific challenges are presented in the context of a software validation effort in testing the Integrated Flight Control System (IFCS) neural control software at the Dryden Flight Research Center. Lastly, the challenges to develop technologies to help prevent aircraft system failures, detect and identify failures that do occur, and provide enhanced guidance and control capability to prevent and recover from vehicle loss of control are briefly cited in connection with ongoing work at the NASA Langley Research Center.",
"title": ""
},
{
"docid": "23ef781d3230124360f24cc6e38fb15f",
"text": "Exploration of ANNs for the economic purposes is described and empirically examined with the foreign exchange market data. For the experiments, panel data of the exchange rates (USD/EUR, JPN/USD, USD/ GBP) are examined and optimized to be used for time-series predictions with neural networks. In this stage the input selection, in which the processing steps to prepare the raw data to a suitable input for the models are investigated. The best neural network is found with the best forecasting abilities, based on a certain performance measure. A visual graphs on the experiments data set is presented after processing steps, to illustrate that particular results. The out-of-sample results are compared with training ones. & 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "5d5c2016ec936969d3d3a07e0b48f51e",
"text": "In an Information technology world, the ability to effectively process massive datasets has become integral to a broad range of scientific and other academic disciplines. We are living in an era of data deluge and as a result, the term “Big Data” is appearing in many contexts. It ranges from meteorology, genomics, complex physics simulations, biological and environmental research, finance and business to healthcare. Big Data refers to data streams of higher velocity and higher variety. The infrastructure required to support the acquisition of Big Data must deliver low, predictable latency in both capturing data and in executing short, simple queries. To be able to handle very high transaction volumes, often in a distributed environment; and support flexible, dynamic data structures. Data processing is considerably more challenging than simply locating, identifying, understanding, and citing data. For effective large-scale analysis all of this has to happen in a completely automated manner. This requires differences in data structure and semantics to be expressed in forms that are computer understandable, and then “robotically” resolvable. There is a strong body of work in data integration, mapping and transformations. However, considerable additional work is required to achieve automated error-free difference resolution. This paper proposes a framework on recent research for the Data Mining using Big Data.",
"title": ""
},
{
"docid": "5525b8ddce9a8a6430da93f48e93dea5",
"text": "One major goal of vision is to infer physical models of objects, surfaces, and their layout from sensors. In this paper, we aim to interpret indoor scenes from one RGBD image. Our representation encodes the layout of walls, which must conform to a Manhattan structure but is otherwise flexible, and the layout and extent of objects, modeled with CAD-like 3D shapes. We represent both the visible and occluded portions of the scene, producing a complete 3D parse. Such a scene interpretation is useful for robotics and visual reasoning, but difficult to produce due to the wellknown challenge of segmentation, the high degree of occlusion, and the diversity of objects in indoor scene. We take a data-driven approach, generating sets of potential object regions, matching to regions in training images, and transferring and aligning associated 3D models while encouraging fit to observations and overall consistency. We demonstrate encouraging results on the NYU v2 dataset and highlight a variety of interesting directions for future work.",
"title": ""
},
{
"docid": "40c93dacc8318bc440d23fedd2acbd47",
"text": "An electrical-balance duplexer uses series connected step-down transformers to enhance linearity and power handling capability by reducing the voltage swing across nonlinear components. Wideband, dual-notch Tx-to-Rx isolation is demonstrated experimentally with a planar inverted-F antenna. The 0.18μm CMOS prototype achieves >50dB isolation for 220MHz aggregated bandwidth or >40dB dual-notch isolation for 160MHz bandwidth, +49dBm Tx-path IIP3 and -48dBc ACLR1 for +27dBm at the antenna.",
"title": ""
},
{
"docid": "0b0723466d6fc726154befea8a1d7398",
"text": "● Volume of pages makes efficient WWW navigation difficult ● Aim: To analyse users' navigation history to generate tools that increase navigational efficiency – ie. Predictive server prefetching ● Provides a mathematical foundation to several concepts",
"title": ""
},
{
"docid": "e4f4fe27fff75bd7ed079f3094deaedb",
"text": "This paper considers the scenario that multiple data owners wish to apply a machine learning method over the combined dataset of all owners to obtain the best possible learning output but do not want to share the local datasets owing to privacy concerns. We design systems for the scenario that the stochastic gradient descent (SGD) algorithm is used as the machine learning method because SGD (or its variants) is at the heart of recent deep learning techniques over neural networks. Our systems differ from existing systems in the following features: (1) any activation function can be used, meaning that no privacy-preserving-friendly approximation is required; (2) gradients computed by SGD are not shared but the weight parameters are shared instead; and (3) robustness against colluding parties even in the extreme case that only one honest party exists. We prove that our systems, while privacy-preserving, achieve the same learning accuracy as SGD and hence retain the merit of deep learning with respect to accuracy. Finally, we conduct several experiments using benchmark datasets, and show that our systems outperform previous system in terms of learning accuracies. keywords: privacy preservation, stochastic gradient descent, distributed trainers, neural networks.",
"title": ""
},
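To make the "share weights, not gradients" idea from the passage above concrete, here is a hedged toy sketch in which three parties run local SGD on a linear model and only the weight vectors are combined. The cryptographic protection that is the paper's actual contribution is omitted entirely; plain averaging stands in for the secure combination step, and the model, data and hyperparameters are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0])

def local_data(n=200):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

def local_sgd(w, X, y, lr=0.05, steps=20):
    for i in rng.integers(0, len(y), size=steps):
        grad = 2 * (X[i] @ w - y[i]) * X[i]   # squared-error gradient on one sample
        w = w - lr * grad
    return w

parties = [local_data() for _ in range(3)]    # each party keeps its data private
w_global = np.zeros(2)
for _ in range(30):
    # each party refines the shared weights on its own data ...
    local_ws = [local_sgd(w_global.copy(), X, y) for X, y in parties]
    # ... and only the weights are combined (insecurely here, for illustration)
    w_global = np.mean(local_ws, axis=0)
print(w_global)   # approaches [2, -3] without the raw datasets ever being pooled
```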
{
"docid": "9ac7dbae53fe06937780a53dd3432f80",
"text": "Artefact evaluation is regarded as being crucial for Design Science Research (DSR) in order to rigorously proof an artefact’s relevance for practice. The availability of guidelines for structuring DSR processes notwithstanding, the current body of knowledge provides only rudimentary means for a design researcher to select and justify appropriate artefact evaluation strategies in a given situation. This paper proposes patterns that could be used to articulate and justify artefact evaluation strategies within DSR projects. These patterns have been synthesised from priorDSR literature concerned with evaluation strategies. They distinguish both ex ante as well as ex post evaluations and reflect current DSR approaches and evaluation criteria.",
"title": ""
},
{
"docid": "c2a3344c607cf06c24ed8d2664243284",
"text": "It is common for cloud users to require clusters of inter-connected virtual machines (VMs) in a geo-distributed IaaS cloud, to run their services. Compared to isolated VMs, key challenges on dynamic virtual cluster (VC) provisioning (computation + communication resources) lie in two folds: (1) optimal placement of VCs and inter-VM traffic routing involve NP-hard problems, which are non-trivial to solve offline, not to mention if an online efficient algorithm is sought; (2) an efficient pricing mechanism is missing, which charges a market-driven price for each VC as a whole upon request, while maximizing system efficiency or provider revenue over the entire span. This paper proposes efficient online auction mechanisms to address the above challenges. We first design SWMOA, a novel online algorithm for dynamic VC provisioning and pricing, achieving truthfulness, individual rationality, computation efficiency, and <inline-formula><tex-math notation=\"LaTeX\">$(1+2\\log \\mu)$</tex-math><alternatives> <inline-graphic xlink:href=\"wu-ieq1-2601905.gif\"/></alternatives></inline-formula>-competitiveness in social welfare, where <inline-formula><tex-math notation=\"LaTeX\">$\\mu$</tex-math><alternatives> <inline-graphic xlink:href=\"wu-ieq2-2601905.gif\"/></alternatives></inline-formula> is related to the problem size. Next, applying a randomized reduction technique, we convert the social welfare maximizing auction into a revenue maximizing online auction, PRMOA, achieving <inline-formula><tex-math notation=\"LaTeX\">$O(\\log \\mu)$ </tex-math><alternatives><inline-graphic xlink:href=\"wu-ieq3-2601905.gif\"/></alternatives></inline-formula> -competitiveness in provider revenue, as well as truthfulness, individual rationality and computation efficiency. We investigate auction design in different cases of resource cost functions in the system. We validate the efficacy of the mechanisms through solid theoretical analysis and trace-driven simulations.",
"title": ""
},
{
"docid": "dd53308cc19f85e2a7ab2e379e196b6c",
"text": "Due to the increasingly aging population, there is a rising demand for assistive living technologies for the elderly to ensure their health and well-being. The elderly are mostly chronic patients who require frequent check-ups of multiple vital signs, some of which (e.g., blood pressure and blood glucose) vary greatly according to the daily activities that the elderly are involved in. Therefore, the development of novel wearable intelligent systems to effectively monitor the vital signs continuously over a 24 hour period is in some cases crucial for understanding the progression of chronic symptoms in the elderly. In this paper, recent development of Wearable Intelligent Systems for e-Health (WISEs) is reviewed, including breakthrough technologies and technical challenges that remain to be solved. A novel application of wearable technologies for transient cardiovascular monitoring during water drinking is also reported. In particular, our latest results found that heart rate increased by 9 bpm (P < 0.001) and pulse transit time was reduced by 5 ms (P < 0.001), indicating a possible rise in blood pressure, during swallowing. In addition to monitoring physiological conditions during daily activities, it is anticipated that WISEs will have a number of other potentially viable applications, including the real-time risk prediction of sudden cardiovascular events and deaths. Category: Smart and intelligent computing",
"title": ""
},
{
"docid": "1c77e4e01e20b33aca309adabb37868d",
"text": "From the automated text processing point of view, natural language is very redundant in the sense that many different words share a common or similar meaning. For computer this can be hard to understand without some background knowledge. Latent Semantic Indexing (LSI) is a technique that helps in extracting some of this background knowledge from corpus of text documents. This can be also viewed as extraction of hidden semantic concepts from text documents. On the other hand visualization can be very helpful in data analysis, for instance, for finding main topics that appear in larger sets of documents. Extraction of main concepts from documents using techniques such as LSI, can make the results of visualizations more useful. For example, given a set of descriptions of European Research projects (6FP) one can find main areas that these projects cover including semantic web, e-learning, security, etc. In this paper we describe a method for visualization of document corpus based on LSI, the system implementing it and give results of using the system on several datasets.",
"title": ""
},
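A generic LSI pipeline of the kind the passage above describes, shown as a sketch rather than the authors' system: a TF-IDF term-document matrix, truncated SVD to extract latent concepts, and a 2-D projection that could feed a corpus visualization. The tiny corpus, the scikit-learn calls and the component count are assumptions made for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "semantic web ontologies for knowledge sharing",
    "e-learning platforms and online course delivery",
    "network security and intrusion detection",
    "ontology based semantic annotation of web services",
]
tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)  # term-document matrix
lsi = TruncatedSVD(n_components=2, random_state=0)                  # latent concepts
coords = lsi.fit_transform(tfidf)                                   # one 2-D point per document
for doc, (x, y) in zip(docs, coords):
    print(f"({x:+.2f}, {y:+.2f})  {doc[:40]}")
```

Documents that share latent concepts (here the two ontology/semantic-web items) end up close together in the projected space, which is what makes the reduced representation useful for topic-oriented corpus maps.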
{
"docid": "98f052bd353437e70b4ccc15d933d961",
"text": "Current cloud providers use fixed-price based mechanisms to allocate Virtual Machine (VM) instances to their users. The fixed-price based mechanisms do not provide an efficient allocation of resources and do not maximize the revenue of the cloud providers. A better alternative would be to use combinatorial auction-based resource allocation mechanisms. In this PhD dissertation we will design, study and implement combinatorial auction-based mechanisms for efficient provisioning and allocation of VM instances in cloud computing environments. We present our preliminary results consisting of three combinatorial auction-based mechanisms for VM provisioning and allocation. We also present an efficient bidding algorithm that can be used by the cloud users to decide on how to bid for their requested bundles of VM instances.",
"title": ""
},
{
"docid": "ef142067a29f8662e36d68ee37c07bce",
"text": "The lack of assessment tools to analyze serious games and insufficient knowledge on their impact on players is a recurring critique in the field of game and media studies, education science and psychology. Although initial empirical studies on serious games usage deliver discussable results, numerous questions remain unacknowledged. In particular, questions regarding the quality of their formal conceptual design in relation to their purpose mostly stay uncharted. In the majority of cases the designers' good intentions justify incoherence and insufficiencies in their design. In addition, serious games are mainly assessed in terms of the quality of their content, not in terms of their intention-based design. This paper argues that analyzing a game's formal conceptual design, its elements, and their relation to each other based on the game's purpose is a constructive first step in assessing serious games. By outlining the background of the Serious Game Design Assessment Framework and exemplifying its use, a constructive structure to examine purpose-based games is introduced. To demonstrate how to assess the formal conceptual design of serious games we applied the SGDA Framework to the online games \"Sweatshop\" (2011) and \"ICED\" (2008).",
"title": ""
},
{
"docid": "566144a980fe85005f7434f7762bfeb9",
"text": "This article describes the rationale, development, and validation of the Scale for Suicide Ideation (SSI), a 19-item clinical research instrument designed to quantify and assess suicidal intention. The scale was found to have high internal consistency and moderately high correlations with clinical ratings of suicidal risk and self-administered measures of self-harm. Furthermore, it was sensitive to changes in levels of depression and hopelessness over time. Its construct validity was supported by two studies by different investigators testing the relationship between hopelessness, depression, and suicidal ideation and by a study demonstrating a significant relationship between high level of suicidal ideation and \"dichotomous\" attitudes about life and related concepts on a semantic differential test. Factor analysis yielded three meaningful factors: active suicidal desire, specific plans for suicide, and passive suicidal desire.",
"title": ""
},
{
"docid": "176c9231f27d22658be5107a74ab2f32",
"text": "The emerging ambient persuasive technology looks very promising for many areas of personal and ubiquitous computing. Persuasive applications aim at changing human attitudes or behavior through the power of software designs. This theory-creating article suggests the concept of a behavior change support system (BCSS), whether web-based, mobile, ubiquitous, or more traditional information system to be treated as the core of research into persuasion, influence, nudge, and coercion. This article provides a foundation for studying BCSSs, in which the key constructs are the O/C matrix and the PSD model. It will (1) introduce the archetypes of behavior change via BCSSs, (2) describe the design process for building persuasive BCSSs, and (3) exemplify research into BCSSs through the domain of health interventions. Recognizing the themes put forward in this article will help leverage the full potential of computing for producing behavioral changes.",
"title": ""
},
{
"docid": "11ae42bedc18dedd0c29004000a4ec00",
"text": "A hand injury can have great impact on a person's daily life. However, the current manual evaluations of hand functions are imprecise and inconvenient. In this research, a data glove embedded with 6-axis inertial sensors is proposed. With the proposed angle calculating algorithm, accurate bending angles are measured to estimate the real-time movements of hands. This proposed system can provide physicians with an efficient tool to evaluate the recovery of patients and improve the quality of hand rehabilitation.",
"title": ""
},
{
"docid": "39be1d73b84872b0ae1d61bbd0fc96f8",
"text": "Annotating data is a common bottleneck in building text classifiers. This is particularly problematic in social media domains, where data drift requires frequent retraining to maintain high accuracy. In this paper, we propose and evaluate a text classification method for Twitter data whose only required human input is a single keyword per class. The algorithm proceeds by identifying exemplar Twitter accounts that are representative of each class by analyzing Twitter Lists (human-curated collections of related Twitter accounts). A classifier is then fit to the exemplar accounts and used to predict labels of new tweets and users. We develop domain adaptation methods to address the noise and selection bias inherent to this approach, which we find to be critical to classification accuracy. Across a diverse set of tasks (topic, gender, and political affiliation classification), we find that the resulting classifier is competitive with a fully supervised baseline, achieving superior accuracy on four of six datasets despite using no manually labeled data.",
"title": ""
},
{
"docid": "f1d4323cbabd294723a2fd68321ad640",
"text": "Mycosis fungoides (MF), a low-grade lymphoproliferative disorder, is the most common type of cutaneous T-cell lymphoma. Typically, neoplastic T cells localize to the skin and produce patches, plaques, tumours or erythroderma. Diagnosis of MF can be difficult due to highly variable presentations and the sometimes nonspecific nature of histological findings. Molecular biology has improved the diagnostic accuracy. Nevertheless, clinical experience is of substantial importance as MF can resemble a wide variety of skin diseases. We performed a literature review and found that MF can mimic >50 different clinical entities. We present a structured framework of clinical variations of classical, unusual and distinct forms of MF. Distinct subforms such as ichthyotic MF, adnexotropic (including syringotropic and folliculotropic) MF, MF with follicular mucinosis, granulomatous MF with granulomatous slack skin and papuloerythroderma of Ofuji are delineated in more detail.",
"title": ""
},
{
"docid": "5c74348ce0028786990b4ca39b1e858d",
"text": "The terminology Internet of Things (IoT) refers to a future where every day physical objects are connected by the Internet in one form or the other, but outside the traditional desktop realm. The successful emergence of the IoT vision, however, will require computing to extend past traditional scenarios involving portables and smart-phones to the connection of everyday physical objects and the integration of intelligence with the environment. Subsequently, this will lead to the development of new computing features and challenges. The main purpose of this paper, therefore, is to investigate the features, challenges, and weaknesses that will come about, as the IoT becomes reality with the connection of more and more physical objects. Specifically, the study seeks to assess emergent challenges due to denial of service attacks, eavesdropping, node capture in the IoT infrastructure, and physical security of the sensors. We conducted a literature review about IoT, their features, challenges, and vulnerabilities. The methodology paradigm used was qualitative in nature with an exploratory research design, while data was collected using the desk research method. We found that, in the distributed form of architecture in IoT, attackers could hijack unsecured network devices converting them into bots to attack third parties. Moreover, attackers could target communication channels and extract data from the information flow. Finally, the perceptual layer in distributed IoT architecture is also found to be vulnerable to node capture attacks, including physical capture, brute force attack, DDoS attacks, and node privacy leaks.",
"title": ""
}
] | scidocsrr |
463cfc839609d32f61e48ffd239310f4 | Centering Theory in Spanish: Coding Manual | [
{
"docid": "c1e39be2fa21a4f47d163c1407490dc8",
"text": "Most existing anaphora resolution algorithms are designed to account only for anaphors with NP-antecedents. This paper describes an algorithm for the resolution of discourse deictic anaphors, which constitute a large percentage of anaphors in spoken dialogues. The success of the resolution is dependent on the classification of all pronouns and demonstratives into individual, discourse deictic and vague anaphora. Finally, the empirical results of the application of the algorithm to a corpus of spoken dialogues are presented.",
"title": ""
}
] | [
{
"docid": "b35922663b4728c409528675be15d586",
"text": "High-resolution screen printing of pristine graphene is introduced for the rapid fabrication of conductive lines on flexible substrates. Well-defined silicon stencils and viscosity-controlled inks facilitate the preparation of high-quality graphene patterns as narrow as 40 μm. This strategy provides an efficient method to produce highly flexible graphene electrodes for printed electronics.",
"title": ""
},
{
"docid": "6d5b7b5e1738993991a1344a1f584b68",
"text": "Smart route planning gathers increasing interest as cities become crowded and jammed. We present a system for individual trip planning that incorporates future traffic hazards in routing. Future traffic conditions are computed by a Spatio-Temporal Random Field based on a stream of sensor readings. In addition, our approach estimates traffic flow in areas with low sensor coverage using a Gaussian Process Regression. The conditioning of spatial regression on intermediate predictions of a discrete probabilistic graphical model allows to incorporate historical data, streamed online data and a rich dependency structure at the same time. We demonstrate the system and test model assumptions with a real-world use-case from Dublin city, Ireland.",
"title": ""
},
{
"docid": "14fb6228827657ba6f8d35d169ad3c63",
"text": "In a recent paper, the authors proposed a new class of low-complexity iterative thresholding algorithms for reconstructing sparse signals from a small set of linear measurements. The new algorithms are broadly referred to as AMP, for approximate message passing. This is the first of two conference papers describing the derivation of these algorithms, connection with the related literature, extensions of the original framework, and new empirical evidence. In particular, the present paper outlines the derivation of AMP from standard sum-product belief propagation, and its extension in several directions. We also discuss relations with formal calculations based on statistical mechanics methods.",
"title": ""
},
{
"docid": "c8768e560af11068890cc097f1255474",
"text": "Abstract This paper describes the functionality of MEAD, a comprehensive, public domain, open source, multidocument multilingual summarization environment that has been thus far downloaded by more than 500 organizations. MEAD has been used in a variety of summarization applications ranging from summarization for mobile devices to Web page summarization within a search engine and to novelty detection.",
"title": ""
},
{
"docid": "052ae69b1fe396f66cb4788372dc3c79",
"text": "Model transformation by example is a novel approach in model-driven software engineering to derive model transformation rules from an initial prototypical set of interrelated source and target models, which describe critical cases of the model transformation problem in a purely declarative way. In the current paper, we automate this approach using inductive logic programming (Muggleton and Raedt in J Logic Program 19-20:629–679, 1994) which aims at the inductive construction of first-order clausal theories from examples and background knowledge.",
"title": ""
},
{
"docid": "b3962fd4000fced796f3764d009c929e",
"text": "Low-field extremity magnetic resonance imaging (lfMRI) is currently commercially available and has been used clinically to evaluate rheumatoid arthritis (RA). However, one disadvantage of this new modality is that the field of view (FOV) is too small to assess hand and wrist joints simultaneously. Thus, we have developed a new lfMRI system, compacTscan, with a FOV that is large enough to simultaneously assess the entire wrist to proximal interphalangeal joint area. In this work, we examined its clinical value compared to conventional 1.5 tesla (T) MRI. The comparison involved evaluating three RA patients by both 0.3 T compacTscan and 1.5 T MRI on the same day. Bone erosion, bone edema, and synovitis were estimated by our new compact MRI scoring system (cMRIS) and the kappa coefficient was calculated on a joint-by-joint basis. We evaluated a total of 69 regions. Bone erosion was detected in 49 regions by compacTscan and in 48 regions by 1.5 T MRI, while the total erosion score was 77 for compacTscan and 76.5 for 1.5 T MRI. These findings point to excellent agreement between the two techniques (kappa = 0.833). Bone edema was detected in 14 regions by compacTscan and in 19 by 1.5 T MRI, and the total edema score was 36.25 by compacTscan and 47.5 by 1.5 T MRI. Pseudo-negative findings were noted in 5 regions. However, there was still good agreement between the techniques (kappa = 0.640). Total number of evaluated joints was 33. Synovitis was detected in 13 joints by compacTscan and 14 joints by 1.5 T MRI, while the total synovitis score was 30 by compacTscan and 32 by 1.5 T MRI. Thus, although 1 pseudo-positive and 2 pseudo-negative findings resulted from the joint evaluations, there was again excellent agreement between the techniques (kappa = 0.827). Overall, the data obtained by our compacTscan system showed high agreement with those obtained by conventional 1.5 T MRI with regard to diagnosis and the scoring of bone erosion, edema, and synovitis. We conclude that compacTscan is useful for diagnosis and estimation of disease activity in patients with RA.",
"title": ""
},
{
"docid": "50570741405703e6b47d285237b6eeed",
"text": "The knowledge base is a machine-readable set of knowledge. More and more multi-domain and large-scale knowledge bases have emerged in recent years, and they play an essential role in many information systems and semantic annotation tasks. However we do not have a perfect knowledge base yet and maybe we will never have a perfect one, because all the knowledge bases have limited coverage while new knowledge continues to emerge. Therefore populating and enriching the existing knowledge base become important tasks. Traditional knowledge base population task usually leverages the information embedded in the unstructured free text. Recently researchers found that massive structured tables on the Web are high-quality relational data and easier to be utilized than the unstructured text. Our goal of this paper is to enrich the knowledge base using Wikipedia tables. Here, knowledge means binary relations between entities and we focus on the relations in some specific domains. There are two basic types of information can be used in this task: the existing relation instances and the connection between types and relations. We firstly propose two basic probabilistic models based on two types of information respectively. Then we propose a light-weight aggregated model to combine the advantages of basic models. The experimental results show that our method is an effective approach to enriching the knowledge base with both high precision and recall.",
"title": ""
},
{
"docid": "5bc1c336b8e495e44649365f11af4ab8",
"text": "Convolutional neural networks (CNN) are limited by the lack of capability to handle geometric information due to the fixed grid kernel structure. The availability of depth data enables progress in RGB-D semantic segmentation with CNNs. State-of-the-art methods either use depth as additional images or process spatial information in 3D volumes or point clouds. These methods suffer from high computation and memory cost. To address these issues, we present Depth-aware CNN by introducing two intuitive, flexible and effective operations: depth-aware convolution and depth-aware average pooling. By leveraging depth similarity between pixels in the process of information propagation, geometry is seamlessly incorporated into CNN. Without introducing any additional parameters, both operators can be easily integrated into existing CNNs. Extensive experiments and ablation studies on challenging RGB-D semantic segmentation benchmarks validate the effectiveness and flexibility of our approach.",
"title": ""
},
{
"docid": "c9e11acaa2fbee77d079ecafbb9ae93a",
"text": "Alcohol consumption is highly prevalent in university students. Early detection in future health professionals is important: their consumption might not only influence their own health but may determine how they deal with the implementation of preventive strategies in the future. The aim of this paper is to detect the prevalence of risky alcohol consumption in first- and last-degree year students and to compare their drinking patterns.Risky drinking in pharmacy students (n=434) was assessed and measured with the AUDIT questionnaire (Alcohol Use Disorders Identification Test). A comparative analysis between college students from the first and fifth years of the degree in pharmacy, and that of a group of professors was carried to see differences in their alcohol intake patterns.Risky drinking was detected in 31.3% of students. The highest prevalence of risky drinkers, and the total score of the AUDIT test was found in students in their first academic year. Students in the first academic level taking morning classes had a two-fold risk of risky drinking (OR=1.9 (IC 95%1.1-3.1)) compared with students in the fifth level. The frequency of alcohol consumption increases with the academic level, whereas the number of alcohol beverages per drinking occasion falls.Risky drinking is high during the first year of university. As alcohol consumption might decrease with age, it is important to design preventive strategies that will strengthen this tendency.",
"title": ""
},
{
"docid": "b753eb752d4f87dbff82d77e8417f389",
"text": "Our research team has spent the last few years studying the cognitive processes involved in simultaneous interpreting. The results of this research have shown that professional interpreters develop specific ways of using their working memory, due to their work in simultaneous interpreting; this allows them to perform the processes of linguistic input, lexical and semantic access, reformulation and production of the segment translated both simultaneously and under temporal pressure (Bajo, Padilla & Padilla, 1998). This research led to our interest in the processes involved in the tasks of mediation in general. We understand that linguistic and cultural mediation involves not only translation but also the different forms of interpreting: consecutive and simultaneous. Our general objective in this project is to outline a cognitive theory of translation and interpreting and find empirical support for it. From the field of translation and interpreting there have been some attempts to create global and partial theories of the processes of mediation (Gerver, 1976; Moser-Mercer, 1997; Gile, 1997), but most of these attempts lack empirical support. On the other hand, from the field of psycholinguistics there have been some attempts to make an empirical study of the tasks of translation (De Groot, 1993; Sánchez-Casas Davis and GarcíaAlbea, 1992) and interpreting (McDonald and Carpenter, 1981), but these have always been partial, concentrating on very specific aspects of translation and interpreting. The specific objectives of this project are:",
"title": ""
},
{
"docid": "fa0f02cde08a3cee4b691788815cb757",
"text": "Control strategies for these contaminants will require a better understanding of how they move around the globe.",
"title": ""
},
{
"docid": "39803815c3edfaa2327327efaef80804",
"text": "Spatial pyramid matching (SPM) based pooling has been the dominant choice for state-of-art image classification systems. In contrast, we propose a novel object-centric spatial pooling (OCP) approach, following the intuition that knowing the location of the object of interest can be useful for image classification. OCP consists of two steps: (1) inferring the location of the objects, and (2) using the location information to pool foreground and background features separately to form the image-level representation. Step (1) is particularly challenging in a typical classification setting where precise object location annotations are not available during training. To address this challenge, we propose a framework that learns object detectors using only image-level class labels, or so-called weak labels. We validate our approach on the challenging PASCAL07 dataset. Our learned detectors are comparable in accuracy with stateof-the-art weakly supervised detection methods. More importantly, the resulting OCP approach significantly outperforms SPM-based pooling in image classification.",
"title": ""
},
{
"docid": "0f1a36a4551dc9c6b4ae127c34ff7330",
"text": "Internet of Things (IoT) is reshaping our daily lives by bridging the gaps between physical and digital world. To enable ubiquitous sensing, seamless connection and real-time processing for IoT applications, fog computing is considered as a key component in a heterogeneous IoT architecture, which deploys storage and computing resources to network edges. However, the fog-based IoT architecture can lead to various security and privacy risks, such as compromised fog nodes that may impede developments of IoT by attacking the data collection and gathering period. In this paper, we propose a novel privacy-preserving and reliable scheme for the fog-based IoT to address the data privacy and reliability challenges of the selective data aggregation service. Specifically, homomorphic proxy re-encryption and proxy re-authenticator techniques are respectively utilized to deal with the data privacy and reliability issues of the service, which supports data aggregation over selective data types for any type-driven applications. We define a new threat model to formalize the non-collusive and collusive attacks of compromised fog nodes, and it is demonstrated that the proposed scheme can prevent both non-collusive and collusive attacks in our model. In addition, performance evaluations show the efficiency of the scheme in terms of computational costs and communication overheads.",
"title": ""
},
{
"docid": "1a9e75efcc710b3bc8c5d450d29eea7c",
"text": "This paper presents the tuning of the structure and parameters of a neural network using an improved genetic algorithm (GA). It is also shown that the improved GA performs better than the standard GA based on some benchmark test functions. A neural network with switches introduced to its links is proposed. By doing this, the proposed neural network can learn both the input-output relationships of an application and the network structure using the improved GA. The number of hidden nodes is chosen manually by increasing it from a small number until the learning performance in terms of fitness value is good enough. Application examples on sunspot forecasting and associative memory are given to show the merits of the improved GA and the proposed neural network.",
"title": ""
},
{
"docid": "912c92dd4755cfb280f948bd4264ded7",
"text": "A decision is a commitment to a proposition or plan of action based on information and values associated with the possible outcomes. The process operates in a flexible timeframe that is free from the immediacy of evidence acquisition and the real time demands of action itself. Thus, it involves deliberation, planning, and strategizing. This Perspective focuses on perceptual decision making in nonhuman primates and the discovery of neural mechanisms that support accuracy, speed, and confidence in a decision. We suggest that these mechanisms expose principles of cognitive function in general, and we speculate about the challenges and directions before the field.",
"title": ""
},
{
"docid": "a671c6eff981b5e3a0466e53f22c4521",
"text": "This paper investigates recently proposed approaches for defending against adversarial examples and evaluating adversarial robustness. We motivate adversarial risk as an objective for achieving models robust to worst-case inputs. We then frame commonly used attacks and evaluation metrics as defining a tractable surrogate objective to the true adversarial risk. This suggests that models may optimize this surrogate rather than the true adversarial risk. We formalize this notion as obscurity to an adversary, and develop tools and heuristics for identifying obscured models and designing transparent models. We demonstrate that this is a significant problem in practice by repurposing gradient-free optimization techniques into adversarial attacks, which we use to decrease the accuracy of several recently proposed defenses to near zero. Our hope is that our formulations and results will help researchers to develop more powerful defenses.",
"title": ""
},
{
"docid": "f8d50c7fe96fdf8fbe06332ab7e1a2a6",
"text": "There is a strong need for advanced control methods in battery management systems, especially in the plug-in hybrid and electric vehicles sector, due to cost and safety issues of new high-power battery packs and high-energy cell design. Limitations in computational speed and available memory require the use of very simple battery models and basic control algorithms, which in turn result in suboptimal utilization of the battery. This work investigates the possible use of optimal control strategies for charging. We focus on the minimum time charging problem, where different constraints on internal battery states are considered. Based on features of the open-loop optimal charging solution, we propose a simple one-step predictive controller, which is shown to recover the time-optimal solution, while being feasible for real-time computations. We present simulation results suggesting a decrease in charging time by 50% compared to the conventional constant-current / constant-voltage method for lithium-ion batteries.",
"title": ""
},
{
"docid": "e05270c1d2abeda1cee99f1097c1c5d5",
"text": "E-transactions have become promising and very much convenient due to worldwide and usage of the internet. The consumer reviews are increasing rapidly in number on various products. These large numbers of reviews are beneficial to manufacturers and consumers alike. It is a big task for a potential consumer to read all reviews to make a good decision of purchasing. It is beneficial to mine available consumer reviews for popular products from various product review sites of consumer. The first step is performing sentiment analysis to decide the polarity of a review. On the basis of polarity, we can then classify the review. Comparison is made among the different WEKA classifiers in the form of charts and graphs.",
"title": ""
},
{
"docid": "dbf683e908ea9e5962d0830e6b8d24fd",
"text": "This paper studies physical layer security in a wireless ad hoc network with numerous legitimate transmitter–receiver pairs and eavesdroppers. A hybrid full-duplex (FD)/half-duplex receiver deployment strategy is proposed to secure legitimate transmissions, by letting a fraction of legitimate receivers work in the FD mode sending jamming signals to confuse eavesdroppers upon their information receptions, and letting the other receivers work in the half-duplex mode just receiving their desired signals. The objective of this paper is to choose properly the fraction of FD receivers for achieving the optimal network security performance. Both accurate expressions and tractable approximations for the connection outage probability and the secrecy outage probability of an arbitrary legitimate link are derived, based on which the area secure link number, network-wide secrecy throughput, and network-wide secrecy energy efficiency are optimized, respectively. Various insights into the optimal fraction are further developed, and its closed-form expressions are also derived under perfect self-interference cancellation or in a dense network. It is concluded that the fraction of FD receivers triggers a non-trivial tradeoff between reliability and secrecy, and the proposed strategy can significantly enhance the network security performance.",
"title": ""
}
] | scidocsrr |
3cff79c9c9419de7a4a231917714c1e5 | Design of Secure and Lightweight Authentication Protocol for Wearable Devices Environment | [
{
"docid": "a85d07ae3f19a0752f724b39df5eca2b",
"text": "Despite two decades of intensive research, it remains a challenge to design a practical anonymous two-factor authentication scheme, for the designers are confronted with an impressive list of security requirements (e.g., resistance to smart card loss attack) and desirable attributes (e.g., local password update). Numerous solutions have been proposed, yet most of them are shortly found either unable to satisfy some critical security requirements or short of a few important features. To overcome this unsatisfactory situation, researchers often work around it in hopes of a new proposal (but no one has succeeded so far), while paying little attention to the fundamental question: whether or not there are inherent limitations that prevent us from designing an “ideal” scheme that satisfies all the desirable goals? In this work, we aim to provide a definite answer to this question. We first revisit two foremost proposals, i.e. Tsai et al.'s scheme and Li's scheme, revealing some subtleties and challenges in designing such schemes. Then, we systematically explore the inherent conflicts and unavoidable trade-offs among the design criteria. Our results indicate that, under the current widely accepted adversarial model, certain goals are beyond attainment. This also suggests a negative answer to the open problem left by Huang et al. in 2014. To the best of knowledge, the present study makes the first step towards understanding the underlying evaluation metric for anonymous two-factor authentication, which we believe will facilitate better design of anonymous two-factor protocols that offer acceptable trade-offs among usability, security and privacy.",
"title": ""
}
] | [
{
"docid": "f478bbf48161da50017d3ec9f8e677b4",
"text": "Between November 1998 and December 1999, trained medical record abstractors visited the Micronesian jurisdictions of Chuuk, Kosrae, Pohnpei, and Yap (the four states of the Federated States of Micronesia), as well as the Republic of Palau (Belau), the Republic of Kiribati, the Republic of the Marshall Islands (RMI), and the Republic of Nauru to review all available medical records in order to describe the epidemiology of cancer in Micronesia. Annualized age-adjusted, site-specific cancer period prevalence rates for individual jurisdictions were calculated. Site-specific cancer occurrence in Micronesia follows a pattern characteristic of developing nations. At the same time, cancers associated with developed countries are also impacting these populations. Recommended are jurisdiction-specific plans that outline the steps and resources needed to establish or improve local cancer registries; expand cancer awareness and screening activities; and improve diagnostic and treatment capacity.",
"title": ""
},
{
"docid": "62a51c43d4972d41d3b6cdfa23f07bb9",
"text": "To meet the development of Internet of Things (IoT), IETF has proposed IPv6 standards working under stringent low-power and low-cost constraints. However, the behavior and performance of the proposed standards have not been fully understood, especially the RPL routing protocol lying at the heart the protocol stack. In this work, we make an in-depth study on a popular implementation of the RPL (routing protocol for low power and lossy network) to provide insights and guidelines for the adoption of these standards. Specifically, we use the Contiki operating system and COOJA simulator to evaluate the behavior of the ContikiRPL implementation. We analyze the performance for different networking settings. Different from previous studies, our work is the first effort spanning across the whole life cycle of wireless sensor networks, including both the network construction process and the functioning stage. The metrics evaluated include signaling overhead, latency, energy consumption and so on, which are vital to the overall performance of a wireless sensor network. Furthermore, based on our observations, we provide a few suggestions for RPL implemented WSN. This study can also serve as a basis for future enhancement on the proposed standards.",
"title": ""
},
{
"docid": "6d97cbe726eca4b883cf7c8c2d939f8b",
"text": "In this paper, a new ensemble forecasting model for short-term load forecasting (STLF) is proposed based on extreme learning machine (ELM). Four important improvements are used to support the ELM for increased forecasting performance. First, a novel wavelet-based ensemble scheme is carried out to generate the individual ELM-based forecasters. Second, a hybrid learning algorithm blending ELM and the Levenberg-Marquardt method is proposed to improve the learning accuracy of neural networks. Third, a feature selection method based on the conditional mutual information is developed to select a compact set of input variables for the forecasting model. Fourth, to realize an accurate ensemble forecast, partial least squares regression is utilized as a combining approach to aggregate the individual forecasts. Numerical testing shows that proposed method can obtain better forecasting results in comparison with other standard and state-of-the-art methods.",
"title": ""
},
{
"docid": "cbb6bac245862ed0265f6d32e182df92",
"text": "With the explosion of online communication and publication, texts become obtainable via forums, chat messages, blogs, book reviews and movie reviews. Usually, these texts are much short and noisy without sufficient statistical signals and enough information for a good semantic analysis. Traditional natural language processing methods such as Bow-of-Word (BOW) based probabilistic latent semantic models fail to achieve high performance due to the short text environment. Recent researches have focused on the correlations between words, i.e., term dependencies, which could be helpful for mining latent semantics hidden in short texts and help people to understand them. Long short-term memory (LSTM) network can capture term dependencies and is able to remember the information for long periods of time. LSTM has been widely used and has obtained promising results in variants of problems of understanding latent semantics of texts. At the same time, by analyzing the texts, we find that a number of keywords contribute greatly to the semantics of the texts. In this paper, we establish a keyword vocabulary and propose an LSTM-based model that is sensitive to the words in the vocabulary; hence, the keywords leverage the semantics of the full document. The proposed model is evaluated in a short-text sentiment analysis task on two datasets: IMDB and SemEval-2016, respectively. Experimental results demonstrate that our model outperforms the baseline LSTM by 1%~2% in terms of accuracy and is effective with significant performance enhancement over several non-recurrent neural network latent semantic models (especially in dealing with short texts). We also incorporate the idea into a variant of LSTM named the gated recurrent unit (GRU) model and achieve good performance, which proves that our method is general enough to improve different deep learning models.",
"title": ""
},
{
"docid": "bf4776d6d01d63d3eb6dbeba693bf3de",
"text": "As the development of microprocessors, power electronic converters and electric motor drives, electric power steering (EPS) system which uses an electric motor came to use a few year ago. Electric power steering systems have many advantages over traditional hydraulic power steering systems in engine efficiency, space efficiency, and environmental compatibility. This paper deals with design and optimization of an interior permanent magnet (IPM) motor for power steering application. Simulated Annealing method is used for optimization. After optimization and finding motor parameters, An IPM motor and drive with mechanical parts of EPS system is simulated and performance evaluation of system is done.",
"title": ""
},
{
"docid": "71b0dbd905c2a9f4111dfc097bfa6c67",
"text": "In this paper, the authors undertake a study of cyber warfare reviewing theories, law, policies, actual incidents and the dilemma of anonymity. Starting with the United Kingdom perspective on cyber warfare, the authors then consider United States' views including the perspective of its military on the law of war and its general inapplicability to cyber conflict. Consideration is then given to the work of the United Nations' group of cyber security specialists and diplomats who as of July 2010 have agreed upon a set of recommendations to the United Nations Secretary General for negotiations on an international computer security treaty. An examination of the use of a nation's cybercrime law to prosecute violations that occur over the Internet indicates the inherent limits caused by the jurisdictional limits of domestic law to address cross-border cybercrime scenarios. Actual incidents from Estonia (2007), Georgia (2008), Republic of Korea (2009), Japan (2010), ongoing attacks on the United States as well as other incidents and reports on ongoing attacks are considered as well. Despite the increasing sophistication of such cyber attacks, it is evident that these attacks were met with a limited use of law and policy to combat them that can be only be characterised as a response posture defined by restraint. Recommendations are then examined for overcoming the attribution problem. The paper then considers when do cyber attacks rise to the level of an act of war by reference to the work of scholars such as Schmitt and Wingfield. Further evaluation of the special impact that non-state actors may have and some theories on how to deal with the problem of asymmetric players are considered. Discussion and possible solutions are offered. A conclusion is offered drawing some guidance from the writings of the Chinese philosopher Sun Tzu. Finally, an appendix providing a technical overview of the problem of attribution and the dilemma of anonymity in cyberspace is provided. 1. The United Kingdom Perspective \"If I went and bombed a power station in France, that would be an act of war. If I went on to the net and took out a power station, is that an act of war? One",
"title": ""
},
{
"docid": "a5d100fd83620d9cc868a33ab6367be2",
"text": "Identifying the lineage path of neural cells is critical for understanding the development of brain. Accurate neural cell detection is a crucial step to obtain reliable delineation of cell lineage. To solve this task, in this paper we present an efficient neural cell detection method based on SSD (single shot multibox detector) neural network model. Our method adapts the original SSD architecture and removes the unnecessary blocks, leading to a light-weight model. Moreover, we formulate the cell detection as a binary regression problem, which makes our model much simpler. Experimental results demonstrate that, with only a small training set, our method is able to accurately capture the neural cells under severe shape deformation in a fast way.",
"title": ""
},
{
"docid": "2a8f464e709dcae4e34f73654aefe31f",
"text": "LTE 4G cellular networks are gradually being adopted by all major operators in the world and are expected to rule the cellular landscape at least for the current decade. They will also form the starting point for further progress beyond the current generation of mobile cellular networks to chalk a path towards fifth generation mobile networks. The lack of open cellular ecosystem has limited applied research in this field within the boundaries of vendor and operator R&D groups. Furthermore, several new approaches and technologies are being considered as potential elements making up such a future mobile network, including cloudification of radio network, radio network programability and APIs following SDN principles, native support of machine-type communication, and massive MIMO. Research on these technologies requires realistic and flexible experimentation platforms that offer a wide range of experimentation modes from real-world experimentation to controlled and scalable evaluations while at the same time retaining backward compatibility with current generation systems.\n In this work, we present OpenAirInterface (OAI) as a suitably flexible platform towards open LTE ecosystem and playground [1]. We will demonstrate an example of the use of OAI to deploy a low-cost open LTE network using commodity hardware with standard LTE-compatible devices. We also show the reconfigurability features of the platform.",
"title": ""
},
{
"docid": "6513c4ca4197e9ff7028e527a621df0a",
"text": "The development of complex distributed systems demands for the creation of suitable architectural styles (or paradigms) and related run-time infrastructures. An emerging style that is receiving increasing attention is based on the notion of event. In an event-based architecture, distributed software components interact by generating and consuming events. An event is the occurrence of some state change in a component of a software system, made visible to the external world. The occurrence of an event in a component is asynchronously notified to any other component that has declared some interest in it. This paradigm (usually called “publish/subscribe” from the names of the two basic operations that regulate the communication) holds the promise of supporting a flexible and effective interaction among highly reconfigurable, distributed software components. In the past two years, we have developed an object-oriented infrastructure called JEDI (Java Event-based Distributed Infrastructure). JEDI supports the development and operation of event-based systems and has been used to implement a significant example of distributed system, namely, the OPSS workflow management system (WFMS). The paper illustrates JEDI main features and how we have used them to implement OPSS. Moreover, the paper provides an initial evaluation of our experiences in using the event-based architectural style and a classification of some of the event-based infrastructures presented in the literature.",
"title": ""
},
{
"docid": "4243f0bafe669ab862aaad2b184c6a0e",
"text": "Generating adversarial examples is an intriguing problem and an important way of understanding the working mechanism of deep neural networks. Most existing approaches generated perturbations in the image space, i.e., each pixel can be modified independently. However, in this paper we pay special attention to the subset of adversarial examples that are physically authentic – those corresponding to actual changes in 3D physical properties (like surface normals, illumination condition, etc.). These adversaries arguably pose a more serious concern, as they demonstrate the possibility of causing neural network failure by small perturbations of real-world 3D objects and scenes. In the contexts of object classification and visual question answering, we augment state-of-the-art deep neural networks that receive 2D input images with a rendering module (either differentiable or not) in front, so that a 3D scene (in the physical space) is rendered into a 2D image (in the image space), and then mapped to a prediction (in the output space). The adversarial perturbations can now go beyond the image space, and have clear meanings in the 3D physical world. Through extensive experiments, we found that a vast majority of image-space adversaries cannot be explained by adjusting parameters in the physical space, i.e., they are usually physically inauthentic. But it is still possible to successfully attack beyond the image space on the physical space (such that authenticity is enforced), though this is more difficult than image-space attacks, reflected in lower success rates and heavier perturbations required.",
"title": ""
},
{
"docid": "6737955fd1876a40fc0e662a4cac0711",
"text": "Cloud computing is a novel perspective for large scale distributed computing and parallel processing. It provides computing as a utility service on a pay per use basis. The performance and efficiency of cloud computing services always depends upon the performance of the user tasks submitted to the cloud system. Scheduling of the user tasks plays significant role in improving performance of the cloud services. Task scheduling is one of the main types of scheduling performed. This paper presents a detailed study of various task scheduling methods existing for the cloud environment. A brief analysis of various scheduling parameters considered in these methods is also discussed in this paper.",
"title": ""
},
{
"docid": "289942ca889ccea58d5b01dab5c82719",
"text": "Concepts of basal ganglia organization have changed markedly over the past decade, due to significant advances in our understanding of the anatomy, physiology and pharmacology of these structures. Independent evidence from each of these fields has reinforced a growing perception that the functional architecture of the basal ganglia is essentially parallel in nature, regardless of the perspective from which these structures are viewed. This represents a significant departure from earlier concepts of basal ganglia organization, which generally emphasized the serial aspects of their connectivity. Current evidence suggests that the basal ganglia are organized into several structurally and functionally distinct 'circuits' that link cortex, basal ganglia and thalamus, with each circuit focused on a different portion of the frontal lobe. In this review, Garrett Alexander and Michael Crutcher, using the basal ganglia 'motor' circuit as the principal example, discuss recent evidence indicating that a parallel functional architecture may also be characteristic of the organization within each individual circuit.",
"title": ""
},
{
"docid": "45009303764570cbfa3532a9d98f5393",
"text": "The Wasserstein distance and its variations, e.g., the sliced-Wasserstein (SW) distance, have recently drawn attention from the machine learning community. The SW distance, specifically, was shown to have similar properties to the Wasserstein distance, while being much simpler to compute, and is therefore used in various applications including generative modeling and general supervised/unsupervised learning. In this paper, we first clarify the mathematical connection between the SW distance and the Radon transform. We then utilize the generalized Radon transform to define a new family of distances for probability measures, which we call generalized slicedWasserstein (GSW) distances. We also show that, similar to the SW distance, the GSW distance can be extended to a maximum GSW (max-GSW) distance. We then provide the conditions under which GSW and max-GSW distances are indeed distances. Finally, we compare the numerical performance of the proposed distances on several generative modeling tasks, including SW flows and SW auto-encoders.",
"title": ""
},
{
"docid": "0e7da1ef24306eea2e8f1193301458fe",
"text": "We consider the problem of object figure-ground segmentation when the object categories are not available during training (i.e. zero-shot). During training, we learn standard segmentation models for a handful of object categories (called “source objects”) using existing semantic segmentation datasets. During testing, we are given images of objects (called “target objects”) that are unseen during training. Our goal is to segment the target objects from the background. Our method learns to transfer the knowledge from the source objects to the target objects. Our experimental results demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "e830098f9c045d376177e6d2644d4a06",
"text": "OBJECTIVE\nTo determine whether acetyl-L-carnitine (ALC), a metabolite necessary for energy metabolism and essential fatty acid anabolism, might help attention-deficit/hyperactivity disorder (ADHD). Trials in Down's syndrome, migraine, and Alzheimer's disease showed benefit for attention. A preliminary trial in ADHD using L-carnitine reported significant benefit.\n\n\nMETHOD\nA multi-site 16-week pilot study randomized 112 children (83 boys, 29 girls) age 5-12 with systematically diagnosed ADHD to placebo or ALC in weight-based doses from 500 to 1500 mg b.i.d. The 2001 revisions of the Conners' parent and teacher scales (including DSM-IV ADHD symptoms) were administered at baseline, 8, 12, and 16 weeks. Analyses were ANOVA of change from baseline to 16 weeks with treatment, center, and treatment-by-center interaction as independent variables.\n\n\nRESULTS\nThe primary intent-to-treat analysis, of 9 DSM-IV teacher-rated inattentive symptoms, was not significant. However, secondary analyses were interesting. There was significant (p = 0.02) moderation by subtype: superiority of ALC over placebo in the inattentive type, with an opposite tendency in combined type. There was also a geographic effect (p = 0.047). Side effects were negligible; electrocardiograms, lab work, and physical exam unremarkable.\n\n\nCONCLUSION\nALC appears safe, but with no effect on the overall ADHD population (especially combined type). It deserves further exploration for possible benefit specifically in the inattentive type.",
"title": ""
},
{
"docid": "cae9e77074db114690a6ed1330d9b14c",
"text": "BACKGROUND\nOn December 8th, 2015, World Health Organization published a priority list of eight pathogens expected to cause severe outbreaks in the near future. To better understand global research trends and characteristics of publications on these emerging pathogens, we carried out this bibliometric study hoping to contribute to global awareness and preparedness toward this topic.\n\n\nMETHOD\nScopus database was searched for the following pathogens/infectious diseases: Ebola, Marburg, Lassa, Rift valley, Crimean-Congo, Nipah, Middle Eastern Respiratory Syndrome (MERS), and Severe Respiratory Acute Syndrome (SARS). Retrieved articles were analyzed to obtain standard bibliometric indicators.\n\n\nRESULTS\nA total of 8619 journal articles were retrieved. Authors from 154 different countries contributed to publishing these articles. Two peaks of publications, an early one for SARS and a late one for Ebola, were observed. Retrieved articles received a total of 221,606 citations with a mean ± standard deviation of 25.7 ± 65.4 citations per article and an h-index of 173. International collaboration was as high as 86.9%. The Centers for Disease Control and Prevention had the highest share (344; 5.0%) followed by the University of Hong Kong with 305 (4.5%). The top leading journal was Journal of Virology with 572 (6.6%) articles while Feldmann, Heinz R. was the most productive researcher with 197 (2.3%) articles. China ranked first on SARS, Turkey ranked first on Crimean-Congo fever, while the United States of America ranked first on the remaining six diseases. Of retrieved articles, 472 (5.5%) were on vaccine - related research with Ebola vaccine being most studied.\n\n\nCONCLUSION\nNumber of publications on studied pathogens showed sudden dramatic rise in the past two decades representing severe global outbreaks. Contribution of a large number of different countries and the relatively high h-index are indicative of how international collaboration can create common health agenda among distant different countries.",
"title": ""
},
{
"docid": "180a840a22191da6e9a99af3d41ab288",
"text": "The hippocampal CA3 region is classically viewed as a homogeneous autoassociative network critical for associative memory and pattern completion. However, recent evidence has demonstrated a striking heterogeneity along the transverse, or proximodistal, axis of CA3 in spatial encoding and memory. Here we report the presence of striking proximodistal gradients in intrinsic membrane properties and synaptic connectivity for dorsal CA3. A decreasing gradient of mossy fiber synaptic strength along the proximodistal axis is mirrored by an increasing gradient of direct synaptic excitation from entorhinal cortex. Furthermore, we uncovered a nonuniform pattern of reactivation of fear memory traces, with the most robust reactivation during memory retrieval occurring in mid-CA3 (CA3b), the region showing the strongest net recurrent excitation. Our results suggest that heterogeneity in both intrinsic properties and synaptic connectivity may contribute to the distinct spatial encoding and behavioral role of CA3 subregions along the proximodistal axis.",
"title": ""
},
{
"docid": "6a82dfa1d79016388c38ccba77c56ae5",
"text": "Scripts define knowledge about how everyday scenarios (such as going to a restaurant) are expected to unfold. One of the challenges to learning scripts is the hierarchical nature of the knowledge. For example, a suspect arrested might plead innocent or guilty, and a very different track of events is then expected to happen. To capture this type of information, we propose an autoencoder model with a latent space defined by a hierarchy of categorical variables. We utilize a recently proposed vector quantization based approach, which allows continuous embeddings to be associated with each latent variable value. This permits the decoder to softly decide what portions of the latent hierarchy to condition on by attending over the value embeddings for a given setting. Our model effectively encodes and generates scripts, outperforming a recent language modeling-based method on several standard tasks, and allowing the autoencoder model to achieve substantially lower perplexity scores compared to the previous language modelingbased method.",
"title": ""
},
{
"docid": "bb799a3aac27f4ac764649e1f58ee9fb",
"text": "White grubs (larvae of Coleoptera: Scarabaeidae) are abundant in below-ground systems and can cause considerable damage to a wide variety of crops by feeding on roots. White grub populations may be controlled by natural enemies, but the predator guild of the European species is barely known. Trophic interactions within soil food webs are difficult to study with conventional methods. Therefore, a polymerase chain reaction (PCR)-based approach was developed to investigate, for the first time, a soil insect predator-prey system. Can, however, highly sensitive detection methods identify carrion prey in predators, as has been shown for fresh prey? Fresh Melolontha melolontha (L.) larvae and 1- to 9-day-old carcasses were presented to Poecilus versicolor Sturm larvae. Mitochondrial cytochrome oxidase subunit I fragments of the prey, 175, 327 and 387 bp long, were detectable in 50% of the predators 32 h after feeding. Detectability decreased to 18% when a 585 bp sequence was amplified. Meal size and digestion capacity of individual predators had no influence on prey detection. Although prey consumption was negatively correlated with cadaver age, carrion prey could be detected by PCR as efficiently as fresh prey irrespective of carrion age. This is the first proof that PCR-based techniques are highly efficient and sensitive, both in fresh and carrion prey detection. Thus, if active predation has to be distinguished from scavenging, then additional approaches are needed to interpret the picture of prey choice derived by highly sensitive detection methods.",
"title": ""
},
{
"docid": "97adb3a003347f579706cd01a762bdc9",
"text": "The Universal Serial Bus (USB) is an extremely popular interface standard for computer peripheral connections and is widely used in consumer Mass Storage Devices (MSDs). While current consumer USB MSDs provide relatively high transmission speed and are convenient to carry, the use of USB MSDs has been prohibited in many commercial and everyday environments primarily due to security concerns. Security protocols have been previously proposed and a recent approach for the USB MSDs is to utilize multi-factor authentication. This paper proposes significant enhancements to the three-factor control protocol that now makes it secure under many types of attacks including the password guessing attack, the denial-of-service attack, and the replay attack. The proposed solution is presented with a rigorous security analysis and practical computational cost analysis to demonstrate the usefulness of this new security protocol for consumer USB MSDs.",
"title": ""
}
] | scidocsrr |
0c56ff755afba097645800990f749c55 | Design of a Wideband Planar Printed Quasi-Yagi Antenna Using Stepped Connection Structure | [
{
"docid": "6661cc34d65bae4b09d7c236d0f5400a",
"text": "In this letter, we present a novel coplanar waveguide fed quasi-Yagi antenna with broad bandwidth. The uniqueness of this design is due to its simple feed selection and despite this, its achievable bandwidth. The 10 dB return loss bandwidth of the antenna is 44% covering X-band. The antenna is realized on a high dielectric constant substrate and is compatible with microstrip circuitry and active devices. The gain of the antenna is 7.4 dBi, the front-to-back ratio is 15 dB and the nominal efficiency of the radiator is 95%.",
"title": ""
},
{
"docid": "5f40ac6afd39e3d2fcbc5341bc3af7b4",
"text": "We present a modified quasi-Yagi antenna for use in WLAN access points. The antenna uses a new microstrip-to-coplanar strip (CPS) transition, consisting of a tapered microstrip input, T-junction, conventional 50-ohm microstrip line, and three artificial transmission line (ATL) sections. The design concept, mode conversion scheme, and simulated and experimental S-parameters of the transition are discussed first. It features a compact size, and a 3dB-insertion loss bandwidth of 78.6%. Based on the transition, a modified quasi-Yagi antenna is demonstrated. In addition to the new transition, the antenna consists of a CPS feed line, a meandered dipole, and a parasitic element. The meandered dipole can substantially increase to the front-to-back ratio of the antenna without sacrificing the operating bandwidth. The parasitic element is placed in close proximity to the driven element to improve impedance bandwidth and radiation characteristics. The antenna exhibits excellent end-fire radiation with a front-to-back ratio of greater than 15 dB. It features a moderate gain around 4 dBi, and a fractional bandwidth of 38.3%. We carefully investigate the concept, methodology, and experimental results of the proposed antenna.",
"title": ""
}
] | [
{
"docid": "d84c8302578391c909b2ac261c93c1fb",
"text": "This short communication describes a case of diprosopiasis in Trachemys scripta scripta imported from Florida (USA) and farmed for about 4 months by a private owner in Palermo, Sicily, Italy. The water turtle showed the morphological and radiological features characterizing such deformity. This communication aims to advance the knowledge of the reptile's congenital anomalies and suggests the need for more detailed investigations to better understand its pathogenesis.",
"title": ""
},
{
"docid": "b04ba2e942121b7a32451f0b0f690553",
"text": "Due to the growing number of vehicles on the roads worldwide, road traffic accidents are currently recognized as a major public safety problem. In this context, connected vehicles are considered as the key enabling technology to improve road safety and to foster the emergence of next generation cooperative intelligent transport systems (ITS). Through the use of wireless communication technologies, the deployment of ITS will enable vehicles to autonomously communicate with other nearby vehicles and roadside infrastructures and will open the door for a wide range of novel road safety and driver assistive applications. However, connecting wireless-enabled vehicles to external entities can make ITS applications vulnerable to various security threats, thus impacting the safety of drivers. This article reviews the current research challenges and opportunities related to the development of secure and safe ITS applications. It first explores the architecture and main characteristics of ITS systems and surveys the key enabling standards and projects. Then, various ITS security threats are analyzed and classified, along with their corresponding cryptographic countermeasures. Finally, a detailed ITS safety application case study is analyzed and evaluated in light of the European ETSI TC ITS standard. An experimental test-bed is presented, and several elliptic curve digital signature algorithms (ECDSA) are benchmarked for signing and verifying ITS safety messages. To conclude, lessons learned, open research challenges and opportunities are discussed. Electronics 2015, 4 381",
"title": ""
},
{
"docid": "19bb054fb4c6398df99a84a382354d59",
"text": "Neural networks are vulnerable to adversarial examples and researchers have proposed many heuristic attack and defense mechanisms. We take the principled view of distributionally robust optimization, which guarantees performance under adversarial input perturbations. By considering a Lagrangian penalty formulation of perturbation of the underlying data distribution in a Wasserstein ball, we provide a training procedure that augments model parameter updates with worst-case perturbations of training data. For smooth losses, our procedure provably achieves moderate levels of robustness with little computational or statistical cost relative to empirical risk minimization. Furthermore, our statistical guarantees allow us to efficiently certify robustness for the population loss. We match or outperform heuristic approaches on supervised and reinforcement learning tasks.",
"title": ""
},
{
"docid": "48c28572e5eafda1598a422fa1256569",
"text": "Future power networks will be characterized by safe and reliable functionality against physical and cyber attacks. This paper proposes a unified framework and advanced monitoring procedures to detect and identify network components malfunction or measurements corruption caused by an omniscient adversary. We model a power system under cyber-physical attack as a linear time-invariant descriptor system with unknown inputs. Our attack model generalizes the prototypical stealth, (dynamic) false-data injection and replay attacks. We characterize the fundamental limitations of both static and dynamic procedures for attack detection and identification. Additionally, we design provably-correct (dynamic) detection and identification procedures based on tools from geometric control theory. Finally, we illustrate the effectiveness of our method through a comparison with existing (static) detection algorithms, and through a numerical study.",
"title": ""
},
{
"docid": "403d54a5672037cb8adb503405845bbd",
"text": "This paper introduces adaptor grammars, a class of probabil istic models of language that generalize probabilistic context-free grammar s (PCFGs). Adaptor grammars augment the probabilistic rules of PCFGs with “ada ptors” that can induce dependencies among successive uses. With a particular choice of adaptor, based on the Pitman-Yor process, nonparametric Bayesian mo dels f language using Dirichlet processes and hierarchical Dirichlet proc esses can be written as simple grammars. We present a general-purpose inference al gorithm for adaptor grammars, making it easy to define and use such models, and ill ustrate how several existing nonparametric Bayesian models can be expressed wi thin this framework.",
"title": ""
},
{
"docid": "f5d8c506c9f25bff429cea1ed4c84089",
"text": "Therabot is a robotic therapy support system designed to supplement a therapist and to provide support to patients diagnosed with conditions associated with trauma and adverse events. The system takes on the form factor of a floppy-eared dog which fits in a person»s lap and is designed for patients to provide support and encouragement for home therapy exercises and in counseling.",
"title": ""
},
{
"docid": "4249c95fcd869434312524f05c013c55",
"text": "The demands on visual recognition systems do not end with the complexity offered by current large-scale image datasets, such as ImageNet. In consequence, we need curious and continuously learning algorithms that actively acquire knowledge about semantic concepts which are present in available unlabeled data. As a step towards this goal, we show how to perform continuous active learning and exploration, where an algorithm actively selects relevant batches of unlabeled examples for annotation. These examples could either belong to already known or to yet undiscovered classes. Our algorithm is based on a new generalization of the Expected Model Output Change principle for deep architectures and is especially tailored to deep neural networks. Furthermore, we show easy-to-implement approximations that yield efficient techniques for active selection. Empirical experiments show that our method outperforms currently used heuristics.",
"title": ""
},
{
"docid": "e95fa624bb3fd7ea45650213088a43b0",
"text": "In recent years, much research has been conducted on image super-resolution (SR). To the best of our knowledge, however, few SR methods were concerned with compressed images. The SR of compressed images is a challenging task due to the complicated compression artifacts, while many images suffer from them in practice. The intuitive solution for this difficult task is to decouple it into two sequential but independent subproblems, i.e., compression artifacts reduction (CAR) and SR. Nevertheless, some useful details may be removed in CAR stage, which is contrary to the goal of SR and makes the SR stage more challenging. In this paper, an end-to-end trainable deep convolutional neural network is designed to perform SR on compressed images (CISRDCNN), which reduces compression artifacts and improves image resolution jointly. Experiments on compressed images produced by JPEG (we take the JPEG as an example in this paper) demonstrate that the proposed CISRDCNN yields state-of-the-art SR performance on commonly used test images and imagesets. The results of CISRDCNN on real low quality web images are also very impressive, with obvious quality enhancement. Further, we explore the application of the proposed SR method in low bit-rate image coding, leading to better rate-distortion performance than JPEG.",
"title": ""
},
{
"docid": "33817271f39357c4aef254ac96aab480",
"text": "Evolutionary computation methods have been successfully applied to neural networks since two decades ago, while those methods cannot scale well to the modern deep neural networks due to the complicated architectures and large quantities of connection weights. In this paper, we propose a new method using genetic algorithms for evolving the architectures and connection weight initialization values of a deep convolutional neural network to address image classification problems. In the proposed algorithm, an efficient variable-length gene encoding strategy is designed to represent the different building blocks and the unpredictable optimal depth in convolutional neural networks. In addition, a new representation scheme is developed for effectively initializing connection weights of deep convolutional neural networks, which is expected to avoid networks getting stuck into local minima which is typically a major issue in the backward gradient-based optimization. Furthermore, a novel fitness evaluation method is proposed to speed up the heuristic search with substantially less computational resource. The proposed algorithm is examined and compared with 22 existing algorithms on nine widely used image classification tasks, including the stateof-the-art methods. The experimental results demonstrate the remarkable superiority of the proposed algorithm over the stateof-the-art algorithms in terms of classification error rate and the number of parameters (weights).",
"title": ""
},
{
"docid": "7db989219c3c15aa90a86df84b134473",
"text": "INTRODUCTION\nResearch indicated that: (i) vaginal orgasm (induced by penile-vaginal intercourse [PVI] without concurrent clitoral masturbation) consistency (vaginal orgasm consistency [VOC]; percentage of PVI occasions resulting in vaginal orgasm) is associated with mental attention to vaginal sensations during PVI, preference for a longer penis, and indices of psychological and physiological functioning, and (ii) clitoral, distal vaginal, and deep vaginal/cervical stimulation project via different peripheral nerves to different brain regions.\n\n\nAIMS\nThe aim of this study is to examine the association of VOC with: (i) sexual arousability perceived from deep vaginal stimulation (compared with middle and shallow vaginal stimulation and clitoral stimulation), and (ii) whether vaginal stimulation was present during the woman's first masturbation.\n\n\nMETHODS\nA sample of 75 Czech women (aged 18-36), provided details of recent VOC, site of genital stimulation during first masturbation, and their recent sexual arousability from the four genital sites.\n\n\nMAIN OUTCOME MEASURES\nThe association of VOC with: (i) sexual arousability perceived from the four genital sites and (ii) involvement of vaginal stimulation in first-ever masturbation.\n\n\nRESULTS\nVOC was associated with greater sexual arousability from deep vaginal stimulation but not with sexual arousability from other genital sites. VOC was also associated with women's first masturbation incorporating (or being exclusively) vaginal stimulation.\n\n\nCONCLUSIONS\nThe findings suggest (i) stimulating the vagina during early life masturbation might indicate individual readiness for developing greater vaginal responsiveness, leading to adult greater VOC, and (ii) current sensitivity of deep vaginal and cervical regions is associated with VOC, which might be due to some combination of different neurophysiological projections of the deep regions and their greater responsiveness to penile stimulation.",
"title": ""
},
{
"docid": "28a4fd94ba02c70d6781ae38bf35ca5a",
"text": "Zero-shot learning (ZSL) highly depends on a good semantic embedding to connect the seen and unseen classes. Recently, distributed word embeddings (DWE) pre-trained from large text corpus have become a popular choice to draw such a connection. Compared with human defined attributes, DWEs are more scalable and easier to obtain. However, they are designed to reflect semantic similarity rather than visual similarity and thus using them in ZSL often leads to inferior performance. To overcome this visual-semantic discrepancy, this work proposes an objective function to re-align the distributed word embeddings with visual information by learning a neural network to map it into a new representation called visually aligned word embedding (VAWE). Thus the neighbourhood structure of VAWEs becomes similar to that in the visual domain. Note that in this work we do not design a ZSL method that projects the visual features and semantic embeddings onto a shared space but just impose a requirement on the structure of the mapped word embeddings. This strategy allows the learned VAWE to generalize to various ZSL methods and visual features. As evaluated via four state-of-the-art ZSL methods on four benchmark datasets, the VAWE exhibit consistent performance improvement.",
"title": ""
},
{
"docid": "17c12cc27cd66d0289fe3baa9ab4124d",
"text": "In this paper we review classification algorithms used to design brain-computer interface (BCI) systems based on electroencephalography (EEG). We briefly present the commonly employed algorithms and describe their critical properties. Based on the literature, we compare them in terms of performance and provide guidelines to choose the suitable classification algorithm(s) for a specific BCI.",
"title": ""
},
{
"docid": "59209ea750988390be9b0d0207ec06bd",
"text": "In diesem Kapitel wird Kognitive Modellierung als ein interdisziplinäres Forschungsgebiet vorgestellt, das sich mit der Entwicklung von computerimplementierbaren Modellen beschäftigt, in denen wesentliche Eigenschaften des Wissens und der Informationsverarbeitung beim Menschen abgebildet sind. Nach einem allgemeinen Überblick über Zielsetzungen, Methoden und Vorgehensweisen, die sich auf den Gebieten der kognitiven Psychologie und der Künstlichen Intelligenz entwickelt haben, sowie der Darstellung eines Theorierahmens werden vier Modelle detaillierter besprochen: In einem I>crnmodcll, das in einem Intelligenten Tutoriellen System Anwendung findet und in einem Performanz-Modell der MenschComputer-Interaktion wird menschliches Handlungswissen beschrieben. Die beiden anderen Modelle zum Textverstehen und zur flexiblen Gedächtnisorganisation beziehen sich demgegenüber vor allem auf den Aufbau und Abruf deklarativen Wissens. Abschließend werden die vorgestellten Modelle in die historische Entwicklung eingeordnet. Möglichkeiten und Grenzen der Kognitiven Modellierung werden hinsichtlich interessant erscheinender Weiterentwicklungen diskutiert. 1. Einleitung und Überblick Das Gebiet der Künstlichen Intelligenz wird meist unter Bezugnahme auf ursprünglich nur beim Menschen beobachtetes Verhalten definiert. So wird die Künstliche Intelligenz oder KI als die Erforschung von jenen Verhaltensabläufen verstanden, deren Planung und Durchführung Intelligenz erfordert. Der Begriff Intelligenz wird dabei unter Bezugnahme auf den Menschen vage abgegrenzt |Siekmann_83,Winston_84]. Da auch Teilbereiche der Psychologie, vor allem die Kognitive Psychologie, Intelligenz und Denken untersuchen, könnte man vermuten, daß die KI-Forschung als die jüngere Wissenschaft direkt auf älteren psychologischen Erkenntnissen aufbauen würde. Obwohl K I und kognitive Psychologie einen ähnlichen Gegenstandsbereich erforschen, gibt es jedoch auch vielschichtige Unterschiede zwischen beiden Disziplinen. Daraus läßt sich möglicherweise erklären, daß die beiden Fächer bislang nicht in dem Maß interagiert haben, wie dies wünschenswert wäre. 1.1 Unterschiede zwischen KI und Kognitiver Psychologie Auch wenn keine klare Grenze zwischen den beiden Gebieten gezogen werden kann, so müssen wir doch feststellen, daß K I nicht gleich Kognitiver Psychologie ist. Wichtige Unterschiede bestehen in den primären Forschungszielen und Methoden, sowie in der Interpretation von Computermodellen (computational models). Zielsetzungen und Methoden Während die K I eine Modellierung von Kompetenzen anstrebt, erforscht die Psychologie die Performanz des Menschen. • Die K I sucht nach Verfahren, die zu einem intelligenten Verhalten eines Computers fuhren. Beispielsweise sollte ein Computer natürliche Sprache verstehen, neue Begriffe lernen können oder Expertenverhalten zeigen oder unterstützen. Die K I versucht also, intelligente Systeme zu entwickeln und deckt dabei mögliche Prinzipien von Intelligenz auf, indem sie Datenstrukturen und Algorithmen spezifiziert, die intelligentes Verhalten erwarten lassen. Entscheidend ist dabei, daß eine intelligente Leistung im Sinne eines Turing-Tests erbracht wird: Eine Implementierung des Algorithmus soll für eine Menge spezifizierter Eingaben (z. B . gesprochene Sprache) innerhalb angemessener Zeit die vergleichbare Verarbeitungsleistung erbringen wie der Mensch. Der beobachtete Systemoutput von Mensch und Computer wäre also oberflächlich betrachtet nicht voneinander unterscheidbar [Turing_63]. 
Ob die dabei im Computer verwendeten Strukturen, Prozesse und Heuristiken denen beim Menschen ähneln, spielt in der K I keine primäre Rolle. • Die Kognitive Psychologie hingegen untersucht eher die internen kognitiven Verarbeitungsprozesse des Menschen. Bei einer psychologischen Theorie sollte also auch das im Modell verwendete Verfahren den Heuristiken entsprechen, die der Mensch verwendet. Beispielsweise wird ein Schachprogramm nicht dadurch zu einem psychologisch adäquaten Modell, daß es die Spielstärke menschlicher Meisterspieler erreicht. Vielmehr sollten bei einem psychologischen Modell auch die Verarbeitungsprozesse von Mensch und Programm übereinstimmen (vgl. dazu [deGroot_66]).Für psychologische Forschungen sind daher empirische und gezielte experimentelle Untersuchungen der menschlichen Kognition von großer Bedeutung. In der K I steht die Entwicklung und Implementierung von Modellen im Vordergrund. Die kognitive Psychologie dagegen betont die Wichtigkeit der empirischen Evaluation von Modellen zur Absicherung von präzisen, allgemeingültigen Aussagen. Wegen dieser verschiedenen Schwerpunkt Setzung und den daraus resultierenden unterschiedlichen Forschungsmethoden ist es für die Forscher der einen Disziplin oft schwierig, den wissenschaftlichen Fortschritt der jeweils anderen Disziplin zu nutzen [Miller_78]. Interpretation von Computermodellen Die K I ist aus der Informatik hervorgegangen. Wie bei der Informatik bestehen auch bei der K I wissenschaftliche Erkenntnisse darin, daß mit ingenieurwissenschaftlichen Verfahren neue Systeme wie Computerhardund -Software konzipiert und erzeugt werden. Die genaue Beschreibung eines so geschaffenen Systems ist für den Informatiker im Prinzip unproblematisch, da er das System selbst entwickelt hat und daher über dessen Bestandteile und Funktionsweisen bestens informiert ist. Darin liegt ein Unterschied zu den empirischen Wissenschaften wie der Physik oder Psychologie. Der Erfahrungswissenschaftler muß Objektbereiche untersuchen, deren Gesetzmäßigkeiten er nie mit letzter Sicherheit feststellen kann. Er m u ß sich daher Theorien oder Modelle über den Untersuchungsgegenstand bilden, die dann empirisch überprüft werden können. Jedoch läßt sich durch eine noch so große Anzahl von Experimenten niemals die Korrektheit eines Modells beweisen [Popper_66]. E in einfaches Beispiel kann diesen Unterschied verdeutlichen. • E in Hardwarespezialist, der einen Personal Computer gebaut hat, weiß, daß die Aussage \"Der Computer ist mit 640 K B Hauptspeicher bestückt\" richtig ist, weil er ihn eben genau so bestückt hat. Dies ist also eine feststehende Tatsache, die keiner weiteren Überprüfung bedarf. • Die Behauptung eines Psychologen, daß der menschliche Kurzzeitoder Arbeitsspeicher eine Kapazität von etwa 7 Einheiten oder Chunks habe, hat jedoch einen ganz anderen Stellenwert. Damit wird keinesfalls eine faktische Behauptung über die Größe von Arealen im menschlichen Gehirn aufgestellt. \"Arbeitsspeicher\" wird hier als theoretischer Term eines Modells verwendet. Mit der Aussage über die Kapazität des Arbeitsspeichers ist gemeint, daß erfahrungsgemäß Modelle, die eine solche Kapazitätsbescfiränkung annehmen, menschliches Verhalten gut beschreiben können. Dadurch wird jedoch nicht ausgeschlossen, daß ein weiteres Experiment Unzulänglichkeiten oder die Inkorrektheit des Modells nachweist. 
In den Erfahrungswissenscharten werden theoretische Begriffe wie etwa Arbeitsspeicher innerhalb von Computermodellen zur abstrahierten und integrativen Beschreibung von empirischen Erkenntnissen verwendet. Dadurch können beim Menschen zu beobachtende Verhaltensweisen vorhergesagt werden. Aus der Sichtweise der Informatik bezeichnen genau die gleichen Tcrme jedoch tatsächliche Komponenten eines Geräts oder Programms. Diese unterschiedlichen Sichtweisen der gleichen Modelle verbieten einen unkritischen und oberflächlichen Informationstransfer zwischen K I und Kognitiver Psychologie. Aus der Integration der Zielsetzungen und Sichtweisen ergeben sich jedoch auch gerade vielversprechende Erkenntnismöglichkeiten über Intelligenz. Da theoretische wie auch empirische Untersuchungen zum Verständnis der Intelligenz beitragen, können sich die Methoden und Erkenntnisse von beiden Disziplinen (ähnlich wie Mathematik und Physik im Bereich der theoretischen Physik) ergänzen und befruchten. 1.2 Synthese von KI und Kognitiver Psychologie Im Rahmen der Kognitionswissenschaften(cognitive science) tragen viele Disziplinen (z.B. K I , Psychologie, Linguistik, Anthropologie ...) Erkenntnisse über informationsverarbeitende Systeme bei. Die Kognitive Modellierung als ein Teilgebiet von sowohl K I als auch Kognitiver Psychologie befaßt sich mit der Entwicklung von computerimplementierbaren Modellen, in denen wesentliche Eigenschaften des Wissens und der Informationsverarbeitung beim Menschen abgebildet sind. Durch Kognitive Modellierung wird also eine Synthese von K I und psychologischer Forschung angestrebt. E in Computermodell wird zu einem kognitiven Modell, indem Entitätcn des Modells psychologischen Beobachtungen und Erkenntnissen zugeordnet werden. Da ein solches Modell auch den Anspruch erhebt, menschliches Verhalten vorherzusagen, können Kognitive Modelle aufgrund empirischer Untersuchungen weiterentwickelt werden. Die Frage, ob ein KI-Modell als ein kognitives Modell anzusehen ist, kann nicht einfach bejaht oder verneint werden, sondern wird vielmehr durch die Angabe einer Zuordnung von Aspekten der menschlichen Informationsverarbeitung zu Eigenschaften des Computermodells beantwortet.",
"title": ""
},
{
"docid": "2a36a2ab5b0e01da90859179a60cef9a",
"text": "We report 3 cases of renal toxicity associated with use of the antiviral agent tenofovir. Renal failure, proximal tubular dysfunction, and nephrogenic diabetes insipidus were observed, and, in 2 cases, renal biopsy revealed severe tubular necrosis with characteristic nuclear changes. Patients receiving tenofovir must be monitored closely for early signs of tubulopathy (glycosuria, acidosis, mild increase in the plasma creatinine level, and proteinuria).",
"title": ""
},
{
"docid": "598ffff550aa4e3a9ad1d2f5251fc03a",
"text": "The now taken-for-granted notion that data lead to information, which leads to knowledge, which in turn leads to wisdom was first specified in detail by R. L. Ackoff in 1988. The Data-Information-KnowledgeWisdom hierarchy is based on filtration, reduction, and transformation. Besides being causal and hierarchical, the scheme is pyramidal, in that data are plentiful while wisdom is almost nonexistent. Ackoff’s formula linking these terms together this way permits us to ask what the opposite of knowledge is and whether analogous principles of hierarchy, process, and pyramiding apply to it. The inversion of the DataInformation-Knowledge-Wisdom hierarchy produces a series of opposing terms (including misinformation, error, ignorance, and stupidity) but not exactly a chain or a pyramid. Examining the connections between these phenomena contributes to our understanding of the contours and limits of knowledge. This presentation will revisit the Data-Information-Knowledge-Wisdom hierarchy linking these concepts together as stages of a single developmental process, with the aim of building a taxonomy for a postulated opposite of knowledge, which I will call ‘nonknowledge’. Concepts of data, information, knowledge, and wisdom are the building blocks of library and information science. Discussions and definitions of these terms pervade the literature from introductory textbooks to theoretical research articles (see Zins, 2007). Expressions linking some of these concepts predate the development of information science as a field of study (Sharma 2008). But the first to put all the terms into a single formula was Russell Lincoln Ackoff, in 1989. Ackoff posited a hierarchy at the top of which lay wisdom, and below that understanding, knowledge, information, and data, in that order. Furthermore, he wrote that “each of these includes the categories that fall below it,” and estimated that “on average about forty percent of the human mind consists of data, thirty percent information, twenty percent knowledge, ten percent understanding, and virtually no wisdom” (Ackoff, 1989, 3). This phraseology allows us to view his model as a pyramid, and indeed it has been likened to one ever since (Rowley, 2007; see figure 1). (‘Understanding’ is omitted, since subsequent formulations have not picked up on it.) Ackoff was a management consultant and former professor of management science at the Wharton School specializing in operations research and organizational theory. His article formulating what is now commonly called the Data-InformationKnowledge-Wisdom hierarchy (or DIKW for short) was first given in 1988 as a presidential address to the International Society for General Systems Research. This background may help explain his approach. Data in his terms are the product of observations, and are of no value until they are processed into a usable form to become information. Information is contained in answers to questions. Knowledge, the next layer, further refines information by making “possible the transformation of information into instructions. It makes control of a system possible” (Ackoff, 1989, 4), and that enables one to make it work efficiently. A managerial rather than scholarly perspective runs through Ackoff’s entire hierarchy, so that “understanding” for him",
"title": ""
},
{
"docid": "76c7b343d2f03b64146a0d6ed2d60668",
"text": "Three important stages within automated 3D object reconstruction via multi-image convergent photogrammetry are image pre-processing, interest point detection for feature-based matching and triangular mesh generation. This paper investigates approaches to each of these. The Wallis filter is initially examined as a candidate image pre-processor to enhance the performance of the FAST interest point operator. The FAST algorithm is then evaluated as a potential means to enhance the speed, robustness and accuracy of interest point detection for subsequent feature-based matching. Finally, the Poisson Surface Reconstruction algorithm for wireframe mesh generation of objects with potentially complex 3D surface geometry is evaluated. The outcomes of the investigation indicate that the Wallis filter, FAST interest operator and Poisson Surface Reconstruction algorithms present distinct benefits in the context of automated image-based object reconstruction. The reported investigation has advanced the development of an automatic procedure for high-accuracy point cloud generation in multi-image networks, where robust orientation and 3D point determination has enabled surface measurement and visualization to be implemented within a single software system.",
"title": ""
},
{
"docid": "b8d63090ea7d3302c71879ea4d11fde5",
"text": "We study the problem of how to distribute the training of large-scale deep learning models in the parallel computing environment. We propose a new distributed stochastic optimization method called Elastic Averaging SGD (EASGD). We analyze the convergence rate of the EASGD method in the synchronous scenario and compare its stability condition with the existing ADMM method in the round-robin scheme. An asynchronous and momentum variant of the EASGD method is applied to train deep convolutional neural networks for image classification on the CIFAR and ImageNet datasets. Our approach accelerates the training and furthermore achieves better test accuracy. It also requires a much smaller amount of communication than other common baseline approaches such as the DOWNPOUR method. We then investigate the limit in speedup of the initial and the asymptotic phase of the mini-batch SGD, the momentum SGD, and the EASGD methods. We find that the spread of the input data distribution has a big impact on their initial convergence rate and stability region. We also find a surprising connection between the momentum SGD and the EASGD method with a negative moving average rate. A non-convex case is also studied to understand when EASGD can get trapped by a saddle point. Finally, we scale up the EASGD method by using a tree structured network topology. We show empirically its advantage and challenge. We also establish a connection between the EASGD and the DOWNPOUR method with the classical Jacobi and the Gauss-Seidel method, thus unifying a class of distributed stochastic optimization methods.",
"title": ""
},
{
"docid": "7d33ba30fd30dce2cd4a3f5558a8c0ba",
"text": "It has long been conjectured that hypothesis spaces suitable for data that is compositional in nature, such as text or images, may be more efficiently represented with deep hierarchical architectures than with shallow ones. Despite the vast empirical evidence, formal arguments to date are limited and do not capture the kind of networks used in practice. Using tensor factorization, we derive a universal hypothesis space implemented by an arithmetic circuit over functions applied to local data structures (e.g. image patches). The resulting networks first pass the input through a representation layer, and then proceed with a sequence of layers comprising sum followed by product-pooling, where sum corresponds to the widely used convolution operator. The hierarchical structure of networks is born from factorizations of tensors based on the linear weights of the arithmetic circuits. We show that a shallow network corresponds to a rank-1 decomposition, whereas a deep network corresponds to a Hierarchical Tucker (HT) decomposition. Log-space computation for numerical stability transforms the networks into SimNets.",
"title": ""
},
{
"docid": "d89a5b253d188c28aa64facd3fef8b95",
"text": "This paper presents a method for decomposing long, complex consumer health questions. Our approach largely decomposes questions using their syntactic structure, recognizing independent questions embedded in clauses, as well as coordinations and exemplifying phrases. Additionally, we identify elements specific to disease-related consumer health questions, such as the focus disease and background information. To achieve this, our approach combines rank-and-filter machine learning methods with rule-based methods. Our results demonstrate significant improvements over the heuristic methods typically employed for question decomposition that rely only on the syntactic parse tree.",
"title": ""
},
{
"docid": "6d0aba91efbe627d8d98c7f49c34fe3d",
"text": "The R language, from the point of view of language design and implementation, is a unique combination of various programming language concepts. It has functional characteristics like lazy evaluation of arguments, but also allows expressions to have arbitrary side effects. Many runtime data structures, for example variable scopes and functions, are accessible and can be modified while a program executes. Several different object models allow for structured programming, but the object models can interact in surprising ways with each other and with the base operations of R. \n R works well in practice, but it is complex, and it is a challenge for language developers trying to improve on the current state-of-the-art, which is the reference implementation -- GNU R. The goal of this work is to demonstrate that, given the right approach and the right set of tools, it is possible to create an implementation of the R language that provides significantly better performance while keeping compatibility with the original implementation. \n In this paper we describe novel optimizations backed up by aggressive speculation techniques and implemented within FastR, an alternative R language implementation, utilizing Truffle -- a JVM-based language development framework developed at Oracle Labs. We also provide experimental evidence demonstrating effectiveness of these optimizations in comparison with GNU R, as well as Renjin and TERR implementations of the R language.",
"title": ""
}
] | scidocsrr |
8c6fd0aedbea7938ae0b08297b62d4a7 | Screening for Depression Patients in Family Medicine | [
{
"docid": "f84f279b6ef3b112a0411f5cba82e1b0",
"text": "PHILADELPHIA The difficulties inherent in obtaining consistent and adequate diagnoses for the purposes of research and therapy have been pointed out by a number of authors. Pasamanick12 in a recent article viewed the low interclinician agreement on diagnosis as an indictment of the present state of psychiatry and called for \"the development of objective, measurable and verifiable criteria of classification based not on personal or parochial considerations, buton behavioral and other objectively measurable manifestations.\" Attempts by other investigators to subject clinical observations and judgments to objective measurement have resulted in a wide variety of psychiatric rating ~ c a l e s . ~ J ~ These have been well summarized in a review article by Lorr l1 on \"Rating Scales and Check Lists for the E v a 1 u a t i o n of Psychopathology.\" In the area of psychological testing, a variety of paper-andpencil tests have been devised for the purpose of measuring specific personality traits; for example, the Depression-Elation Test, devised by Jasper in 1930. This report describes the development of an instrument designed to measure the behavioral manifestations of depression. In the planning of the research design of a project aimed at testing certain psychoanalytic formulations of depression, the necessity for establishing an appropriate system for identifying depression was recognized. Because of the reports on the low degree of interclinician agreement on diagnosis,13 we could not depend on the clinical diagnosis, but had to formulate a method of defining depression that would be reliable and valid. The available instruments were not considered adequate for our purposes. The Minnesota Multiphasic Personality Inventory, for example, was not specifically designed",
"title": ""
}
] | [
{
"docid": "5945132041b353b72af11e88b6ba5b97",
"text": "Oblivious RAM (ORAM) protocols are powerful techniques that hide a client’s data as well as access patterns from untrusted service providers. We present an oblivious cloud storage system, ObliviSync, that specifically targets one of the most widely-used personal cloud storage paradigms: synchronization and backup services, popular examples of which are Dropbox, iCloud Drive, and Google Drive. This setting provides a unique opportunity because the above privacy properties can be achieved with a simpler form of ORAM called write-only ORAM, which allows for dramatically increased efficiency compared to related work. Our solution is asymptotically optimal and practically efficient, with a small constant overhead of approximately 4x compared with non-private file storage, depending only on the total data size and parameters chosen according to the usage rate, and not on the number or size of individual files. Our construction also offers protection against timing-channel attacks, which has not been previously considered in ORAM protocols. We built and evaluated a full implementation of ObliviSync that supports multiple simultaneous read-only clients and a single concurrent read/write client whose edits automatically and seamlessly propagate to the readers. We show that our system functions under high work loads, with realistic file size distributions, and with small additional latency (as compared to a baseline encrypted file system) when paired with Dropbox as the synchronization service.",
"title": ""
},
{
"docid": "38666c5299ee67e336dc65f23f528a56",
"text": "Different modalities of magnetic resonance imaging (MRI) can indicate tumor-induced tissue changes from different perspectives, thus benefit brain tumor segmentation when they are considered together. Meanwhile, it is always interesting to examine the diagnosis potential from single modality, considering the cost of acquiring multi-modality images. Clinically, T1-weighted MRI is the most commonly used MR imaging modality, although it may not be the best option for contouring brain tumor. In this paper, we investigate whether synthesizing FLAIR images from T1 could help improve brain tumor segmentation from the single modality of T1. This is achieved by designing a 3D conditional Generative Adversarial Network (cGAN) for FLAIR image synthesis and a local adaptive fusion method to better depict the details of the synthesized FLAIR images. The proposed method can effectively handle the segmentation task of brain tumors that vary in appearance, size and location across samples.",
"title": ""
},
{
"docid": "28b2bbcfb8960ff40f2fe456a5b00729",
"text": "This paper presents an adaptation of Lesk’s dictionary– based word sense disambiguation algorithm. Rather than using a standard dictionary as the source of glosses for our approach, the lexical database WordNet is employed. This provides a rich hierarchy of semantic relations that our algorithm can exploit. This method is evaluated using the English lexical sample data from the Senseval-2 word sense disambiguation exercise, and attains an overall accuracy of 32%. This represents a significant improvement over the 16% and 23% accuracy attained by variations of the Lesk algorithm used as benchmarks during the Senseval-2 comparative exercise among word sense disambiguation",
"title": ""
},
{
"docid": "07af60525d625fd50e75f61dca4107db",
"text": "Spell checking is a well-known task in Natural Language Processing. Nowadays, spell checkers are an important component of a number of computer software such as web browsers, word processors and others. Spelling error detection and correction is the process that will check the spelling of words in a document, and in occurrence of any error, list out the correct spelling in the form of suggestions. This survey paper covers different spelling error detection and correction techniques in various languages. KeywordsNLP, Spell Checker, Spelling Errors, Error detection techniques, Error correction techniques.",
"title": ""
},
{
"docid": "d6c34d138692851efdbb807a89d0fcca",
"text": "Vaccine hesitancy reflects concerns about the decision to vaccinate oneself or one's children. There is a broad range of factors contributing to vaccine hesitancy, including the compulsory nature of vaccines, their coincidental temporal relationships to adverse health outcomes, unfamiliarity with vaccine-preventable diseases, and lack of trust in corporations and public health agencies. Although vaccination is a norm in the U.S. and the majority of parents vaccinate their children, many do so amid concerns. The proportion of parents claiming non-medical exemptions to school immunization requirements has been increasing over the past decade. Vaccine refusal has been associated with outbreaks of invasive Haemophilus influenzae type b disease, varicella, pneumococcal disease, measles, and pertussis, resulting in the unnecessary suffering of young children and waste of limited public health resources. Vaccine hesitancy is an extremely important issue that needs to be addressed because effective control of vaccine-preventable diseases generally requires indefinite maintenance of extremely high rates of timely vaccination. The multifactorial and complex causes of vaccine hesitancy require a broad range of approaches on the individual, provider, health system, and national levels. These include standardized measurement tools to quantify and locate clustering of vaccine hesitancy and better understand issues of trust; rapid, independent, and transparent review of an enhanced and appropriately funded vaccine safety system; adequate reimbursement for vaccine risk communication in doctors' offices; and individually tailored messages for parents who have vaccine concerns, especially first-time pregnant women. The potential of vaccines to prevent illness and save lives has never been greater. Yet, that potential is directly dependent on parental acceptance of vaccines, which requires confidence in vaccines, healthcare providers who recommend and administer vaccines, and the systems to make sure vaccines are safe.",
"title": ""
},
{
"docid": "a1fcf0d2b9a619c0a70b210c70cf4bfd",
"text": "This paper demonstrates a reliable navigation of a mobile robot in outdoor environment. We fuse differential GPS and odometry data using the framework of extended Kalman filter to localize a mobile robot. And also, we propose an algorithm to detect curbs through the laser range finder. An important feature of road environment is the existence of curbs. The mobile robot builds the map of the curbs of roads and the map is used for tracking and localization. The navigation system for the mobile robot consists of a mobile robot and a control station. The mobile robot sends the image data from a camera to the control station. The control station receives and displays the image data and the teleoperator commands the mobile robot based on the image data. Since the image data does not contain enough data for reliable navigation, a hybrid strategy for reliable mobile robot in outdoor environment is suggested. When the mobile robot is faced with unexpected obstacles or the situation that, if it follows the command, it can happen to collide, it sends a warning message to the teleoperator and changes the mode from teleoperated to autonomous to avoid the obstacles by itself. After avoiding the obstacles or the collision situation, the mode of the mobile robot is returned to teleoperated mode. We have been able to confirm that the appropriate change of navigation mode can help the teleoperator perform reliable navigation in outdoor environment through experiments in the road.",
"title": ""
},
{
"docid": "0ce06f95b1dafcac6dad4413c8b81970",
"text": "User acceptance of artificial intelligence agents might depend on their ability to explain their reasoning, which requires adding an interpretability layer that facilitates users to understand their behavior. This paper focuses on adding an interpretable layer on top of Semantic Textual Similarity (STS), which measures the degree of semantic equivalence between two sentences. The interpretability layer is formalized as the alignment between pairs of segments across the two sentences, where the relation between the segments is labeled with a relation type and a similarity score. We present a publicly available dataset of sentence pairs annotated following the formalization. We then develop a system trained on this dataset which, given a sentence pair, explains what is similar and different, in the form of graded and typed segment alignments. When evaluated on the dataset, the system performs better than an informed baseline, showing that the dataset and task are well-defined and feasible. Most importantly, two user studies show how the system output can be used to automatically produce explanations in natural language. Users performed better when having access to the explanations, providing preliminary evidence that our dataset and method to automatically produce explanations is useful in real applications.",
"title": ""
},
{
"docid": "b1f3f0dac49d6613f381b30ebf5b0ad7",
"text": "In the current Web scenario a video browsing tool that produces on-the-fly storyboards is more and more a need. Video summary techniques can be helpful but, due to their long processing time, they are usually unsuitable for on-the-fly usage. Therefore, it is common to produce storyboards in advance, penalizing users customization. The lack of customization is more and more critical, as users have different demands and might access the Web with several different networking and device technologies. In this paper we propose STIMO, a summarization technique designed to produce on-the-fly video storyboards. STIMO produces still and moving storyboards and allows advanced users customization (e.g., users can select the storyboard length and the maximum time they are willing to wait to get the storyboard). STIMO is based on a fast clustering algorithm that selects the most representative video contents using HSV frame color distribution. Experimental results show that STIMO produces storyboards with good quality and in a time that makes on-the-fly usage possible.",
"title": ""
},
{
"docid": "16afaad8bfdc64f9d97e9829f2029bc6",
"text": "The combination of limited individual information and costly information acquisition in markets for experience goods leads us to believe that significant peer effects drive demand in these markets. In this paper we model the effects of peers on the demand patterns of products in the market experience goods microfunding. By analyzing data from an online crowdfunding platform from 2006 to 2010 we are able to ascertain that peer effects, and not network externalities, influence consumption.",
"title": ""
},
{
"docid": "c6283ee48fd5115d28e4ea0812150f25",
"text": "Stochastic regular bi-languages has been recently proposed to model the joint probability distributions appearing in some statistical approaches of Spoken Dialog Systems. To this end a deterministic and probabilistic finite state biautomaton was defined to model the distribution probabilities for the dialog model. In this work we propose and evaluate decision strategies over the defined probabilistic finite state bi-automaton to select the best system action at each step of the interaction. To this end the paper proposes some heuristic decision functions that consider both action probabilities learn from a corpus and number of known attributes at running time. We compare either heuristics based on a single next turn or based on entire paths over the automaton. Experimental evaluation was carried out to test the model and the strategies over the Let’s Go Bus Information system. The results obtained show good system performances. They also show that local decisions can lead to better system performances than best path-based decisions due to the unpredictability of the user behaviors.",
"title": ""
},
{
"docid": "dffe5305558e10a0ceba499f3a01f4d8",
"text": "A simple framework Probabilistic Multi-view Graph Embedding (PMvGE) is proposed for multi-view feature learning with many-to-many associations so that it generalizes various existing multi-view methods. PMvGE is a probabilistic model for predicting new associations via graph embedding of the nodes of data vectors with links of their associations. Multi-view data vectors with many-to-many associations are transformed by neural networks to feature vectors in a shared space, and the probability of new association between two data vectors is modeled by the inner product of their feature vectors. While existing multi-view feature learning techniques can treat only either of many-to-many association or non-linear transformation, PMvGE can treat both simultaneously. By combining Mercer’s theorem and the universal approximation theorem, we prove that PMvGE learns a wide class of similarity measures across views. Our likelihoodbased estimator enables efficient computation of non-linear transformations of data vectors in largescale datasets by minibatch SGD, and numerical experiments illustrate that PMvGE outperforms existing multi-view methods.",
"title": ""
},
{
"docid": "477769b83e70f1d46062518b1d692664",
"text": "Deep Neural Networks (DNNs) have been demonstrated to perform exceptionally well on most recognition tasks such as image classification and segmentation. However, they have also been shown to be vulnerable to adversarial examples. This phenomenon has recently attracted a lot of attention but it has not been extensively studied on multiple, large-scale datasets and complex tasks such as semantic segmentation which often require more specialised networks with additional components such as CRFs, dilated convolutions, skip-connections and multiscale processing. In this paper, we present what to our knowledge is the first rigorous evaluation of adversarial attacks on modern semantic segmentation models, using two large-scale datasets. We analyse the effect of different network architectures, model capacity and multiscale processing, and show that many observations made on the task of classification do not always transfer to this more complex task. Furthermore, we show how mean-field inference in deep structured models and multiscale processing naturally implement recently proposed adversarial defenses. Our observations will aid future efforts in understanding and defending against adversarial examples. Moreover, in the shorter term, we show which segmentation models should currently be preferred in safety-critical applications due to their inherent robustness.",
"title": ""
},
{
"docid": "83e0fdbaa10c01aecdbe9cf853511230",
"text": "We use an online travel context to test three aspects of communication",
"title": ""
},
{
"docid": "58af6565b74f68371a1c61eab44a72c5",
"text": "Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning (ML) techniques to impressive results in regression, classification, data generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication alongside the increasing size of datasets is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical ML algorithms. Here we review the literature in quantum ML and discuss perspectives for a mixed readership of classical ML and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in ML are identified as promising directions for the field. Practical questions, such as how to upload classical data into quantum form, will also be addressed.",
"title": ""
},
{
"docid": "ea937e1209c270a7b6ab2214e0989fed",
"text": "With current projections regarding the growth of Internet sales, online retailing raises many questions about how to market on the Net. While convenience impels consumers to purchase items on the web, quality remains a significant factor in deciding where to shop online. The competition is increasing and personalization is considered to be the competitive advantage that will determine the winners in the market of online shopping in the following years. Recommender systems are a means of personalizing a site and a solution to the customer’s information overload problem. As such, many e-commerce sites already use them to facilitate the buying process. In this paper we present a recommender system for online shopping focusing on the specific characteristics and requirements of electronic retailing. We use a hybrid model supporting dynamic recommendations, which eliminates the problems the underlying techniques have when applied solely. At the end, we conclude with some ideas for further development and research in this area.",
"title": ""
},
{
"docid": "fa9571673fe848d1d119e2d49f21d28d",
"text": "Convolutional Neural Networks (CNNs) trained on large scale RGB databases have become the secret sauce in the majority of recent approaches for object categorization from RGB-D data. Thanks to colorization techniques, these methods exploit the filters learned from 2D images to extract meaningful representations in 2.5D. Still, the perceptual signature of these two kind of images is very different, with the first usually strongly characterized by textures, and the second mostly by silhouettes of objects. Ideally, one would like to have two CNNs, one for RGB and one for depth, each trained on a suitable data collection, able to capture the perceptual properties of each channel for the task at hand. This has not been possible so far, due to the lack of a suitable depth database. This paper addresses this issue, proposing to opt for synthetically generated images rather than collecting by hand a 2.5D large scale database. While being clearly a proxy for real data, synthetic images allow to trade quality for quantity, making it possible to generate a virtually infinite amount of data. We show that the filters learned from such data collection, using the very same architecture typically used on visual data, learns very different filters, resulting in depth features (a) able to better characterize the different facets of depth images, and (b) complementary with respect to those derived from CNNs pre-trained on 2D datasets. Experiments on two publicly available databases show the power of our approach.",
"title": ""
},
{
"docid": "b54ca99ae8818517d5c04100bad0f3b4",
"text": "Finding the sparsest solutions to a tensor complementarity problem is generally NP-hard due to the nonconvexity and noncontinuity of the involved 0 norm. In this paper, a special type of tensor complementarity problems with Z -tensors has been considered. Under some mild conditions, we show that to pursuit the sparsest solutions is equivalent to solving polynomial programming with a linear objective function. The involved conditions guarantee the desired exact relaxation and also allow to achieve a global optimal solution to the relaxednonconvexpolynomial programming problem. Particularly, in comparison to existing exact relaxation conditions, such as RIP-type ones, our proposed conditions are easy to verify. This research was supported by the National Natural Science Foundation of China (11301022, 11431002), the State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University (RCS2014ZT20, RCS2014ZZ01), and the Hong Kong Research Grant Council (Grant No. PolyU 502111, 501212, 501913 and 15302114). B Ziyan Luo [email protected] Liqun Qi [email protected] Naihua Xiu [email protected] 1 State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University, Beijing 100044, People’s Repubic of China 2 Department of Applied Mathematics, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong, People’s Repubic of China 3 Department of Mathematics, School of Science, Beijing Jiaotong University, Beijing, People’s Repubic of China 123 Author's personal copy",
"title": ""
},
{
"docid": "fbcad29c075e8d58b9f6df5ee70aa0be",
"text": "We present a motion planning framework for autonomous on-road driving considering both the uncertainty caused by an autonomous vehicle and other traffic participants. The future motion of traffic participants is predicted using a local planner, and the uncertainty along the predicted trajectory is computed based on Gaussian propagation. For the autonomous vehicle, the uncertainty from localization and control is estimated based on a Linear-Quadratic Gaussian (LQG) framework. Compared with other safety assessment methods, our framework allows the planner to avoid unsafe situations more efficiently, thanks to the direct uncertainty information feedback to the planner. We also demonstrate our planner's ability to generate safer trajectories compared to planning only with a LQG framework.",
"title": ""
},
{
"docid": "6fb8b461530af2c56ec0fac36dd85d3a",
"text": "Psoriatic arthritis is one of the spondyloarthritis. It is a disease of clinical heterogenicity, which may affect peripheral joints, as well as axial spine, with presence of inflammatory lesions in soft tissue, in a form of dactylitis and enthesopathy. Plain radiography remains the basic imaging modality for PsA diagnosis, although early inflammatory changes affecting soft tissue and bone marrow cannot be detected with its use, or the image is indistinctive. Typical radiographic features of PsA occur in an advanced disease, mainly within the synovial joints, but also in fibrocartilaginous joints, such as sacroiliac joints, and additionally in entheses of tendons and ligaments. Moll and Wright classified PsA into 5 subtypes: asymmetric oligoarthritis, symmetric polyarthritis, arthritis mutilans, distal interphalangeal arthritis of the hands and feet and spinal column involvement. In this part of the paper we discuss radiographic features of the disease. The next one will address magnetic resonance imaging and ultrasonography.",
"title": ""
}
] | scidocsrr |
64330d665d11d79b3ab1fa880ebde586 | Liveness Detection Using Gaze Collinearity | [
{
"docid": "2e3f05ee44b276b51c1b449e4a62af94",
"text": "We make some simple extensions to the Active Shape Model of Cootes et al. [4], and use it to locate features in frontal views of upright faces. We show on independent test data that with the extensions the Active Shape Model compares favorably with more sophisticated methods. The extensions are (i) fitting more landmarks than are actually needed (ii) selectively using twoinstead of one-dimensional landmark templates (iii) adding noise to the training set (iv) relaxing the shape model where advantageous (v) trimming covariance matrices by setting most entries to zero, and (vi) stacking two Active Shape Models in series.",
"title": ""
},
{
"docid": "fe33ff51ca55bf745bdcdf8ee02e2d36",
"text": "A robust face detection technique along with mouth localization, processing every frame in real time (video rate), is presented. Moreover, it is exploited for motion analysis onsite to verify \"liveness\" as well as to achieve lip reading of digits. A methodological novelty is the suggested quantized angle features (\"quangles\") being designed for illumination invariance without the need for preprocessing (e.g., histogram equalization). This is achieved by using both the gradient direction and the double angle direction (the structure tensor angle), and by ignoring the magnitude of the gradient. Boosting techniques are applied in a quantized feature space. A major benefit is reduced processing time (i.e., that the training of effective cascaded classifiers is feasible in very short time, less than 1 h for data sets of order 104). Scale invariance is implemented through the use of an image scale pyramid. We propose \"liveness\" verification barriers as applications for which a significant amount of computation is avoided when estimating motion. Novel strategies to avert advanced spoofing attempts (e.g., replayed videos which include person utterances) are demonstrated. We present favorable results on face detection for the YALE face test set and competitive results for the CMU-MIT frontal face test set as well as on \"liveness\" verification barriers.",
"title": ""
},
{
"docid": "b40129a15767189a7a595db89c066cf8",
"text": "To increase reliability of face recognition system, the system must be able to distinguish real face from a copy of face such as a photograph. In this paper, we propose a fast and memory efficient method of live face detection for embedded face recognition system, based on the analysis of the movement of the eyes. We detect eyes in sequential input images and calculate variation of each eye region to determine whether the input face is a real face or not. Experimental results show that the proposed approach is competitive and promising for live face detection. Keywords—Liveness Detection, Eye detection, SQI.",
"title": ""
}
] | [
{
"docid": "0a08814f1f5f5f489f756df9ad5be051",
"text": "Jump height is a critical aspect of volleyball players' blocking and attacking performance. Although previous studies demonstrated that creatine monohydrate supplementation (CrMS) improves jumping performance, none have yet evaluated its effect among volleyball players with proficient jumping skills. We examined the effect of 4 wk of CrMS on 1 RM spike jump (SJ) and repeated block jump (BJ) performance among 12 elite males of the Sherbrooke University volleyball team. Using a parallel, randomized, double-blind protocol, participants were supplemented with a placebo or creatine solution for 28 d, at a dose of 20 g/d in days 1-4, 10 g/d on days 5-6, and 5 g/d on days 7-28. Pre- and postsupplementation, subjects performed the 1 RM SJ test, followed by the repeated BJ test (10 series of 10 BJs; 3 s interval between jumps; 2 min recovery between series). Due to injuries (N = 2) and outlier data (N = 2), results are reported for eight subjects. Following supplementation, both groups improved SJ and repeated BJ performance. The change in performance during the 1 RM SJ test and over the first two repeated BJ series was unclear between groups. For series 3-6 and 7-10, respectively, CrMS further improved repeated BJ performance by 2.8% (likely beneficial change) and 1.9% (possibly beneficial change), compared with the placebo. Percent repeated BJ decline in performance across the 10 series did not differ between groups pre- and postsupplementation. In conclusion, CrMS likely improved repeated BJ height capability without influencing the magnitude of muscular fatigue in these elite, university-level volleyball players.",
"title": ""
},
{
"docid": "162ad6b8d48f5d6c76067d25b320a947",
"text": "Image Understanding is fundamental to systems that need to extract contents and infer concepts from images. In this paper, we develop an architecture for understanding images, through which a system can recognize the content and the underlying concepts of an image and, reason and answer questions about both using a visual module, a reasoning module, and a commonsense knowledge base. In this architecture, visual data combines with background knowledge and; iterates through visual and reasoning modules to answer questions about an image or to generate a textual description of an image. We first provide motivations of such a Deep Image Understanding architecture and then, we describe the necessary components it should include. We also introduce our own preliminary implementation of this architecture and empirically show how this more generic implementation compares with a recent end-to-end Neural approach on specific applications. We address the knowledge-representation challenge in such an architecture by representing an image using a directed labeled graph (called Scene Description Graph). Our implementation uses generic visual recognition techniques and commonsense reasoning1 to extract such graphs from images. Our experiments show that the extracted graphs capture the syntactic and semantic content of an image with reasonable accuracy.",
"title": ""
},
{
"docid": "9c98685d50238cebb1e23e00201f8c09",
"text": "A frequently asked questions (FAQ) retrieval system improves the access to information by allowing users to pose natural language queries over an FAQ collection. From an information retrieval perspective, FAQ retrieval is a challenging task, mainly because of the lexical gap that exists between a query and an FAQ pair, both of which are typically very short. In this work, we explore the use of supervised learning to rank to improve the performance of domain-specific FAQ retrieval. While supervised learning-to-rank models have been shown to yield effective retrieval performance, they require costly human-labeled training data in the form of document relevance judgments or question paraphrases. We investigate how this labeling effort can be reduced using a labeling strategy geared toward the manual creation of query paraphrases rather than the more time-consuming relevance judgments. In particular, we investigate two such strategies, and test them by applying supervised ranking models to two domain-specific FAQ retrieval data sets, showcasing typical FAQ retrieval scenarios. Our experiments show that supervised ranking models can yield significant improvements in the precision-at-rank-5 measure compared to unsupervised baselines. Furthermore, we show that a supervised model trained using data labeled via a low-effort paraphrase-focused strategy has the same performance as that of the same model trained using fully labeled data, indicating that the strategy is effective at reducing the labeling effort while retaining the performance gains of the supervised approach. To encourage further research on FAQ retrieval we make our FAQ retrieval data set publicly available.",
"title": ""
},
{
"docid": "3196b8017cfb9a8cbfef0e892c508d05",
"text": "The nuclear envelope is a physical barrier that isolates the cellular DNA from the rest of the cell, thereby limiting pathogen invasion. The Human Immunodeficiency Virus (HIV) has a remarkable ability to enter the nucleus of non-dividing target cells such as lymphocytes, macrophages and dendritic cells. While this step is critical for replication of the virus, it remains one of the less understood aspects of HIV infection. Here, we review the viral and host factors that favor or inhibit HIV entry into the nucleus, including the viral capsid, integrase, the central viral DNA flap, and the host proteins CPSF6, TNPO3, Nucleoporins, SUN1, SUN2, Cyclophilin A and MX2. We review recent perspectives on the mechanism of action of these factors, and formulate fundamental questions that remain. Overall, these findings deepen our understanding of HIV nuclear import and strengthen the favorable position of nuclear HIV entry for antiviral targeting.",
"title": ""
},
{
"docid": "faea285dfac31a520e23c0a3ee06cea6",
"text": "Since 2006, Alberts and Dorofee have led MSCE with a focus on returning risk management to its original intent—supporting effective management decisions that lead to program success. They began rethinking the traditional approaches to risk management, which led to the development of SEI Mosaic, a suite of methodologies that approach managing risk from a systemic view across the life cycle and supply chain. Using a systemic risk management approach enables program managers to develop and implement strategic, high-leverage mitigation solutions that align with mission and objectives.",
"title": ""
},
{
"docid": "d470122d50dbb118ae9f3068998f8e14",
"text": "Tumor heterogeneity presents a challenge for inferring clonal evolution and driver gene identification. Here, we describe a method for analyzing the cancer genome at a single-cell nucleotide level. To perform our analyses, we first devised and validated a high-throughput whole-genome single-cell sequencing method using two lymphoblastoid cell line single cells. We then carried out whole-exome single-cell sequencing of 90 cells from a JAK2-negative myeloproliferative neoplasm patient. The sequencing data from 58 cells passed our quality control criteria, and these data indicated that this neoplasm represented a monoclonal evolution. We further identified essential thrombocythemia (ET)-related candidate mutations such as SESN2 and NTRK1, which may be involved in neoplasm progression. This pilot study allowed the initial characterization of the disease-related genetic architecture at the single-cell nucleotide level. Further, we established a single-cell sequencing method that opens the way for detailed analyses of a variety of tumor types, including those with high genetic complex between patients.",
"title": ""
},
{
"docid": "b09cacfb35cd02f6a5345c206347c6ae",
"text": "Facebook, as one of the most popular social networking sites among college students, provides a platform for people to manage others' impressions of them. People tend to present themselves in a favorable way on their Facebook profile. This research examines the impact of using Facebook on people's perceptions of others' lives. It is argued that those with deeper involvement with Facebook will have different perceptions of others than those less involved due to two reasons. First, Facebook users tend to base judgment on examples easily recalled (the availability heuristic). Second, Facebook users tend to attribute the positive content presented on Facebook to others' personality, rather than situational factors (correspondence bias), especially for those they do not know personally. Questionnaires, including items measuring years of using Facebook, time spent on Facebook each week, number of people listed as their Facebook \"friends,\" and perceptions about others' lives, were completed by 425 undergraduate students taking classes across various academic disciplines at a state university in Utah. Surveys were collected during regular class period, except for two online classes where surveys were submitted online. The multivariate analysis indicated that those who have used Facebook longer agreed more that others were happier, and agreed less that life is fair, and those spending more time on Facebook each week agreed more that others were happier and had better lives. Furthermore, those that included more people whom they did not personally know as their Facebook \"friends\" agreed more that others had better lives.",
"title": ""
},
{
"docid": "4a18861ce15cfae3eaa2519d2fdc98c8",
"text": "This paper presents deadlock prevention are use to solve the deadlock problem of flexible manufacturing systems (FMS). Petri nets have been successfully as one of the most powerful tools for modeling of FMS. Their modeling power and a mathematical arsenal supporting the analysis of the modeled systems stimulate the increasing interest in Petri nets. With the structural object of Petri nets, siphons are important in the analysis and control of deadlocks in Petri nets (PNs) excellent properties. The deadlock prevention method are caused by the unmarked siphons, during the Petri nets are an effective way to model, analyze, simulation and control deadlocks in FMS is presented in this work. The characterization of special structural elements in Petri net so-called siphons has been a major approach for the investigation of deadlock-freeness in the center of FMS. The siphons are structures which allow for some implications on the net's can be well controlled by adding a control place (called monitor) for each uncontrolled siphon in the net in order to become deadlock-free situation in the system. Finally, We proposed method of modeling, simulation, control of FMS by using Petri nets, where deadlock analysis have Production line in parallel processing is demonstrate by a practical example used Petri Net-tool in MATLAB, is effective, and explicitly although its off-line computation.",
"title": ""
},
{
"docid": "2a384fe57f79687cba8482cabfb4243b",
"text": "The Semantic Web graph is growing at an incredible pace, enabling opportunities to discover new knowledge by interlinking and analyzing previously unconnected data sets. This confronts researchers with a conundrum: Whilst the data is available the programming models that facilitate scalability and the infrastructure to run various algorithms on the graph are missing. Some use MapReduce – a good solution for many problems. However, even some simple iterative graph algorithms do not map nicely to that programming model requiring programmers to shoehorn their problem to the MapReduce model. This paper presents the Signal/Collect programming model for synchronous and asynchronous graph algorithms. We demonstrate that this abstraction can capture the essence of many algorithms on graphs in a concise and elegant way by giving Signal/Collect adaptations of various relevant algorithms. Furthermore, we built and evaluated a prototype Signal/Collect framework that executes algorithms in our programming model. We empirically show that this prototype transparently scales and that guiding computations by scoring as well as asynchronicity can greatly improve the convergence of some example algorithms. We released the framework under the Apache License 2.0 (at http://www.ifi.uzh.ch/ddis/research/sc).",
"title": ""
},
{
"docid": "6241cb482e386435be2e33caf8d94216",
"text": "A fog radio access network (F-RAN) is studied, in which $K_T$ edge nodes (ENs) connected to a cloud server via orthogonal fronthaul links, serve $K_R$ users through a wireless Gaussian interference channel. Both the ENs and the users have finite-capacity cache memories, which are filled before the user demands are revealed. While a centralized placement phase is used for the ENs, which model static base stations, a decentralized placement is leveraged for the mobile users. An achievable transmission scheme is presented, which employs a combination of interference alignment, zero-forcing and interference cancellation techniques in the delivery phase, and the \\textit{normalized delivery time} (NDT), which captures the worst-case latency, is analyzed.",
"title": ""
},
{
"docid": "d994b23ea551f23215232c0771e7d6b3",
"text": "It is said that there’s nothing so practical as good theory. It may also be said that there’s nothing so theoretically interesting as good practice1. This is particularly true of efforts to relate constructivism as a theory of learning to the practice of instruction. Our goal in this paper is to provide a clear link between the theoretical principles of constructivism, the practice of instructional design, and the practice of teaching. We will begin with a basic characterization of constructivism identifying what we believe to be the central principles in learning and understanding. We will then identify and elaborate on eight instructional principles for the design of a constructivist learning environment. Finally, we will examine what we consider to be one of the best exemplars of a constructivist learning environment -Problem Based Learning as described by Barrows (1985, 1986, 1992).",
"title": ""
},
{
"docid": "7b999aaaa1374499b910c3f7d0918484",
"text": "Research in face recognition has largely been divided between those projects concerned with front-end image processing and those projects concerned with memory for familiar people. These perceptual and cognitive programmes of research have proceeded in parallel, with only limited mutual influence. In this paper we present a model of human face recognition which combines both a perceptual and a cognitive component. The perceptual front-end is based on principal components analysis of images, and the cognitive back-end is based on a simple interactive activation and competition architecture. We demonstrate that this model has a much wider predictive range than either perceptual or cognitive models alone, and we show that this type of combination is necessary in order to analyse some important effects in human face recognition. In sum, the model takes varying images of \"known\" faces and delivers information about these people.",
"title": ""
},
{
"docid": "1503d2a235b2ce75516d18cdea42bbb5",
"text": "Phosphatidylinositol-3,4,5-trisphosphate (PtdIns(3,4,5)P3 or PIP3) mediates signalling pathways as a second messenger in response to extracellular signals. Although primordial functions of phospholipids and RNAs have been hypothesized in the ‘RNA world’, physiological RNA–phospholipid interactions and their involvement in essential cellular processes have remained a mystery. We explicate the contribution of lipid-binding long non-coding RNAs (lncRNAs) in cancer cells. Among them, long intergenic non-coding RNA for kinase activation (LINK-A) directly interacts with the AKT pleckstrin homology domain and PIP3 at the single-nucleotide level, facilitating AKT–PIP3 interaction and consequent enzymatic activation. LINK-A-dependent AKT hyperactivation leads to tumorigenesis and resistance to AKT inhibitors. Genomic deletions of the LINK-A PIP3-binding motif dramatically sensitized breast cancer cells to AKT inhibitors. Furthermore, meta-analysis showed the correlation between LINK-A expression and incidence of a single nucleotide polymorphism (rs12095274: A > G), AKT phosphorylation status, and poor outcomes for breast and lung cancer patients. PIP3-binding lncRNA modulates AKT activation with broad clinical implications.",
"title": ""
},
{
"docid": "e51fe12eecec4116a9a3b7f4c2281938",
"text": "The use of wireless technologies in automation systems offers attractive benefits, but introduces a number of new technological challenges. The paper discusses these aspects for home and building automation applications. Relevant standards are surveyed. A wireless extension to KNX/EIB based on tunnelling over IEEE 802.15.4 is presented. The design emulates the properties of the KNX/EIB wired medium via wireless communication, allowing a seamless extension. Furthermore, it is geared towards zero-configuration and supports the easy integration of protocol security.",
"title": ""
},
{
"docid": "e6b4097ead39f9b5144e2bd8551762ed",
"text": "Thanks to advances in medical imaging technologies and numerical methods, patient-specific modelling is more and more used to improve diagnosis and to estimate the outcome of surgical interventions. It requires the extraction of the domain of interest from the medical scans of the patient, as well as the discretisation of this geometry. However, extracting smooth multi-material meshes that conform to the tissue boundaries described in the segmented image is still an active field of research. We propose to solve this issue by combining an implicit surface reconstruction method with a multi-region mesh extraction scheme. The surface reconstruction algorithm is based on multi-level partition of unity implicit surfaces, which we extended to the multi-material case. The mesh generation algorithm consists in a novel multi-domain version of the marching tetrahedra. It generates multi-region meshes as a set of triangular surface patches consistently joining each other at material junctions. This paper presents this original meshing strategy, starting from boundary points extraction from the segmented data to heterogeneous implicit surface definition, multi-region surface triangulation and mesh adaptation. Results indicate that the proposed approach produces smooth and high-quality triangular meshes with a reasonable geometric accuracy. Hence, the proposed method is well suited for subsequent volume mesh generation and finite element simulations.",
"title": ""
},
{
"docid": "99880fca88bef760741f48166a51ca6f",
"text": "This paper describes first results using the Unified Medical Language System (UMLS) for distantly supervised relation extraction. UMLS is a large knowledge base which contains information about millions of medical concepts and relations between them. Our approach is evaluated using existing relation extraction data sets that contain relations that are similar to some of those in UMLS.",
"title": ""
},
{
"docid": "6893ce06d616d08cf0a9053dc9ea493d",
"text": "Hope is the sum of goal thoughts as tapped by pathways and agency. Pathways reflect the perceived capability to produce goal routes; agency reflects the perception that one can initiate action along these pathways. Using trait and state hope scales, studies explored hope in college student athletes. In Study 1, male and female athletes were higher in trait hope than nonathletes; moreover, hope significantly predicted semester grade averages beyond cumulative grade point average and overall self-worth. In Study 2, with female cross-country athletes, trait hope predicted athletic outcomes; further, weekly state hope tended to predict athletic outcomes beyond dispositional hope, training, and self-esteem, confidence, and mood. In Study 3, with female track athletes, dispositional hope significantly predicted athletic outcomes beyond variance related to athletic abilities and affectivity; moreover, athletes had higher hope than nonathletes.",
"title": ""
},
{
"docid": "103b784d7cc23663584486fa3ca396bb",
"text": "A single, stationary topic model such as latent Dirichlet allocation is inappropriate for modeling corpora that span long time periods, as the popularity of topics is likely to change over time. A number of models that incorporate time have been proposed, but in general they either exhibit limited forms of temporal variation, or require computationally expensive inference methods. In this paper we propose non-parametric Topics over Time (npTOT), a model for time-varying topics that allows an unbounded number of topics and flexible distribution over the temporal variations in those topics’ popularity. We develop a collapsed Gibbs sampler for the proposed model and compare against existing models on synthetic and real document sets.",
"title": ""
},
{
"docid": "dd3efa1bea58934793c7c6a6064e1330",
"text": "This paper gives a broad overview of a complete framework for assessing the predictive uncertainty of scientific computing applications. The framework is complete in the sense that it treats both types of uncertainty (aleatory and epistemic) and incorporates uncertainty due to the form of the model and any numerical approximations used. Aleatory (or random) uncertainties in model inputs are treated using cumulative distribution functions, while epistemic (lack of knowledge) uncertainties are treated as intervals. Approaches for propagating both types of uncertainties through the model to the system response quantities of interest are discussed. Numerical approximation errors (due to discretization, iteration, and round off) are estimated using verification techniques, and the conversion of these errors into epistemic uncertainties is discussed. Model form uncertainties are quantified using model validation procedures, which include a comparison of model predictions to experimental data and then extrapolation of this uncertainty structure to points in the application domain where experimental data do not exist. Finally, methods for conveying the total predictive uncertainty to decision makers are presented.",
"title": ""
}
] | scidocsrr |
419e64d3afee302db4f7fabe52be4e3b | Offline signature verification using classifier combination of HOG and LBP features | [
{
"docid": "7489989ecaa16bc699949608f9ffc8a1",
"text": "A method for conducting off-line handwritten signature verification is described. It works at the global image level and measures the grey level variations in the image using statistical texture features. The co-occurrence matrix and local binary pattern are analysed and used as features. This method begins with a proposed background removal. A histogram is also processed to reduce the influence of different writing ink pens used by signers. Genuine samples and random forgeries have been used to train an SVM model and random and skilled forgeries have been used for testing it. Results are reasonable according to the state-of-the-art and approaches that use the same two databases: MCYT-75 and GPDS100 Corpuses. The combination of the proposed features and those proposed by other authors, based on geometric information, also promises improvements in performance. & 2010 Elsevier Ltd. All rights reserved.",
"title": ""
}
] | [
{
"docid": "7e1f0cd43cdc9685474e19b7fd65791b",
"text": "Understanding human actions is a key problem in computer vision. However, recognizing actions is only the first step of understanding what a person is doing. In this paper, we introduce the problem of predicting why a person has performed an action in images. This problem has many applications in human activity understanding, such as anticipating or explaining an action. To study this problem, we introduce a new dataset of people performing actions annotated with likely motivations. However, the information in an image alone may not be sufficient to automatically solve this task. Since humans can rely on their lifetime of experiences to infer motivation, we propose to give computer vision systems access to some of these experiences by using recently developed natural language models to mine knowledge stored in massive amounts of text. While we are still far away from fully understanding motivation, our results suggest that transferring knowledge from language into vision can help machines understand why people in images might be performing an action.",
"title": ""
},
{
"docid": "dc2770a8318dd4aa1142efebe5547039",
"text": "The purpose of this study was to describe how reaching onset affects the way infants explore objects and their own bodies. We followed typically developing infants longitudinally from 2 through 5 months of age. At each visit we coded the behaviors infants performed with their hand when an object was attached to it versus when the hand was bare. We found increases in the performance of most exploratory behaviors after the emergence of reaching. These increases occurred both with objects and with bare hands. However, when interacting with objects, infants performed the same behaviors they performed on their bare hands but they performed them more often and in unique combinations. The results support the tenets that: (1) the development of object exploration begins in the first months of life as infants learn to selectively perform exploratory behaviors on their bodies and objects, (2) the onset of reaching is accompanied by significant increases in exploration of both objects and one's own body, (3) infants adapt their self-exploratory behaviors by amplifying their performance and combining them in unique ways to interact with objects.",
"title": ""
},
{
"docid": "f2707d7fcd5d8d9200d4cc8de8ff1042",
"text": "This paper describes recent work on the “Crosswatch” project, which is a computer vision-based smartphone system developed for providing guidance to blind and visually impaired travelers at traffic intersections. A key function of Crosswatch is self-localization - the estimation of the user's location relative to the crosswalks in the current traffic intersection. Such information may be vital to users with low or no vision to ensure that they know which crosswalk they are about to enter, and are properly aligned and positioned relative to the crosswalk. However, while computer vision-based methods have been used for finding crosswalks and helping blind travelers align themselves to them, these methods assume that the entire crosswalk pattern can be imaged in a single frame of video, which poses a significant challenge for a user who lacks enough vision to know where to point the camera so as to properly frame the crosswalk. In this paper we describe work in progress that tackles the problem of crosswalk detection and self-localization, building on recent work describing techniques enabling blind and visually impaired users to acquire 360° image panoramas while turning in place on a sidewalk. The image panorama is converted to an aerial (overhead) view of the nearby intersection, centered on the location that the user is standing at, so as to facilitate matching with a template of the intersection obtained from Google Maps satellite imagery. The matching process allows crosswalk features to be detected and permits the estimation of the user's precise location relative to the crosswalk of interest. We demonstrate our approach on intersection imagery acquired by blind users, thereby establishing the feasibility of the approach.",
"title": ""
},
{
"docid": "f9876540ce148d7b27bab53839f1bf19",
"text": "Recent research endeavors have shown the potential of using feed-forward convolutional neural networks to accomplish fast style transfer for images. In this work, we take one step further to explore the possibility of exploiting a feed-forward network to perform style transfer for videos and simultaneously maintain temporal consistency among stylized video frames. Our feed-forward network is trained by enforcing the outputs of consecutive frames to be both well stylized and temporally consistent. More specifically, a hybrid loss is proposed to capitalize on the content information of input frames, the style information of a given style image, and the temporal information of consecutive frames. To calculate the temporal loss during the training stage, a novel two-frame synergic training mechanism is proposed. Compared with directly applying an existing image style transfer method to videos, our proposed method employs the trained network to yield temporally consistent stylized videos which are much more visually pleasant. In contrast to the prior video style transfer method which relies on time-consuming optimization on the fly, our method runs in real time while generating competitive visual results.",
"title": ""
},
{
"docid": "eb6572344dbaf8e209388f888fba1c10",
"text": "[Purpose] The present study was performed to evaluate the changes in the scapular alignment, pressure pain threshold and pain in subjects with scapular downward rotation after 4 weeks of wall slide exercise or sling slide exercise. [Subjects and Methods] Twenty-two subjects with scapular downward rotation participated in this study. The alignment of the scapula was measured using radiographic analysis (X-ray). Pain and pressure pain threshold were assessed using visual analogue scale and digital algometer. Patients were assessed before and after a 4 weeks of exercise. [Results] In the within-group comparison, the wall slide exercise group showed significant differences in the resting scapular alignment, pressure pain threshold, and pain after four weeks. The between-group comparison showed that there were significant differences between the wall slide group and the sling slide group after four weeks. [Conclusion] The results of this study found that the wall slide exercise may be effective at reducing pain and improving scapular alignment in subjects with scapular downward rotation.",
"title": ""
},
{
"docid": "c39836282acc36e77c95e732f4f1c1bc",
"text": "In this paper, a new dataset, HazeRD, is proposed for benchmarking dehazing algorithms under more realistic haze conditions. HazeRD contains fifteen real outdoor scenes, for each of which five different weather conditions are simulated. As opposed to prior datasets that made use of synthetically generated images or indoor images with unrealistic parameters for haze simulation, our outdoor dataset allows for more realistic simulation of haze with parameters that are physically realistic and justified by scattering theory. All images are of high resolution, typically six to eight megapixels. We test the performance of several state-of-the-art dehazing techniques on HazeRD. The results exhibit a significant difference among algorithms across the different datasets, reiterating the need for more realistic datasets such as ours and for more careful benchmarking of the methods.",
"title": ""
},
{
"docid": "49680e94843e070a5ed0179798f66f33",
"text": "Routing protocols for Wireless Sensor Networks (WSN) are designed to select parent nodes so that data packets can reach their destination in a timely and efficient manner. Typically neighboring nodes with strongest connectivity are more selected as parents. This Greedy Routing approach can lead to unbalanced routing loads in the network. Consequently, the network experiences the early death of overloaded nodes causing permanent network partition. Herein, we propose a framework for load balancing of routing in WSN. In-network path tagging is used to monitor network traffic load of nodes. Based on this, nodes are identified as being relatively overloaded, balanced or underloaded. A mitigation algorithm finds suitable new parents for switching from overloaded nodes. The routing engine of the child of the overloaded node is then instructed to switch parent. A key future of the proposed framework is that it is primarily implemented at the Sink and so requires few changes to existing routing protocols. The framework was implemented in TinyOS on TelosB motes and its performance was assessed in a testbed network and in TOSSIM simulation. The algorithm increased the lifetime of the network by 41 % as recorded in the testbed experiment. The Packet Delivery Ratio was also improved from 85.97 to 99.47 %. Finally a comparative study was performed using the proposed framework with various existing routing protocols.",
"title": ""
},
{
"docid": "c9c44cc22c71d580f4b2a24cd91ac274",
"text": "One of the first steps in the utterance interpretation pipeline of many task-oriented conversational AI systems is to identify user intents and the corresponding slots. Neural sequence labeling models have achieved very high accuracy on these tasks when trained on large amounts of training data. However, collecting this data is very time-consuming and therefore it is unfeasible to collect large amounts of data for many languages. For this reason, it is desirable to make use of existing data in a high-resource language to train models in low-resource languages. In this paper, we investigate the performance of three different methods for cross-lingual transfer learning, namely (1) translating the training data, (2) using cross-lingual pre-trained embeddings, and (3) a novel method of using a multilingual machine translation encoder as contextual word representations. We find that given several hundred training examples in the the target language, the latter two methods outperform translating the training data. Further, in very low-resource settings, we find that multilingual contextual word representations give better results than using crosslingual static embeddings. We release a dataset of around 57k annotated utterances in English (43k), Spanish (8.6k) and Thai (5k) for three task oriented domains at https://fb.me/multilingual_task_oriented_data.",
"title": ""
},
{
"docid": "1969bf5a07349cc5a9b498e0437e41fe",
"text": "In this work, we tackle the problem of instance segmentation, the task of simultaneously solving object detection and semantic segmentation. Towards this goal, we present a model, called MaskLab, which produces three outputs: box detection, semantic segmentation, and direction prediction. Building on top of the Faster-RCNN object detector, the predicted boxes provide accurate localization of object instances. Within each region of interest, MaskLab performs foreground/background segmentation by combining semantic and direction prediction. Semantic segmentation assists the model in distinguishing between objects of different semantic classes including background, while the direction prediction, estimating each pixel's direction towards its corresponding center, allows separating instances of the same semantic class. Moreover, we explore the effect of incorporating recent successful methods from both segmentation and detection (e.g., atrous convolution and hypercolumn). Our proposed model is evaluated on the COCO instance segmentation benchmark and shows comparable performance with other state-of-art models.",
"title": ""
},
{
"docid": "49d6b3f314b61ace11afc5eea7b652e3",
"text": "Euler diagrams visually represent containment, intersection and exclusion using closed curves. They first appeared several hundred years ago, however, there has been a resurgence in Euler diagram research in the twenty-first century. This was initially driven by their use in visual languages, where they can be used to represent logical expressions diagrammatically. This work lead to the requirement to automatically generate Euler diagrams from an abstract description. The ability to generate diagrams has accelerated their use in information visualization, both in the standard case where multiple grouping of data items inside curves is required and in the area-proportional case where the area of curve intersections is important. As a result, examining the usability of Euler diagrams has become an important aspect of this research. Usability has been investigated by empirical studies, but much research has concentrated on wellformedness, which concerns how curves and other features of the diagram interrelate. This work has revealed the drawability of Euler diagrams under various wellformedness properties and has developed embedding methods that meet these properties. Euler diagram research surveyed in this paper includes theoretical results, generation techniques, transformation methods and the development of automated reasoning systems for Euler diagrams. It also overviews application areas and the ways in which Euler diagrams have been extended.",
"title": ""
},
{
"docid": "db1cdc2a4e3fe26146a1f9c8b0926f9e",
"text": "Sememes are defined as the minimum semantic units of human languages. People have manually annotated lexical sememes for words and form linguistic knowledge bases. However, manual construction is time-consuming and labor-intensive, with significant annotation inconsistency and noise. In this paper, we for the first time explore to automatically predict lexical sememes based on semantic meanings of words encoded by word embeddings. Moreover, we apply matrix factorization to learn semantic relations between sememes and words. In experiments, we take a real-world sememe knowledge base HowNet for training and evaluation, and the results reveal the effectiveness of our method for lexical sememe prediction. Our method will be of great use for annotation verification of existing noisy sememe knowledge bases and annotation suggestion of new words and phrases.",
"title": ""
},
{
"docid": "681641e2593cad85fb1633d1027a9a4f",
"text": "Overview Aggressive driving is a major concern of the American public, ranking at or near the top of traffic safety issues in national surveys of motorists. However, the concept of aggressive driving is not well defined, and its overall impact on traffic safety has not been well quantified due to inadequacies and limitation of available data. This paper reviews published scientific literature on aggressive driving; discusses various definitions of aggressive driving; cites several specific behaviors that are typically associated with aggressive driving; and summarizes past research on the individuals or groups most likely to behave aggressively. Since adequate data to precisely quantify the percentage of fatal crashes that involve aggressive driving do not exist, in this review, we have quantified the number of fatal crashes in which one or more driver actions typically associated with aggressive driving were reported. We found these actions were reported in 56 percent of fatal crashes from 2003 through 2007, with excessive speed being the number one factor. Ideally, an estimate of the prevalence of aggressive driving would include only instances in which such actions were performed intentionally; however, available data on motor vehicle crashes do not contain such information, thus it is important to recognize that this 56 percent may to some degree overestimate the contribution of aggressive driving to fatal crashes. On the other hand, it is likely that aggressive driving contributes to at least some crashes in which it is not reported due to lack of evidence. Despite the clear limitations associated with our attempt to estimate the contribution of potentially-aggressive driver actions to fatal crashes, it is clear that aggressive driving poses a serious traffic safety threat. In addition, our review further indicated that the \" Do as I say, not as I do \" culture, previously reported in the Foundation's Traffic Safety Culture Index, very much applies to aggressive driving.",
"title": ""
},
{
"docid": "237437eae6a6154fb3b32c4c6c1fed07",
"text": "Ontology is playing an increasingly important role in knowledge management and the Semantic Web. This study presents a novel episode-based ontology construction mechanism to extract domain ontology from unstructured text documents. Additionally, fuzzy numbers for conceptual similarity computing are presented for concept clustering and taxonomic relation definitions. Moreover, concept attributes and operations can be extracted from episodes to construct a domain ontology, while non-taxonomic relations can be generated from episodes. The fuzzy inference mechanism is also applied to obtain new instances for ontology learning. Experimental results show that the proposed approach can effectively construct a Chinese domain ontology from unstructured text documents. 2006 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "462afb864b255f94deefb661174a598b",
"text": "Due to the heterogeneous and resource-constrained characters of Internet of Things (IoT), how to guarantee ubiquitous network connectivity is challenging. Although LTE cellular technology is the most promising solution to provide network connectivity in IoTs, information diffusion by cellular network not only occupies its saturating bandwidth, but also costs additional fees. Recently, NarrowBand-IoT (NB-IoT), introduced by 3GPP, is designed for low-power massive devices, which intends to refarm wireless spectrum and increase network coverage. For the sake of providing high link connectivity and capacity, we stimulate effective cooperations among user equipments (UEs), and propose a social-aware group formation framework to allocate resource blocks (RBs) effectively following an in-band NB-IoT solution. Specifically, we first introduce a social-aware multihop device-to-device (D2D) communication scheme to upload information toward the eNodeB within an LTE, so that a logical cooperative D2D topology can be established. Then, we formulate the D2D group formation as a scheduling optimization problem for RB allocation, which selects the feasible partition for the UEs by jointly considering relay method selection and spectrum reuse for NB-IoTs. Since the formulated optimization problem has a high computational complexity, we design a novel heuristic with a comprehensive consideration of power control and relay selection. Performance evaluations based on synthetic and real trace simulations manifest that the presented method can significantly increase link connectivity, link capacity, network throughput, and energy efficiency comparing with the existing solutions.",
"title": ""
},
{
"docid": "3440de9ea0f76ba39949edcb5e2a9b54",
"text": "This document is not intended to create, does not create, and may not be relied upon to create any rights, substantive or procedural, enforceable by law by any party in any matter civil or criminal. Findings and conclusions of the research reported here are those of the authors and do not necessarily reflect the official position or policies of the U.S. Department of Justice. The products, manufacturers, and organizations discussed in this document are presented for informational purposes only and do not constitute product approval or endorsement by the Much of crime mapping is devoted to detecting high-crime-density areas known as hot spots. Hot spot analysis helps police identify high-crime areas, types of crime being committed, and the best way to respond. This report discusses hot spot analysis techniques and software and identifies when to use each one. The visual display of a crime pattern on a map should be consistent with the type of hot spot and possible police action. For example, when hot spots are at specific addresses, a dot map is more appropriate than an area map, which would be too imprecise. In this report, chapters progress in sophis tication. Chapter 1 is for novices to crime mapping. Chapter 2 is more advanced, and chapter 3 is for highly experienced analysts. The report can be used as a com panion to another crime mapping report ■ Identifying hot spots requires multiple techniques; no single method is suffi cient to analyze all types of crime. ■ Current mapping technologies have sig nificantly improved the ability of crime analysts and researchers to understand crime patterns and victimization. ■ Crime hot spot maps can most effective ly guide police action when production of the maps is guided by crime theories (place, victim, street, or neighborhood).",
"title": ""
},
{
"docid": "e4e97569f53ddde763f4f28559c96ba6",
"text": "With a goal of understanding what drives generalization in deep networks, we consider several recently suggested explanations, including norm-based control, sharpness and robustness. We study how these measures can ensure generalization, highlighting the importance of scale normalization, and making a connection between sharpness and PAC-Bayes theory. We then investigate how well the measures explain different observed phenomena.",
"title": ""
},
{
"docid": "5f4e761af11ace5a4d6819431893a605",
"text": "The high power density converter is required due to the strict demands of volume and weight in more electric aircraft, which makes SiC extremely attractive for this application. In this work, a prototype of 50 kW SiC high power density converter with the topology of two-level three-phase voltage source inverter is demonstrated. This converter is driven at high switching speed based on the optimization in switching characterization. It operates at a switching frequency up to 100 kHz and a low dead time of 250 ns. And the converter efficiency is measured to be 99% at 40 kHz and 97.8% at 100 kHz.",
"title": ""
},
{
"docid": "6cf4315ecce8a06d9354ca2f2684113c",
"text": "BACKGROUND\nNutritional supplementation may be used to treat muscle loss with aging (sarcopenia). However, if physical activity does not increase, the elderly tend to compensate for the increased energy delivered by the supplements with reduced food intake, which results in a calorie substitution rather than supplementation. Thus, an effective supplement should stimulate muscle anabolism more efficiently than food or common protein supplements. We have shown that balanced amino acids stimulate muscle protein anabolism in the elderly, but it is unknown whether all amino acids are necessary to achieve this effect.\n\n\nOBJECTIVE\nWe assessed whether nonessential amino acids are required in a nutritional supplement to stimulate muscle protein anabolism in the elderly.\n\n\nDESIGN\nWe compared the response of muscle protein metabolism to either 18 g essential amino acids (EAA group: n = 6, age 69 +/- 2 y; +/- SD) or 40 g balanced amino acids (18 g essential amino acids + 22 g nonessential amino acids, BAA group; n = 8, age 71 +/- 2 y) given orally in small boluses every 10 min for 3 h to healthy elderly volunteers. Muscle protein metabolism was measured in the basal state and during amino acid administration via L-[ring-(2)H(5)]phenylalanine infusion, femoral arterial and venous catheterization, and muscle biopsies.\n\n\nRESULTS\nPhenylalanine net balance (in nmol x min(-1). 100 mL leg volume(-1)) increased from the basal state (P < 0.01), with no differences between groups (BAA: from -16 +/- 5 to 16 +/- 4; EAA: from -18 +/- 5 to 14 +/- 13) because of an increase (P < 0.01) in muscle protein synthesis and no change in breakdown.\n\n\nCONCLUSION\nEssential amino acids are primarily responsible for the amino acid-induced stimulation of muscle protein anabolism in the elderly.",
"title": ""
},
{
"docid": "09168164e47fd781e4abeca45fb76c35",
"text": "AUTOSAR is a standard for the development of software for embedded devices, primarily created for the automotive domain. It specifies a software architecture with more than 80 software modules that provide services to one or more software components. With the trend towards integrating safety-relevant systems into embedded devices, conformance with standards such as ISO 26262 [ISO11] or ISO/IEC 61508 [IEC10] becomes increasingly important. This article presents an approach to providing freedom from interference between software components by using the MPU available on many modern microcontrollers. Each software component gets its own dedicated memory area, a so-called memory partition. This concept is well known in other industries like the aerospace industry, where the IMA architecture is now well established. The memory partitioning mechanism is implemented by a microkernel, which integrates seamlessly into the architecture specified by AUTOSAR. The development has been performed as SEooC as described in ISO 26262, which is a new development approach. We describe the procedure for developing an SEooC. AUTOSAR: AUTomotive Open System ARchitecture, see [ASR12]. MPU: Memory Protection Unit. 3 IMA: Integrated Modular Avionics, see [RTCA11]. 4 SEooC: Safety Element out of Context, see [ISO11].",
"title": ""
}
] | scidocsrr |
c857af66e1ebadea18b3b07de5b0400a | A Parallel Method for Earth Mover's Distance | [
{
"docid": "872a79a47e6a4d83e7440ea5e7126dee",
"text": "We propose simple and extremely efficient methods for solving the Basis Pursuit problem min{‖u‖1 : Au = f, u ∈ R}, which is used in compressed sensing. Our methods are based on Bregman iterative regularization and they give a very accurate solution after solving only a very small number of instances of the unconstrained problem min u∈Rn μ‖u‖1 + 1 2 ‖Au− f‖2, for given matrix A and vector fk. We show analytically that this iterative approach yields exact solutions in a finite number of steps, and present numerical results that demonstrate that as few as two to six iterations are sufficient in most cases. Our approach is especially useful for many compressed sensing applications where matrix-vector operations involving A and A> can be computed by fast transforms. Utilizing a fast fixed-point continuation solver that is solely based on such operations for solving the above unconstrained sub-problem, we were able to solve huge instances of compressed sensing problems quickly on a standard PC.",
"title": ""
}
] | [
{
"docid": "ed530d8481bbfd81da4bdf5d611ad4a4",
"text": "Traumatic coma was produced in 45 monkeys by accelerating the head without impact in one of three directions. The duration of coma, degree of neurological impairment, and amount of diffuse axonal injury (DAI) in the brain were directly related to the amount of coronal head motion used. Coma of less than 15 minutes (concussion) occurred in 11 of 13 animals subjected to sagittal head motion, in 2 of 6 animals with oblique head motion, and in 2 of 26 animals with full lateral head motion. All 15 concussioned animals had good recovery, and none had DAI. Conversely, coma lasting more than 6 hours occurred in one of the sagittal or oblique injury groups but was present in 20 of the laterally injured animals, all of which were severely disabled afterward. All laterally injured animals had a degree of DAI similar to that found in severe human head injury. Coma lasting 16 minutes to 6 hours occurred in 2 of 13 of the sagittal group, 4 of 6 in the oblique group, and 4 of 26 in the lateral group, these animals had less neurological disability and less DAI than when coma lasted longer than 6 hours. These experimental findings duplicate the spectrum of traumatic coma seen in human beings and include axonal damage identical to that seen in sever head injury in humans. Since the amount of DAI was directly proportional to the severity of injury (duration of coma and quality of outcome), we conclude that axonal damage produced by coronal head acceleration is a major cause of prolonged traumatic coma and its sequelae.",
"title": ""
},
{
"docid": "84af7a01dc5486c800f1cf94832ac5a8",
"text": "A technique intended to increase the diversity order of bit-interleaved coded modulations (BICM) over non Gaussian channels is presented. It introduces simple modifications to the mapper and to the corresponding demapper. They consist of a constellation rotation coupled with signal space component interleaving. Iterative processing at the receiver side can provide additional improvement to the BICM performance. This method has been shown to perform well over fading channels with or without erasures. It has been adopted for the 4-, 16-, 64- and 256-QAM constellations considered in the DVB-T2 standard. Resulting gains can vary from 0.2 dB to several dBs depending on the order of the constellation, the coding rate and the channel model.",
"title": ""
},
{
"docid": "9d45323cd4550075d4c2569065ae583c",
"text": "Research on Offline Handwritten Signature Verification explored a large variety of handcrafted feature extractors, ranging from graphology, texture descriptors to interest points. In spite of advancements in the last decades, performance of such systems is still far from optimal when we test the systems against skilled forgeries - signature forgeries that target a particular individual. In previous research, we proposed a formulation of the problem to learn features from data (signature images) in a Writer-Independent format, using Deep Convolutional Neural Networks (CNNs), seeking to improve performance on the task. In this research, we push further the performance of such method, exploring a range of architectures, and obtaining a large improvement in state-of-the-art performance on the GPDS dataset, the largest publicly available dataset on the task. In the GPDS-160 dataset, we obtained an Equal Error Rate of 2.74%, compared to 6.97% in the best result published in literature (that used a combination of multiple classifiers). We also present a visual analysis of the feature space learned by the model, and an analysis of the errors made by the classifier. Our analysis shows that the model is very effective in separating signatures that have a different global appearance, while being particularly vulnerable to forgeries that very closely resemble genuine signatures, even if their line quality is bad, which is the case of slowly-traced forgeries.",
"title": ""
},
{
"docid": "17ba29c670e744d6e4f9e93ceb109410",
"text": "This paper presents a novel online video recommendation system called VideoReach, which alleviates users' efforts on finding the most relevant videos according to current viewings without a sufficient collection of user profiles as required in traditional recommenders. In this system, video recommendation is formulated as finding a list of relevant videos in terms of multimodal relevance (i.e. textual, visual, and aural relevance) and user click-through. Since different videos have different intra-weights of relevance within an individual modality and inter-weights among different modalities, we adopt relevance feedback to automatically find optimal weights by user click-though, as well as an attention fusion function to fuse multimodal relevance. We use 20 clips as the representative test videos, which are searched by top 10 queries from more than 13k online videos, and report superior performance compared with an existing video site.",
"title": ""
},
{
"docid": "e96c9bdd3f5e9710f7264cbbe02738a7",
"text": "25 years ago, Lenstra, Lenstra and Lovász presented their c el brated LLL lattice reduction algorithm. Among the various applicatio ns of the LLL algorithm is a method due to Coppersmith for finding small roots of polyn mial equations. We give a survey of the applications of this root finding metho d t the problem of inverting the RSA function and the factorization problem. A s we will see, most of the results are of a dual nature, they can either be interpret ed as cryptanalytic results or as hardness/security results.",
"title": ""
},
{
"docid": "640f9ca0bec934786b49f7217e65780b",
"text": "Social Networking has become today’s lifestyle and anyone can easily receive information about everyone in the world. It is very useful if a personal identity can be obtained from the mobile device and also connected to social networking. Therefore, we proposed a face recognition system on mobile devices by combining cloud computing services. Our system is designed in the form of an application developed on Android mobile devices which utilized the Face.com API as an image data processor for cloud computing services. We also applied the Augmented Reality as an information viewer to the users. The result of testing shows that the system is able to recognize face samples with the average percentage of 85% with the total computation time for the face recognition system reached 7.45 seconds, and the average augmented reality translation time is 1.03 seconds to get someone’s information.",
"title": ""
},
{
"docid": "934bdd758626ec37241cffba8e2cbeb9",
"text": "The combination of GPS/INS provides an ideal navigation system of full capability of continuously outputting position, velocity, and attitude of the host platform. However, the accuracy of INS degrades with time when GPS signals are blocked in environments such as tunnels, dense urban canyons and indoors. To dampen down the error growth, the INS sensor errors should be properly estimated and compensated before the inertial data are involved in the navigation computation. Therefore appropriate modelling of the INS sensor errors is a necessity. Allan Variance (AV) is a simple and efficient method for verifying and modelling these errors by representing the root mean square (RMS) random drift error as a function of averaging time. The AV can be used to determine the characteristics of different random processes. This paper applies the AV to analyse and model different types of random errors residing in the measurements of MEMS inertial sensors. The derived error model will be further applied to a low-cost GPS/MEMS-INS system once the correctness of the model is verified. The paper gives the detail of the AV analysis as well as presents the test results.",
"title": ""
},
{
"docid": "f670bd1ad43f256d5f02039ab200e1e8",
"text": "This article addresses the performance of distributed database systems. Specifically, we present an algorithm for dynamic replication of an object in distributed systems. The algorithm is adaptive in the sence that it changes the replication scheme of the object i.e., the set of processors at which the object inreplicated) as changes occur in the read-write patern of the object (i.e., the number of reads and writes issued by each processor). The algorithm continuously moves the replication scheme towards an optimal one. We show that the algorithm can be combined with the concurrency control and recovery mechanisms of ta distributed database management system. The performance of the algorithm is analyzed theoretically and experimentally. On the way we provide a lower bound on the performance of any dynamic replication algorith.",
"title": ""
},
{
"docid": "45b90a55678a022f6c3f128d0dc7d1bf",
"text": "Finding community structures in online social networks is an important methodology for understanding the internal organization of users and actions. Most previous studies have focused on structural properties to detect communities. They do not analyze the information gathered from the posting activities of members of social networks, nor do they consider overlapping communities. To tackle these two drawbacks, a new overlapping community detection method involving social activities and semantic analysis is proposed. This work applies a fuzzy membership to detect overlapping communities with different extent and run semantic analysis to include information contained in posts. The available resource description format contributes to research in social networks. Based on this new understanding of social networks, this approach can be adopted for large online social networks and for social portals, such as forums, that are not based on network topology. The efficiency and feasibility of this method is verified by the available experimental analysis. The results obtained by the tests on real networks indicate that the proposed approach can be effective in discovering labelled and overlapping communities with a high amount of modularity. This approach is fast enough to process very large and dense social networks. 6",
"title": ""
},
{
"docid": "b7521521277f944a9532dc4435a2bda7",
"text": "The NDN project investigates Jacobson's proposed evolution from today's host-centric network architecture (IP) to a data-centric network architecture (NDN). This conceptually simple shift has far-reaching implications in how we design, develop, deploy and use networks and applications. The NDN design and development has attracted significant attention from the networking community. To facilitate broader participation in addressing NDN research and development challenges, this tutorial will describe the vision of this new architecture and its basic components and operations.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "e7686824a9449bf793554fcf78b66c0e",
"text": "In this paper, tension propagation analysis of a newly designed multi-DOF robotic platform for single-port access surgery (SPS) is presented. The analysis is based on instantaneous kinematics of the proposed 6-DOF surgical instrument, and provides the decision criteria for estimating the payload of a surgical instrument according to its pose changes and specifications of a driving-wire. Also, the wire-tension and the number of reduction ratio to manage such a payload can be estimated, quantitatively. The analysis begins with derivation of the power transmission efficiency through wire-interfaces from each instrument joint to an actuator. Based on the energy conservation law and the capstan equation, we modeled the degradation of power transmission efficiency due to 1) the reducer called wire-reduction mechanism, 2) bending of proximal instrument joints, and 3) bending of hyper-redundant guide tube. Based on the analysis, the tension of driving-wires was computed according to various manipulation poses and loading conditions. In our experiment, a newly designed surgical instrument successfully managed the external load of 1kgf, which was applied to the end effector of a surgical manipulator.",
"title": ""
},
{
"docid": "c78ebe9d42163142379557068b652a9c",
"text": "A tumor is a mass of tissue that's formed by an accumulation of abnormal cells. Normally, the cells in your body age, die, and are replaced by new cells. With cancer and other tumors, something disrupts this cycle. Tumor cells grow, even though the body does not need them, and unlike normal old cells, they don't die. As this process goes on, the tumor continues to grow as more and more cells are added to the mass. Image processing is an active research area in which medical image processing is a highly challenging field. Brain tumor analysis is done by doctors but its grading gives different conclusions which may vary from one doctor to another. In this project, it provides a foundation of segmentation and edge detection, as the first step towards brain tumor grading. Current segmentation approaches are reviewed with an emphasis placed on revealing the advantages and disadvantages of these methods for medical imaging applications. There are dissimilar types of algorithm were developed for brain tumor detection. Comparing to the other algorithms the performance of fuzzy c-means plays a major role. The patient's stage is determined by this process, whether it can be cured with medicine or not. Also we study difficulty to detect Mild traumatic brain injury (mTBI) the current tools are qualitative, which can lead to poor diagnosis and treatment and to overcome these difficulties, an algorithm is proposed that takes advantage of subject information and texture information from MR images. A contextual model is developed to simulate the progression of the disease using multiple inputs, such as the time post injury and the location of injury. Textural features are used along with feature selection for a single MR modality.",
"title": ""
},
{
"docid": "9530749d15f1f3493f920b84e6e8cebd",
"text": "The view that humans comprise only two types of beings, women and men, a framework that is sometimes referred to as the \"gender binary,\" played a profound role in shaping the history of psychological science. In recent years, serious challenges to the gender binary have arisen from both academic research and social activism. This review describes 5 sets of empirical findings, spanning multiple disciplines, that fundamentally undermine the gender binary. These sources of evidence include neuroscience findings that refute sexual dimorphism of the human brain; behavioral neuroendocrinology findings that challenge the notion of genetically fixed, nonoverlapping, sexually dimorphic hormonal systems; psychological findings that highlight the similarities between men and women; psychological research on transgender and nonbinary individuals' identities and experiences; and developmental research suggesting that the tendency to view gender/sex as a meaningful, binary category is culturally determined and malleable. Costs associated with reliance on the gender binary and recommendations for future research, as well as clinical practice, are outlined. (PsycINFO Database Record",
"title": ""
},
{
"docid": "8c679f94e31dc89787ccff8e79e624b5",
"text": "This paper presents a radar sensor package specifically developed for wide-coverage sounding and imaging of polar ice sheets from a variety of aircraft. Our instruments address the need for a reliable remote sensing solution well-suited for extensive surveys at low and high altitudes and capable of making measurements with fine spatial and temporal resolution. The sensor package that we are presenting consists of four primary instruments and ancillary systems with all the associated antennas integrated into the aircraft to maintain aerodynamic performance. The instruments operate simultaneously over different frequency bands within the 160 MHz-18 GHz range. The sensor package has allowed us to sound the most challenging areas of the polar ice sheets, ice sheet margins, and outlet glaciers; to map near-surface internal layers with fine resolution; and to detect the snow-air and snow-ice interfaces of snow cover over sea ice to generate estimates of snow thickness. In this paper, we provide a succinct description of each radar and associated antenna structures and present sample results to document their performance. We also give a brief overview of our field measurement programs and demonstrate the unique capability of the sensor package to perform multifrequency coincidental measurements from a single airborne platform. Finally, we illustrate the relevance of using multispectral radar data as a tool to characterize the entire ice column and to reveal important subglacial features.",
"title": ""
},
{
"docid": "99cb4f69fb7b6ff16c9bffacd7a42f4d",
"text": "Single cell segmentation is critical and challenging in live cell imaging data analysis. Traditional image processing methods and tools require time-consuming and labor-intensive efforts of manually fine-tuning parameters. Slight variations of image setting may lead to poor segmentation results. Recent development of deep convolutional neural networks(CNN) provides a potentially efficient, general and robust method for segmentation. Most existing CNN-based methods treat segmentation as a pixel-wise classification problem. However, three unique problems of cell images adversely affect segmentation accuracy: lack of established training dataset, few pixels on cell boundaries, and ubiquitous blurry features. The problem becomes especially severe with densely packed cells, where a pixel-wise classification method tends to identify two neighboring cells with blurry shared boundary as one cell, leading to poor cell count accuracy and affecting subsequent analysis. Here we developed a different learning strategy that combines strengths of CNN and watershed algorithm. The method first trains a CNN to learn Euclidean distance transform of binary masks corresponding to the input images. Then another CNN is trained to detect individual cells in the Euclidean distance transform. In the third step, the watershed algorithm takes the outputs from the previous steps as inputs and performs the segmentation. We tested the combined method and various forms of the pixel-wise classification algorithm on segmenting fluorescence and transmitted light images. The new method achieves similar pixel accuracy but significant higher cell count accuracy than pixel-wise classification methods do, and the advantage is most obvious when applying on noisy images of densely packed cells.",
"title": ""
},
{
"docid": "ef9650746ac9ab803b2a3bbdd5493fee",
"text": "This paper addresses the problem of establishing correspondences between two sets of visual features using higher order constraints instead of the unary or pairwise ones used in classical methods. Concretely, the corresponding hypergraph matching problem is formulated as the maximization of a multilinear objective function over all permutations of the features. This function is defined by a tensor representing the affinity between feature tuples. It is maximized using a generalization of spectral techniques where a relaxed problem is first solved by a multidimensional power method and the solution is then projected onto the closest assignment matrix. The proposed approach has been implemented, and it is compared to state-of-the-art algorithms on both synthetic and real data.",
"title": ""
},
{
"docid": "ab572c22a75656c19e50b311eb4985ec",
"text": "With the increasingly complex electromagnetic environment of communication, as well as the gradually increased radar signal types, how to effectively identify the types of radar signals at low SNR becomes a hot topic. A radar signal recognition algorithm based on entropy features, which describes the distribution characteristics for different types of radar signals by extracting Shannon entropy, Singular spectrum Shannon entropy and Singular spectrum index entropy features, was proposed to achieve the purpose of signal identification. Simulation results show that, the algorithm based on entropies has good anti-noise performance, and it can still describe the characteristics of signals well even at low SNR, which can achieve the purpose of identification and classification for different radar signals.",
"title": ""
},
{
"docid": "1de46f2eee8db2fad444faa6fbba4d1c",
"text": "Hyunsook Yoon Dongguk University, Korea This paper reports on a qualitative study that investigated the changes in students’ writing process associated with corpus use over an extended period of time. The primary purpose of this study was to examine how corpus technology affects students’ development of competence as second language (L2) writers. The research was mainly based on case studies with six L2 writers in an English for Academic Purposes writing course. The findings revealed that corpus use not only had an immediate effect by helping the students solve immediate writing/language problems, but also promoted their perceptions of lexicogrammar and language awareness. Once the corpus approach was introduced to the writing process, the students assumed more responsibility for their writing and became more independent writers, and their confidence in writing increased. This study identified a wide variety of individual experiences and learning contexts that were involved in deciding the levels of the students’ willingness and success in using corpora. This paper also discusses the distinctive contributions of general corpora to English for Academic Purposes and the importance of lexical and grammatical aspects in L2 writing pedagogy.",
"title": ""
},
{
"docid": "cb2f5ac9292df37860b02313293d2f04",
"text": "How can web services that depend on user generated content discern fake social engagement activities by spammers from legitimate ones? In this paper, we focus on the social site of YouTube and the problem of identifying bad actors posting inorganic contents and inflating the count of social engagement metrics. We propose an effective method, Leas (Local Expansion at Scale), and show how the fake engagement activities on YouTube can be tracked over time by analyzing the temporal graph based on the engagement behavior pattern between users and YouTube videos. With the domain knowledge of spammer seeds, we formulate and tackle the problem in a semi-supervised manner — with the objective of searching for individuals that have similar pattern of behavior as the known seeds — based on a graph diffusion process via local spectral subspace. We offer a fast, scalable MapReduce deployment adapted from the localized spectral clustering algorithm. We demonstrate the effectiveness of our deployment at Google by achieving a manual review accuracy of 98% on YouTube Comments graph in practice. Comparing with the state-of-the-art algorithm CopyCatch, Leas achieves 10 times faster running time on average. Leas is now actively in use at Google, searching for daily deceptive practices on YouTube’s engagement graph spanning over a",
"title": ""
}
] | scidocsrr |
9705b47395ef0884d8739af8b47e69b1 | Tell me a story--a conceptual exploration of storytelling in healthcare education. | [
{
"docid": "4ade01af5fd850722fd690a5d8f938f4",
"text": "IT may appear blasphemous to paraphrase the title of the classic article of Vannevar Bush but it may be a mitigating factor that it is done to pay tribute to another legendary scientist, Eugene Garfield. His ideas of citationbased searching, resource discovery and quantitative evaluation of publications serve as the basis for many of the most innovative and powerful online information services these days. Bush 60 years ago contemplated – among many other things – an information workstation, the Memex. A researcher would use it to annotate, organize, link, store, and retrieve microfilmed documents. He is acknowledged today as the forefather of the hypertext system, which in turn, is the backbone of the Internet. He outlined his thoughts in an essay published in the Atlantic Monthly. Maybe because of using a nonscientific outlet the paper was hardly quoted and cited in scholarly and professional journals for 30 years. Understandably, the Atlantic Monthly was not covered by the few, specialized abstracting and indexing databases of scientific literature. Such general interest magazines are not source journals in either the Web of Science (WoS), or Scopus databases. However, records for items which cite the ‘As We May Think’ article of Bush (also known as the ‘Memex’ paper) are listed with appropriate bibliographic information. Google Scholar (G-S) lists the records for the Memex paper and many of its citing papers. It is a rather confusing list with many dead links or otherwise dysfunctional links, and a hodge-podge of information related to Bush. It is quite telling that (based on data from the 1945– 2005 edition of WoS) the article of Bush gathered almost 90% of all its 712 citations in WoS between 1975 and 2005, peaking in 1999 with 45 citations in that year alone. Undoubtedly, this proportion is likely to be distorted because far fewer source articles from far fewer journals were processed by the Institute for Scientific Information for 1945–1974 than for 1975–2005. Scopus identifies 267 papers citing the Bush article. The main reason for the discrepancy is that Scopus includes cited references only from 1995 onward, while WoS does so from 1945. Bush’s impatience with the limitations imposed by the traditional classification and indexing tools and practices of the time is palpable. It is worth to quote it as a reminder. Interestingly, he brings up the terms ‘web of trails’ and ‘association of thoughts’ which establishes the link between him and Garfield.",
"title": ""
}
] | [
{
"docid": "4da3f01ac76da39be45ab39c1e46bcf0",
"text": "Depth cameras are low-cost, plug & play solution to generate point cloud. 3D depth camera yields depth images which do not convey the actual distance. A 3D camera driver does not support raw depth data output, these are usually filtered and calibrated as per the sensor specifications and hence a method is required to map every pixel back to its original point in 3D space. This paper demonstrates the method to triangulate a pixel from the 2D depth image back to its actual position in 3D space. Further this method illustrates the independence of this mapping operation, which facilitates parallel computing. Triangulation method and ratios between the pixel positions and camera parameters are used to estimate the true position in 3D space. The algorithm performance can be increased by 70% by the usage of TPL libraries. This performance differs from processor to processor",
"title": ""
},
{
"docid": "9d5d667c6d621bd90a688c993065f5df",
"text": "Creative individuals increasingly rely on online crowdfunding platforms to crowdsource funding for new ventures. For novice crowdfunding project creators, however, there are few resources to turn to for assistance in the planning of crowdfunding projects. We are building a tool for novice project creators to get feedback on their project designs. One component of this tool is a comparison to existing projects. As such, we have applied a variety of machine learning classifiers to learn the concept of a successful online crowdfunding project at the time of project launch. Currently our classifier can predict with roughly 68% accuracy, whether a project will be successful or not. The classification results will eventually power a prediction segment of the proposed feedback tool. Future work involves turning the results of the machine learning algorithms into human-readable content and integrating this content into the feedback tool.",
"title": ""
},
{
"docid": "80e26e5bcbadf034896fcd206cd16099",
"text": "This paper focuses on localization that serves as a smart service. Among the primary services provided by Internet of Things (IoT), localization offers automatically discoverable services. Knowledge relating to an object's position, especially when combined with other information collected from sensors and shared with other smart objects, allows us to develop intelligent systems to fast respond to changes in an environment. Today, wireless sensor networks (WSNs) have become a critical technology for various kinds of smart environments through which different kinds of devices can connect with each other coinciding with the principles of IoT. Among various WSN techniques designed for positioning an unknown node, the trilateration approach based on the received signal strength is the most suitable for localization due to its implementation simplicity and low hardware requirement. However, its performance is susceptible to external factors, such as the number of people present in a room, the shape and dimension of an environment, and the positions of objects and devices. To improve the localization accuracy of trilateration, we develop a novel distributed localization algorithm with a dynamic-circle-expanding mechanism capable of more accurately establishing the geometric relationship between an unknown node and reference nodes. The results of real world experiments and computer simulation show that the average error of position estimation is 0.67 and 0.225 m in the best cases, respectively. This suggests that the proposed localization algorithm outperforms other existing methods.",
"title": ""
},
{
"docid": "d0ad2b6a36dce62f650323cb5dd40bc9",
"text": "If two hospitals are providing identical services in all respects, except for the brand name, why are customers willing to pay more for one hospital than the other? That is, the brand name is not just a name, but a name that contains value (brand equity). Brand equity is the value that the brand name endows to the product, such that consumers are willing to pay a premium price for products with the particular brand name. Accordingly, a company needs to manage its brand carefully so that its brand equity does not depreciate. Although measuring brand equity is important, managers have no brand equity index that is psychometrically robust and parsimonious enough for practice. Indeed, index construction is quite different from conventional scale development. Moreover, researchers might still be unaware of the potential appropriateness of formative indicators for operationalizing particular constructs. Toward this end, drawing on the brand equity literature and following the index construction procedure, this study creates a brand equity index for a hospital. The results reveal a parsimonious five-indicator brand equity index that can adequately capture the full domain of brand equity. This study also illustrates the differences between index construction and scale development.",
"title": ""
},
{
"docid": "9a522060a52474850ff328cef5ea4121",
"text": "Mild cognitive impairment (MCI) is the prodromal stage of Alzheimer's disease (AD). Identifying MCI subjects who are at high risk of converting to AD is crucial for effective treatments. In this study, a deep learning approach based on convolutional neural networks (CNN), is designed to accurately predict MCI-to-AD conversion with magnetic resonance imaging (MRI) data. First, MRI images are prepared with age-correction and other processing. Second, local patches, which are assembled into 2.5 dimensions, are extracted from these images. Then, the patches from AD and normal controls (NC) are used to train a CNN to identify deep learning features of MCI subjects. After that, structural brain image features are mined with FreeSurfer to assist CNN. Finally, both types of features are fed into an extreme learning machine classifier to predict the AD conversion. The proposed approach is validated on the standardized MRI datasets from the Alzheimer's Disease Neuroimaging Initiative (ADNI) project. This approach achieves an accuracy of 79.9% and an area under the receiver operating characteristic curve (AUC) of 86.1% in leave-one-out cross validations. Compared with other state-of-the-art methods, the proposed one outperforms others with higher accuracy and AUC, while keeping a good balance between the sensitivity and specificity. Results demonstrate great potentials of the proposed CNN-based approach for the prediction of MCI-to-AD conversion with solely MRI data. Age correction and assisted structural brain image features can boost the prediction performance of CNN.",
"title": ""
},
{
"docid": "44bee5e310c91c778e874d347c64bc18",
"text": "In this paper, we consider a deterministic global optimization algorithm for solving a general linear sum of ratios (LFP). First, an equivalent optimization problem (LFP1) of LFP is derived by exploiting the characteristics of the constraints of LFP. By a new linearizing method the linearization relaxation function of the objective function of LFP1 is derived, then the linear relaxation programming (RLP) of LFP1 is constructed and the proposed branch and bound algorithm is convergent to the global minimum through the successive refinement of the linear relaxation of the feasible region of the objection function and the solutions of a series of RLP. And finally the numerical experiments are given to illustrate the feasibility of the proposed algorithm. 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "7c3c06529ae52055de668cbefce39c5f",
"text": "Context-aware recommendation algorithms focus on refining recommendations by considering additional information, available to the system. This topic has gained a lot of attention recently. Among others, several factorization methods were proposed to solve the problem, although most of them assume explicit feedback which strongly limits their real-world applicability. While these algorithms apply various loss functions and optimization strategies, the preference modeling under context is less explored due to the lack of tools allowing for easy experimentation with various models. As context dimensions are introduced beyond users and items, the space of possible preference models and the importance of proper modeling largely increases. In this paper we propose a general factorization framework (GFF), a single flexible algorithm that takes the preference model as an input and computes latent feature matrices for the input dimensions. GFF allows us to easily experiment with various linear models on any context-aware recommendation task, be it explicit or implicit feedback based. The scaling properties makes it usable under real life circumstances as well. We demonstrate the framework’s potential by exploring various preference models on a 4-dimensional context-aware problem with contexts that are available for almost any real life datasets. We show in our experiments—performed on five real life, implicit feedback datasets—that proper preference modelling significantly increases recommendation accuracy, and previously unused models outperform the traditional ones. Novel models in GFF also outperform state-of-the-art factorization algorithms. We also extend the method to be fully compliant to the Multidimensional Dataspace Model, one of the most extensive data models of context-enriched data. Extended GFF allows the seamless incorporation of information into the factorization framework beyond context, like item metadata, social networks, session information, etc. Preliminary experiments show great potential of this capability.",
"title": ""
},
{
"docid": "c59cae78ce3482450776755b9d9d5199",
"text": "Traditional information systems return answers after a user submits a complete query. Users often feel “left in the dark” when they have limited knowledge about the underlying data and have to use a try-and-see approach for finding information. A recent trend of supporting autocomplete in these systems is a first step toward solving this problem. In this paper, we study a new information-access paradigm, called “type-ahead search” in which the system searches the underlying data “on the fly” as the user types in query keywords. It extends autocomplete interfaces by allowing keywords to appear at different places in the underlying data. This framework allows users to explore data as they type, even in the presence of minor errors. We study research challenges in this framework for large amounts of data. Since each keystroke of the user could invoke a query on the backend, we need efficient algorithms to process each query within milliseconds. We develop various incremental-search algorithms for both single-keyword queries and multi-keyword queries, using previously computed and cached results in order to achieve a high interactive speed. We develop novel techniques to support fuzzy search by allowing mismatches between query keywords and answers. We have deployed several real prototypes using these techniques. One of them has been deployed to support type-ahead search on the UC Irvine people directory, which has been used regularly and well received by users due to its friendly interface and high efficiency.",
"title": ""
},
{
"docid": "07a4f79dbe16be70877724b142013072",
"text": "Safety planning in the construction industry is generally done separately from the project execution planning. This separation creates difficulties for safety engineers to analyze what, when, why and where safety measures are needed for preventing accidents. Lack of information and integration of available data (safety plan, project schedule, 2D project drawings) during the planning stage often results in scheduling work activities with overlapping space needs that then can create hazardous conditions, for example, work above other crew. These space requirements are time dependent and often neglected due to the manual effort that is required to handle the data. Representation of project-specific activity space requirements in 4D models hardly happen along with schedule and work break-down structure. Even with full cooperation of all related stakeholders, current safety planning and execution still largely depends on manual observation and past experiences. The traditional manual observation is inefficient, error-prone, and the observed result can be easily effected by subjective judgments. This paper will demonstrate the development of an automated safety code checking tool for Building Information Modeling (BIM), work breakdown structure, and project schedules in conjunction with safety criteria to reduce the potential for accidents on construction projects. The automated safety compliance rule checker code builds on existing applications for building code compliance checking, structural analysis, and constructability analysis etc. and also the advances in 4D simulations for scheduling. Preliminary results demonstrate a computer-based automated tool can assist in safety planning and execution of projects on a day to day basis.",
"title": ""
},
{
"docid": "70374d2cbf730fab13c3e126359b59e8",
"text": "We define a new distance measure the resistor-average distance between two probability distributions that is closely related to the Kullback-Leibler distance. While the KullbackLeibler distance is asymmetric in the two distributions, the resistor-average distance is not. It arises from geometric considerations similar to those used to derive the Chernoff distance. Determining its relation to well-known distance measures reveals a new way to depict how commonly used distance measures relate to each other.",
"title": ""
},
{
"docid": "d3b0957b31f47620c0fa8e65a1cc086a",
"text": "In this paper, we propose series of algorithms for detecting change points in time-series data based on subspace identification, meaning a geometric approach for estimating linear state-space models behind time-series data. Our algorithms are derived from the principle that the subspace spanned by the columns of an observability matrix and the one spanned by the subsequences of time-series data are approximately equivalent. In this paper, we derive a batch-type algorithm applicable to ordinary time-series data, i.e. consisting of only output series, and then introduce the online version of the algorithm and the extension to be available with input-output time-series data. We illustrate the effectiveness of our algorithms with comparative experiments using some artificial and real datasets.",
"title": ""
},
{
"docid": "aa80e0ad489c03ec94a1835d6d4907a3",
"text": "Cloud computing is a term coined to a network that offers incredible processing power, a wide array of storage space and unbelievable speed of computation. Social media channels, corporate structures and individual consumers are all switching to the magnificent world of cloud computing. The flip side to this coin is that with cloud storage emerges the security issues of confidentiality, data integrity and data availability. Since the “cloud” is a mere collection of tangible super computers spread across the world, authentication and authorization for data access is more than a necessity. Our work attempts to overcome these security threats. The proposed methodology suggests the encryption of the files to be uploaded on the cloud. The integrity and confidentiality of the data uploaded by the user is ensured doubly by not only encrypting it but also providing access to the data only on successful authentication. KeywordsCloud computing, security, encryption, password based AES algorithm",
"title": ""
},
{
"docid": "2126c47fe320af2d908ec01a426419ce",
"text": "Stretching has long been used in many physical activities to increase range of motion (ROM) around a joint. Stretching also has other acute effects on the neuromuscular system. For instance, significant reductions in maximal voluntary strength, muscle power or evoked contractile properties have been recorded immediately after a single bout of static stretching, raising interest in other stretching modalities. Thus, the effects of dynamic stretching on subsequent muscular performance have been questioned. This review aimed to investigate performance and physiological alterations following dynamic stretching. There is a substantial amount of evidence pointing out the positive effects on ROM and subsequent performance (force, power, sprint and jump). The larger ROM would be mainly attributable to reduced stiffness of the muscle-tendon unit, while the improved muscular performance to temperature and potentiation-related mechanisms caused by the voluntary contraction associated with dynamic stretching. Therefore, if the goal of a warm-up is to increase joint ROM and to enhance muscle force and/or power, dynamic stretching seems to be a suitable alternative to static stretching. Nevertheless, numerous studies reporting no alteration or even performance impairment have highlighted possible mitigating factors (such as stretch duration, amplitude or velocity). Accordingly, ballistic stretching, a form of dynamic stretching with greater velocities, would be less beneficial than controlled dynamic stretching. Notwithstanding, the literature shows that inconsistent description of stretch procedures has been an important deterrent to reaching a clear consensus. In this review, we highlight the need for future studies reporting homogeneous, clearly described stretching protocols, and propose a clarified stretching terminology and methodology.",
"title": ""
},
{
"docid": "9f6da52c8ea3ba605ecbed71e020d31a",
"text": "With the exponential growth of information being transmitted as a result of various networks, the issues related to providing security to transmit information have considerably increased. Mathematical models were proposed to consolidate the data being transmitted and to protect the same from being tampered with. Work was carried out on the application of 1D and 2D cellular automata (CA) rules for data encryption and decryption in cryptography. A lot more work needs to be done to develop suitable algorithms and 3D CA rules for encryption and description of 3D chaotic information systems. Suitable coding for the algorithms are developed and the results are evaluated for the performance of the algorithms. Here 3D cellular automata encryption and decryption algorithms are used to provide security of data by arranging plain texts and images into layers of cellular automata by using the cellular automata neighbourhood system. This has resulted in highest order of security for transmitted data.",
"title": ""
},
{
"docid": "19067b3d0f951bad90c80688371532fc",
"text": "Research in Artificial Intelligence is breaking technology barriers every day. New algorithms and high performance computing are making things possible which we could only have imagined earlier. Though the enhancements in AI are making life easier for human beings day by day, there is constant fear that AI based systems will pose a threat to humanity. People in AI community have diverse set of opinions regarding the pros and cons of AI mimicking human behavior. Instead of worrying about AI advancements, we propose a novel idea of cognitive agents, including both human and machines, living together in a complex adaptive ecosystem, collaborating on human computation for producing essential social goods while promoting sustenance, survival and evolution of the agents’ life cycle. We highlight several research challenges and technology barriers in achieving this goal. We propose a governance mechanism around this ecosystem to ensure ethical behaviors of all cognitive agents. Along with a novel set of use-cases of Cogniculture , we discuss the road map ahead",
"title": ""
},
{
"docid": "7e4c00d8f17166cbfb3bdac8d5e5ad09",
"text": "Twitter is now used to distribute substantive content such as breaking news, increasing the importance of assessing the credibility of tweets. As users increasingly access tweets through search, they have less information on which to base credibility judgments as compared to consuming content from direct social network connections. We present survey results regarding users' perceptions of tweet credibility. We find a disparity between features users consider relevant to credibility assessment and those currently revealed by search engines. We then conducted two experiments in which we systematically manipulated several features of tweets to assess their impact on credibility ratings. We show that users are poor judges of truthfulness based on content alone, and instead are influenced by heuristics such as user name when making credibility assessments. Based on these findings, we discuss strategies tweet authors can use to enhance their credibility with readers (and strategies astute readers should be aware of!). We propose design improvements for displaying social search results so as to better convey credibility.",
"title": ""
},
{
"docid": "59323291555a82ef99013bd4510b3020",
"text": "This paper aims to classify and analyze recent as well as classic image registration techniques. Image registration is the process of super imposing images of the same scene taken at different times, location and by different sensors. It is a key enabling technology in medical image analysis for integrating and analyzing information from various modalities. Basically image registration finds temporal correspondences between the set of images and uses transformation model to infer features from these correspondences.The approaches for image registration can beclassified according to their nature vizarea-based and feature-based and dimensionalityvizspatial domain and frequency domain. The procedure of image registration by intensity based model, spatial domain transform, Rigid transform and Non rigid transform based on the above mentioned classification has been performed and the eminence of image is measured by the three quality parameters such as SNR, PSNR and MSE. The techniques have been implemented and inferred thatthe non-rigid transform exhibit higher perceptual quality and offer visually sharper image than other techniques.Problematic issues of image registration techniques and outlook for the future research are discussed. This work may be one of the comprehensive reference sources for the researchers involved in image registration.",
"title": ""
},
{
"docid": "68dc61e0c6b33729f08cdd73e8e86096",
"text": "Many important data analysis applications present with severely imbalanced datasets with respect to the target variable. A typical example is medical image analysis, where positive samples are scarce, while performance is commonly estimated against the correct detection of these positive examples. We approach this challenge by formulating the problem as anomaly detection with generative models. We train a generative model without supervision on the ‘negative’ (common) datapoints and use this model to estimate the likelihood of unseen data. A successful model allows us to detect the ‘positive’ case as low likelihood datapoints. In this position paper, we present the use of state-of-the-art deep generative models (GAN and VAE) for the estimation of a likelihood of the data. Our results show that on the one hand both GANs and VAEs are able to separate the ‘positive’ and ‘negative’ samples in the MNIST case. On the other hand, for the NLST case, neither GANs nor VAEs were able to capture the complexity of the data and discriminate anomalies at the level that this task requires. These results show that even though there are a number of successes presented in the literature for using generative models in similar applications, there remain further challenges for broad successful implementation.",
"title": ""
},
{
"docid": "a6a364819f397a8e28ac0b19480253cc",
"text": "News agencies and other news providers or consumers are confronted with the task of extracting events from news articles. This is done i) either to monitor and, hence, to be informed about events of specific kinds over time and/or ii) to react to events immediately. In the past, several promising approaches to extracting events from text have been proposed. Besides purely statistically-based approaches there are methods to represent events in a semantically-structured form, such as graphs containing actions (predicates), participants (entities), etc. However, it turns out to be very difficult to automatically determine whether an event is real or not. In this paper, we give an overview of approaches which proposed solutions for this research problem. We show that there is no gold standard dataset where real events are annotated in text documents in a fine-grained, semantically-enriched way. We present a methodology of creating such a dataset with the help of crowdsourcing and present preliminary results.",
"title": ""
},
{
"docid": "a6f08476ea81c50a36497bd65137ca16",
"text": "In this paper we tackle the inversion of large-scale dense matrices via conventional matrix factorizations (LU, Cholesky, LDL ) and the Gauss-Jordan method on hybrid platforms consisting of a multi-core CPU and a many-core graphics processor (GPU). Specifically, we introduce the different matrix inversion algorithms using a unified framework based on the notation from the FLAME project; we develop hybrid implementations for those matrix operations underlying the algorithms, alternative to those in existing libraries for singleGPU systems; and we perform an extensive experimental study on a platform equipped with state-of-the-art general-purpose architectures from Intel and a “Fermi” GPU from NVIDIA that exposes the efficiency of the different inversion approaches. Our study and experimental results show the simplicity and performance advantage of the GJE-based inversion methods, and the difficulties associated with the symmetric indefinite case.",
"title": ""
}
] | scidocsrr |
08d3c2023248fc0fa4a853f3fd55733b | Measurement Issues in Galvanic Intrabody Communication: Influence of Experimental Setup | [
{
"docid": "ff0b13d3841913de36104e37cc893b26",
"text": "Modeling of intrabody communication (IBC) entails the understanding of the interaction between electromagnetic fields and living tissues. At the same time, an accurate model can provide practical hints toward the deployment of an efficient and secure communication channel for body sensor networks. In the literature, two main IBC coupling techniques have been proposed: galvanic and capacitive coupling. Nevertheless, models that are able to emulate both coupling approaches have not been reported so far. In this paper, a simple model based on a distributed parameter structure with the flexibility to adapt to both galvanic and capacitive coupling has been proposed. In addition, experimental results for both coupling methods were acquired by means of two harmonized measurement setups. The model simulations have been subsequently compared with the experimental data, not only to show their validity but also to revise the practical frequency operation range for both techniques. Finally, the model, along with the experimental results, has also allowed us to provide some practical rules to optimally tackle IBC design.",
"title": ""
},
{
"docid": "df1124c8b5b3295f09da347d19f152f6",
"text": "The signal transmission mechanism on the surface of the human body is studied for the application to body channel communication (BCC). From Maxwell's equations, the complete equation of electrical field on the human body is developed to obtain a general BCC model. The mechanism of BCC consists of three parts according to the operating frequencies and channel distances: the quasi-static near-field coupling part, the reactive induction-field radiation part, and the surface wave far-field propagation part. The general BCC model by means of the near-field and far-field approximation is developed to be valid in the frequency range from 100 kHz to 100 MHz and distance up to 1.3 m based on the measurements of the body channel characteristics. Finally, path loss characteristics of BCC are formulated for the design of BCC systems and many potential applications.",
"title": ""
},
{
"docid": "704611db1aea020103b093a2156cd94d",
"text": "With the growing number of wearable devices and applications, there is an increasing need for a flexible body channel communication (BCC) system that supports both scalable data rate and low power operation. In this paper, a highly flexible frequency-selective digital transmission (FSDT) transmitter that supports both data scalability and low power operation with the aid of two novel implementation methods is presented. In an FSDT system, data rate is limited by the number of Walsh spreading codes available for use in the optimal body channel band of 40-80 MHz. The first method overcomes this limitation by applying multi-level baseband coding scheme to a carrierless FSDT system to enhance the bandwidth efficiency and to support a data rate of 60 Mb/s within a 40-MHz bandwidth. The proposed multi-level coded FSDT system achieves six times higher data rate as compared to other BCC systems. The second novel implementation method lies in the use of harmonic frequencies of a Walsh encoded FSDT system that allows the BCC system to operate in the optimal channel bandwidth between 40-80 MHz with half the clock frequency. Halving the clock frequency results in a power consumption reduction of 32%. The transmitter was fabricated in a 65-nm CMOS process. It occupies a core area of 0.24 × 0.3 mm 2. When operating under a 60-Mb/s data-rate mode, the transmitter consumes 1.85 mW and it consumes only 1.26 mW when operating under a 5-Mb/s data-rate mode.",
"title": ""
}
] | [
{
"docid": "0da9197d2f6839d01560b46cbb1fbc8d",
"text": "Estimating the traversability of rough terrain is a critical task for an outdoor mobile robot. While classifying structured environment can be learned from large number of training data, it is an extremely difficult task to learn and estimate traversability of unstructured rough terrain. Moreover, in many cases information from a single sensor may not be sufficient for estimating traversability reliably in the absence of artificial landmarks such as lane markings or curbs. Our approach estimates traversability of the terrain and build a 2D probabilistic grid map online using 3D-LIDAR and camera. The combination of LIDAR and camera is favoured in many robotic application because they provide complementary information. Our approach assumes the data captured by these two sensors are independent and build separate traversability maps, each with information captured from one sensor. Traversability estimation with vision sensor autonomously collects training data and update classifier without human intervention as the vehicle traverse the terrain. Traversability estimation with 3D-LIDAR measures the slopes of the ground to predict the traversability. Two independently built probabilistic maps are fused using Bayes' rule to improve the detection performance. This is in contrast with other methods in which each sensor performs different tasks. We have implemented the algorithm on a UGV(Unmanned Ground Vehicle) and tested our approach on a rough terrain to evaluate the detection performance.",
"title": ""
},
{
"docid": "cb95831a960ae9ec2d1ea4279cfa6ac2",
"text": "In vivo fluorescence imaging suffers from suboptimal signal-to-noise ratio and shallow detection depth, which is caused by the strong tissue autofluorescence under constant external excitation and the scattering and absorption of short-wavelength light in tissues. Here we address these limitations by using a novel type of optical nanoprobes, photostimulable LiGa5O8:Cr(3+) near-infrared (NIR) persistent luminescence nanoparticles, which, with very-long-lasting NIR persistent luminescence and unique photo-stimulated persistent luminescence (PSPL) capability, allow optical imaging to be performed in an excitation-free and hence, autofluorescence-free manner. LiGa5O8:Cr(3+) nanoparticles pre-charged by ultraviolet light can be repeatedly (>20 times) stimulated in vivo, even in deep tissues, by short-illumination (~15 seconds) with a white light-emitting-diode flashlight, giving rise to multiple NIR PSPL that expands the tracking window from several hours to more than 10 days. Our studies reveal promising potential of these nanoprobes in cell tracking and tumor targeting, exhibiting exceptional sensitivity and penetration that far exceed those afforded by conventional fluorescence imaging.",
"title": ""
},
{
"docid": "faa8bb95a4b05bed78dbdfaec1cd147c",
"text": "This paper describes the SimBow system submitted at SemEval2017-Task3, for the question-question similarity subtask B. The proposed approach is a supervised combination of different unsupervised textual similarities. These textual similarities rely on the introduction of a relation matrix in the classical cosine similarity between bag-of-words, so as to get a softcosine that takes into account relations between words. According to the type of relation matrix embedded in the soft-cosine, semantic or lexical relations can be considered. Our system ranked first among the official submissions of subtask B.",
"title": ""
},
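The soft-cosine measure described in the abstract above, a cosine between bag-of-words vectors with a term-relation matrix inserted, can be sketched in a few lines. This is an illustrative sketch rather than the SimBow system itself; the relation matrix M and the toy vectors are invented for the example, and in practice M would be derived from word embeddings or lexical resources.

```python
import numpy as np

def soft_cosine(a, b, M):
    """Soft cosine between bag-of-words vectors a and b.

    M is a symmetric term-by-term relation matrix; the identity matrix
    recovers the ordinary cosine, while off-diagonal entries encode
    lexical or semantic relatedness between different terms.
    """
    num = a @ M @ b
    den = np.sqrt(a @ M @ a) * np.sqrt(b @ M @ b)
    return float(num / den) if den else 0.0

# Toy 3-term vocabulary in which terms 0 and 1 are related (0.6).
M = np.array([[1.0, 0.6, 0.0],
              [0.6, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
q1 = np.array([1.0, 0.0, 1.0])   # question using term 0
q2 = np.array([0.0, 1.0, 1.0])   # question using the related term 1
print(soft_cosine(q1, q2, M))    # 0.8, versus a plain cosine of 0.5
```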
{
"docid": "e2c2d56a92aa66453804c552ad0892b9",
"text": "By analyzing the relationship of S-parameter between two-port differential and four-port single-ended networks, a method is found for measuring the S-parameter of a differential amplifier on wafer by using a normal two-port vector network analyzer. With this method, it should not especially purchase a four-port vector network analyzer. Furthermore, the method was also suitable for measuring S-parameter of any multi-port circuit by using two-ports measurement set.",
"title": ""
},
{
"docid": "75ed78f9a59ec978432f16fd4407df60",
"text": "The transition from user requirements to UML diagrams is a difficult task for the designer espec ially when he handles large texts expressing these needs. Modelin g class Diagram must be performed frequently, even during t he development of a simple application. This paper prop oses an approach to facilitate class diagram extraction from textual requirements using NLP techniques and domain ontolog y. Keywords-component; Class Diagram, Natural Language Processing, GATE, Domain ontology, requirements.",
"title": ""
},
{
"docid": "147b8f02031ba9bc8788600dc48301c9",
"text": "This paper gives an overview on different research activities on electronically steerable antennas at Ka-band within the framework of the SANTANA project. In addition, it gives an outlook on future objectives, namely the perspective of testing SANTANA technologies with the projected German research satellite “Heinrich Hertz”.",
"title": ""
},
{
"docid": "af983aa7ac103dd41dfd914af452758f",
"text": "The fast-growing nature of instant messaging applications usage on Android mobile devices brought about a proportional increase on the number of cyber-attack vectors that could be perpetrated on them. Android mobile phones store significant amount of information in the various memory partitions when Instant Messaging (IM) applications (WhatsApp, Skype, and Facebook) are executed on them. As a result of the enormous crimes committed using instant messaging applications, and the amount of electronic based traces of evidence that can be retrieved from the suspect’s device where an investigation could convict or refute a person in the court of law and as such, mobile phones have become a vulnerable ground for digital evidence mining. This paper aims at using forensic tools to extract and analyse left artefacts digital evidence from IM applications on Android phones using android studio as the virtual machine. Digital forensic investigation methodology by Bill Nelson was applied during this research. Some of the key results obtained showed how digital forensic evidence such as call logs, contacts numbers, sent/retrieved messages, and images can be mined from simulated android phones when running these applications. These artefacts can be used in the court of law as evidence during cybercrime investigation.",
"title": ""
},
{
"docid": "1ebb333d5a72c649cd7d7986f5bf6975",
"text": "\"Of what a strange nature is knowledge! It clings to the mind, when it has once seized on it, like a lichen on the rock,\" Abstract We describe a theoretical system intended to facilitate the use of knowledge In an understand ing system. The notion of script is introduced to account for knowledge about mundane situations. A program, SAM, is capable of using scripts to under stand. The notion of plans is introduced to ac count for general knowledge about novel situa tions. I. Preface In an attempt to provide theory where there have been mostly unrelated systems, Minsky (1974) recently described the as fitting into the notion of \"frames.\" Minsky at tempted to relate this work, in what is essentially language processing, to areas of vision research that conform to the same notion. Mlnsky's frames paper has created quite a stir in AI and some immediate spinoff research along the lines of developing frames manipulators (e.g. Bobrow, 1975; Winograd, 1975). We find that we agree with much of what Minsky said about frames and with his characterization of our own work. The frames idea is so general, however, that It does not lend itself to applications without further specialization. This paper is an attempt to devel op further the lines of thought set out in Schank (1975a) and Abelson (1973; 1975a). The ideas pre sented here can be viewed as a specialization of the frame idea. We shall refer to our central constructs as \"scripts.\" II. The Problem Researchers in natural language understanding have felt for some time that the eventual limit on the solution of our problem will be our ability to characterize world knowledge. Various researchers have approached world knowledge in various ways. Winograd (1972) dealt with the problem by severely restricting the world. This approach had the po sitive effect of producing a working system and the negative effect of producing one that was only minimally extendable. Charniak (1972) approached the problem from the other end entirely and has made some interesting first steps, but because his work is not grounded in any representational sys tem or any working computational system the res triction of world knowledge need not critically concern him. Our feeling is that an effective characteri zation of knowledge can result in a real under standing system in the not too distant future. We expect that programs based on the theory we out …",
"title": ""
},
{
"docid": "a8da8a2d902c38c6656ea5db841a4eb1",
"text": "The uses of the World Wide Web on the Internet for commerce and information access continue to expand. The e-commerce business has proven to be a promising channel of choice for consumers as it is gradually transforming into a mainstream business activity. However, lack of trust has been identified as a major obstacle to the adoption of online shopping. Empirical study of online trust is constrained by the shortage of high-quality measures of general trust in the e-commence contexts. Based on theoretical or empirical studies in the literature of marketing or information system, nine factors have sound theoretical sense and support from the literature. A survey method was used for data collection in this study. A total of 172 usable questionnaires were collected from respondents. This study presents a new set of instruments for use in studying online trust of an individual. The items in the instrument were analyzed using a factors analysis. The results demonstrated reliable reliability and validity in the instrument.This study identified seven factors has a significant impact on online trust. The seven dominant factors are reputation, third-party assurance, customer service, propensity to trust, website quality, system assurance and brand. As consumers consider that doing business with online vendors involves risk and uncertainty, online business organizations need to overcome these barriers. Further, implication of the finding also provides e-commerce practitioners with guideline for effectively engender online customer trust.",
"title": ""
},
{
"docid": "5289fc231c716e2ce9e051fb0652ce94",
"text": "Noninvasive body contouring has become one of the fastest-growing areas of esthetic medicine. Many patients appear to prefer nonsurgical less-invasive procedures owing to the benefits of fewer side effects and shorter recovery times. Increasingly, 635-nm low-level laser therapy (LLLT) has been used in the treatment of a variety of medical conditions and has been shown to improve wound healing, reduce edema, and relieve acute pain. Within the past decade, LLLT has also emerged as a new modality for noninvasive body contouring. Research has shown that LLLT is effective in reducing overall body circumference measurements of specifically treated regions, including the hips, waist, thighs, and upper arms, with recent studies demonstrating the long-term effectiveness of results. The treatment is painless, and there appears to be no adverse events associated with LLLT. The mechanism of action of LLLT in body contouring is believed to stem from photoactivation of cytochrome c oxidase within hypertrophic adipocytes, which, in turn, affects intracellular secondary cascades, resulting in the formation of transitory pores within the adipocytes' membrane. The secondary cascades involved may include, but are not limited to, activation of cytosolic lipase and nitric oxide. Newly formed pores release intracellular lipids, which are further metabolized. Future studies need to fully outline the cellular and systemic effects of LLLT as well as determine optimal treatment protocols.",
"title": ""
},
{
"docid": "bf257fae514c28dc3b4c31ff656a00e9",
"text": "The objective of the present study is to evaluate the acute effects of low-level laser therapy (LLLT) on functional capacity, perceived exertion, and blood lactate in hospitalized patients with heart failure (HF). Patients diagnosed with systolic HF (left ventricular ejection fraction <45 %) were randomized and allocated prospectively into two groups: placebo LLLT group (n = 10)—subjects who were submitted to placebo laser and active LLLT group (n = 10)—subjects who were submitted to active laser. The 6-min walk test (6MWT) was performed, and blood lactate was determined at rest (before LLLT application and 6MWT), immediately after the exercise test (time 0) and recovery (3, 6, and 30 min). A multi-diode LLLT cluster probe (DMC, São Carlos, Brazil) was used. Both groups increased 6MWT distance after active or placebo LLLT application compared to baseline values (p = 0.03 and p = 0.01, respectively); however, no difference was observed during intergroup comparison. The active LLLT group showed a significant reduction in the perceived exertion Borg (PEB) scale compared to the placebo LLLT group (p = 0.006). In addition, the group that received active LLLT showed no statistically significant difference for the blood lactate level through the times analyzed. The placebo LLLT group demonstrated a significant increase in blood lactate between the rest and recovery phase (p < 0.05). Acute effects of LLLT irradiation on skeletal musculature were not able to improve the functional capacity of hospitalized patients with HF, although it may favorably modulate blood lactate metabolism and reduce perceived muscle fatigue.",
"title": ""
},
{
"docid": "1dd8fdb5f047e58f60c228e076aa8b66",
"text": "Recurrent Neural Network Language Models (RNN-LMs) have recently shown exceptional performance across a variety of applications. In this paper, we modify the architecture to perform Language Understanding, and advance the state-of-the-art for the widely used ATIS dataset. The core of our approach is to take words as input as in a standard RNN-LM, and then to predict slot labels rather than words on the output side. We present several variations that differ in the amount of word context that is used on the input side, and in the use of non-lexical features. Remarkably, our simplest model produces state-of-the-art results, and we advance state-of-the-art through the use of bagof-words, word embedding, named-entity, syntactic, and wordclass features. Analysis indicates that the superior performance is attributable to the task-specific word representations learned by the RNN.",
"title": ""
},
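A minimal sketch of the core idea in the abstract above, reading words as an RNN language model would but predicting slot labels instead of the next word, could look roughly as follows in PyTorch. The vocabulary size, label count, and layer sizes are placeholders, and the sketch omits the context-window and non-lexical features the abstract discusses.

```python
import torch
import torch.nn as nn

class RNNSlotTagger(nn.Module):
    """Reads a word sequence and emits one slot-label score vector per
    word, instead of predicting the next word as a language model would."""

    def __init__(self, vocab_size, n_labels, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.RNN(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_labels)

    def forward(self, word_ids):                 # (batch, seq_len)
        hidden, _ = self.rnn(self.embed(word_ids))
        return self.out(hidden)                  # (batch, seq_len, n_labels)

# Placeholder sizes; a real ATIS setup would use its own vocabulary
# and IOB slot-label inventory.
model = RNNSlotTagger(vocab_size=5000, n_labels=127)
tokens = torch.randint(0, 5000, (2, 12))
print(model(tokens).shape)   # torch.Size([2, 12, 127])
```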
{
"docid": "ba1b3fb5f147b5af173e5f643a2794e0",
"text": "The objective of this study is to examine how personal factors such as lifestyle, personality, and economic situations affect the consumer behavior of Malaysian university students. A quantitative approach was adopted and a self-administered questionnaire was distributed to collect data from university students. Findings illustrate that ‘personality’ influences the consumer behavior among Malaysian university student. This study also noted that the economic situation had a negative relationship with consumer behavior. Findings of this study improve our understanding of consumer behavior of Malaysian University Students. The findings of this study provide valuable insights in identifying and taking steps to improve on the services, ambience, and needs of the student segment of the Malaysian market.",
"title": ""
},
{
"docid": "71fe8a71c2855499834b2f6a60b2a759",
"text": "The pomegranate, Punica granatum L., is an ancient, mystical, unique fruit borne on a small, long-living tree cultivated throughout the Mediterranean region, as far north as the Himalayas, in Southeast Asia, and in California and Arizona in the United States. In addition to its ancient historical uses, pomegranate is used in several systems of medicine for a variety of ailments. The synergistic action of the pomegranate constituents appears to be superior to that of single constituents. In the past decade, numerous studies on the antioxidant, anticarcinogenic, and anti-inflammatory properties of pomegranate constituents have been published, focusing on treatment and prevention of cancer, cardiovascular disease, diabetes, dental conditions, erectile dysfunction, bacterial infections and antibiotic resistance, and ultraviolet radiation-induced skin damage. Other potential applications include infant brain ischemia, male infertility, Alzheimer's disease, arthritis, and obesity.",
"title": ""
},
{
"docid": "5718c733a80805c5dbb4333c2d298980",
"text": "{Portions reprinted, with permission from Keim et al. #2001 IEEE Abstract Simple presentation graphics are intuitive and easy-to-use, but show only highly aggregated data presenting only a very small number of data values (as in the case of bar charts) and may have a high degree of overlap occluding a significant portion of the data values (as in the case of the x-y plots). In this article, the authors therefore propose a generalization of traditional bar charts and x-y plots, which allows the visualization of large amounts of data. The basic idea is to use the pixels within the bars to present detailed information of the data records. The so-called pixel bar charts retain the intuitiveness of traditional bar charts while allowing very large data sets to be visualized in an effective way. It is shown that, for an effective pixel placement, a complex optimization problem has to be solved. The authors then present an algorithm which efficiently solves the problem. The application to a number of real-world ecommerce data sets shows the wide applicability and usefulness of this new idea, and a comparison to other well-known visualization techniques (parallel coordinates and spiral techniques) shows a number of clear advantages. Information Visualization (2002) 1, 20 – 34. DOI: 10.1057/palgrave/ivs/9500003",
"title": ""
},
{
"docid": "b290b3b9db5e620e8a049ad9cd68346b",
"text": "THE USE OF OBSERVATIONAL RESEARCH METHODS in the field of palliative care is vital to building the evidence base, identifying best practices, and understanding disparities in access to and delivery of palliative care services. As discussed in the introduction to this series, research in palliative care encompasses numerous areas in which the gold standard research design, the randomized controlled trial (RCT), is not appropriate, adequate, or even possible.1,2 The difficulties in conducting RCTs in palliative care include patient and family recruitment, gate-keeping by physicians, crossover contamination, high attrition rates, small sample sizes, and limited survival times. Furthermore, a number of important issues including variation in access to palliative care and disparities in the use and provision of palliative care simply cannot be answered without observational research methods. As research in palliative care broadens to encompass study designs other than the RCT, the collective understanding of the use, strengths, and limitations of observational research methods is critical. The goals of this first paper are to introduce the major types of observational study designs, discuss the issues of precision and validity, and provide practical insights into how to critically evaluate this literature in our field.",
"title": ""
},
{
"docid": "a8fe62e387610682f90018ca1a56ba04",
"text": "Aarskog-Scott syndrome (AAS), also known as faciogenital dysplasia (FGD, OMIM # 305400), is an X-linked disorder of recessive inheritance, characterized by short stature and facial, skeletal, and urogenital abnormalities. AAS is caused by mutations in the FGD1 gene (Xp11.22), with over 56 different mutations identified to date. We present the clinical and molecular analysis of four unrelated families of Mexican origin with an AAS phenotype, in whom FGD1 sequencing was performed. This analysis identified two stop mutations not previously reported in the literature: p.Gln664* and p.Glu380*. Phenotypically, every male patient met the clinical criteria of the syndrome, whereas discrepancies were found between phenotypes in female patients. Our results identify two novel mutations in FGD1, broadening the spectrum of reported mutations; and provide further delineation of the phenotypic variability previously described in AAS.",
"title": ""
},
{
"docid": "f9aa9bdad364b7c4b6a4b67120686d9a",
"text": "In this paper, we describe an SDN-based plastic architecture for 5G networks, designed to fulfill functional and performance requirements of new generation services and devices. The 5G logical architecture is presented in detail, and key procedures for dynamic control plane instantiation, device attachment, and service request and mobility management are specified. Key feature of the proposed architecture is flexibility, needed to support efficiently a heterogeneous set of services, including Machine Type Communication, Vehicle to X and Internet of Things traffic. These applications are imposing challenging targets, in terms of end-to-end latency, dependability, reliability and scalability. Additionally, backward compatibility with legacy systems is guaranteed by the proposed solution, and Control Plane and Data Plane are fully decoupled. The three levels of unified signaling unify Access, Non-access and Management strata, and a clean-slate forwarding layer, designed according to the software defined networking principle, replaces tunneling protocols for carrier grade mobility. Copyright © 2014 John Wiley & Sons, Ltd. *Correspondence R. Trivisonno, Huawei European Research Institute, Munich, Germany. E-mail: [email protected] Received 13 October 2014; Revised 5 November 2014; Accepted 8 November 2014",
"title": ""
},
{
"docid": "b93022efa40379ca7cc410d8b10ba48e",
"text": "The shared nature of the network in today's multi-tenant datacenters implies that network performance for tenants can vary significantly. This applies to both production datacenters and cloud environments. Network performance variability hurts application performance which makes tenant costs unpredictable and causes provider revenue loss. Motivated by these factors, this paper makes the case for extending the tenant-provider interface to explicitly account for the network. We argue this can be achieved by providing tenants with a virtual network connecting their compute instances. To this effect, the key contribution of this paper is the design of virtual network abstractions that capture the trade-off between the performance guarantees offered to tenants, their costs and the provider revenue.\n To illustrate the feasibility of virtual networks, we develop Oktopus, a system that implements the proposed abstractions. Using realistic, large-scale simulations and an Oktopus deployment on a 25-node two-tier testbed, we demonstrate that the use of virtual networks yields significantly better and more predictable tenant performance. Further, using a simple pricing model, we find that the our abstractions can reduce tenant costs by up to 74% while maintaining provider revenue neutrality.",
"title": ""
},
{
"docid": "8f54f2c6e9736a63ea4a99f89090e0a2",
"text": "This article demonstrates how documents prepared in hypertext or word processor format can be saved in portable document format (PDF). These files are self-contained documents that that have the same appearance on screen and in print, regardless of what kind of computer or printer are used, and regardless of what software package was originally used to for their creation. PDF files are compressed documents, invariably smaller than the original files, hence allowing rapid dissemination and download.",
"title": ""
}
] | scidocsrr |
8e6e49e6cb0f4d85f4018da85bfadc80 | Bagging, Boosting and the Random Subspace Method for Linear Classifiers | [
{
"docid": "00ea9078f610b14ed0ed00ed6d0455a7",
"text": "Boosting is a general method for improving the performance of learning algorithms. A recently proposed boosting algorithm, Ada Boost, has been applied with great success to several benchmark machine learning problems using mainly decision trees as base classifiers. In this article we investigate whether Ada Boost also works as well with neural networks, and we discuss the advantages and drawbacks of different versions of the Ada Boost algorithm. In particular, we compare training methods based on sampling the training set and weighting the cost function. The results suggest that random resampling of the training data is not the main explanation of the success of the improvements brought by Ada Boost. This is in contrast to bagging, which directly aims at reducing variance and for which random resampling is essential to obtain the reduction in generalization error. Our system achieves about 1.4 error on a data set of on-line handwritten digits from more than 200 writers. A boosted multilayer network achieved 1.5 error on the UCI letters and 8.1 error on the UCI satellite data set, which is significantly better than boosted decision trees.",
"title": ""
}
] | [
{
"docid": "3606b1c9bc5003c6119a5cc675ad63f4",
"text": "Hypothyroidism is a clinical disorder commonly encountered by the primary care physician. Untreated hypothyroidism can contribute to hypertension, dyslipidemia, infertility, cognitive impairment, and neuromuscular dysfunction. Data derived from the National Health and Nutrition Examination Survey suggest that about one in 300 persons in the United States has hypothyroidism. The prevalence increases with age, and is higher in females than in males. Hypothyroidism may occur as a result of primary gland failure or insufficient thyroid gland stimulation by the hypothalamus or pituitary gland. Autoimmune thyroid disease is the most common etiology of hypothyroidism in the United States. Clinical symptoms of hypothyroidism are nonspecific and may be subtle, especially in older persons. The best laboratory assessment of thyroid function is a serum thyroid-stimulating hormone test. There is no evidence that screening asymptomatic adults improves outcomes. In the majority of patients, alleviation of symptoms can be accomplished through oral administration of synthetic levothyroxine, and most patients will require lifelong therapy. Combination triiodothyronine/thyroxine therapy has no advantages over thyroxine monotherapy and is not recommended. Among patients with subclinical hypothyroidism, those at greater risk of progressing to clinical disease, and who may be considered for therapy, include patients with thyroid-stimulating hormone levels greater than 10 mIU per L and those who have elevated thyroid peroxidase antibody titers.",
"title": ""
},
{
"docid": "6c175d7a90ed74ab3b115977c82b0ffa",
"text": "We present statistical analyses of the large-scale structure of 3 types of semantic networks: word associations, WordNet, and Roget's Thesaurus. We show that they have a small-world structure, characterized by sparse connectivity, short average path lengths between words, and strong local clustering. In addition, the distributions of the number of connections follow power laws that indicate a scale-free pattern of connectivity, with most nodes having relatively few connections joined together through a small number of hubs with many connections. These regularities have also been found in certain other complex natural networks, such as the World Wide Web, but they are not consistent with many conventional models of semantic organization, based on inheritance hierarchies, arbitrarily structured networks, or high-dimensional vector spaces. We propose that these structures reflect the mechanisms by which semantic networks grow. We describe a simple model for semantic growth, in which each new word or concept is connected to an existing network by differentiating the connectivity pattern of an existing node. This model generates appropriate small-world statistics and power-law connectivity distributions, and it also suggests one possible mechanistic basis for the effects of learning history variables (age of acquisition, usage frequency) on behavioral performance in semantic processing tasks.",
"title": ""
},
{
"docid": "8933d92ec139e80ffb8f0ebaa909d76c",
"text": "Reading an article and answering questions about its content is a fundamental task for natural language understanding. While most successful neural approaches to this problem rely on recurrent neural networks (RNNs), training RNNs over long documents can be prohibitively slow. We present a novel framework for question answering that can efficiently scale to longer documents while maintaining or even improving performance. Our approach combines a coarse, inexpensive model for selecting one or more relevant sentences and a more expensive RNN that produces the answer from those sentences. A central challenge is the lack of intermediate supervision for the coarse model, which we address using reinforcement learning. Experiments demonstrate state-of-the-art performance on a challenging subset of the WIKIREADING dataset (Hewlett et al., 2016) and on a newly-gathered dataset, while reducing the number of sequential RNN steps by 88% against a standard sequence to sequence model.",
"title": ""
},
{
"docid": "86826e10d531b8d487fada7a5c151a41",
"text": "Feature selection is an important preprocessing step in data mining. Mutual information-based feature selection is a kind of popular and effective approaches. In general, most existing mutual information-based techniques are greedy methods, which are proven to be efficient but suboptimal. In this paper, mutual information-based feature selection is transformed into a global optimization problem, which provides a new idea for solving feature selection problems. Firstly, a single-objective feature selection algorithm combining relevance and redundancy is presented, which has well global searching ability and high computational efficiency. Furthermore, to improve the performance of feature selection, we propose a multi-objective feature selection algorithm. The method can meet different requirements and achieve a tradeoff among multiple conflicting objectives. On this basis, a hybrid feature selection framework is adopted for obtaining a final solution. We compare the performance of our algorithm with related methods on both synthetic and real datasets. Simulation results show the effectiveness and practicality of the proposed method.",
"title": ""
},
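For contrast with the global formulation proposed in the abstract above, the greedy relevance-minus-redundancy baseline it refers to can be sketched roughly as follows. This is an assumed, simplified mRMR-style variant, not the paper's algorithm: features are treated as discrete when estimating redundancy, and the toy data are invented.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import mutual_info_score

def greedy_mi_selection(X, y, k):
    """Pick k features by greedily maximizing relevance minus the mean
    mutual information with the features already chosen (mRMR-style)."""
    relevance = mutual_info_classif(X, y, random_state=0)
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        def score(j):
            if not selected:
                return relevance[j]
            # Redundancy estimated on discrete feature values.
            redundancy = np.mean([mutual_info_score(X[:, j], X[:, s])
                                  for s in selected])
            return relevance[j] - redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Discrete toy data: the label depends on features 0 and 3 only.
rng = np.random.default_rng(0)
X = rng.integers(0, 4, size=(300, 10))
y = (X[:, 0] + X[:, 3] > 3).astype(int)
print(greedy_mi_selection(X, y, k=3))
```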
{
"docid": "2582b0fffad677d3f0ecf11b92d9702d",
"text": "This study explores teenage girls' narrations of the relationship between self-presentation and peer comparison on social media in the context of beauty. Social media provide new platforms that manifest media and peer influences on teenage girls' understanding of beauty towards an idealized notion. Through 24 in-depth interviews, this study examines secondary school girls' self-presentation and peer comparison behaviors on social network sites where the girls posted self-portrait photographs or “selfies” and collected peer feedback in the forms of “likes,” “followers,” and comments. Results of thematic analysis reveal a gap between teenage girls' self-beliefs and perceived peer standards of beauty. Feelings of low self-esteem and insecurity underpinned their efforts in edited self-presentation and quest for peer recognition. Peers played multiple roles that included imaginary audiences, judges, vicarious learning sources, and comparison targets in shaping teenage girls' perceptions and presentation of beauty. Findings from this study reveal the struggles that teenage girls face today and provide insights for future investigations and interventions pertinent to teenage girls’ presentation and evaluation of self on",
"title": ""
},
{
"docid": "13afc7b4786ee13c6b0bfb1292f50153",
"text": "Heavy metals are discharged into water from various industries. They can be toxic or carcinogenic in nature and can cause severe problems for humans and aquatic ecosystems. Thus, the removal of heavy metals fromwastewater is a serious problem. The adsorption process is widely used for the removal of heavy metals from wastewater because of its low cost, availability and eco-friendly nature. Both commercial adsorbents and bioadsorbents are used for the removal of heavy metals fromwastewater, with high removal capacity. This review article aims to compile scattered information on the different adsorbents that are used for heavy metal removal and to provide information on the commercially available and natural bioadsorbents used for removal of chromium, cadmium and copper, in particular. This is an Open Access article distributed under the terms of the Creative Commons Attribution Licence (CC BY-NC-ND 4.0), which permits copying and redistribution for non-commercial purposes with no derivatives, provided the original work is properly cited (http://creativecommons.org/ licenses/by-nc-nd/4.0/). doi: 10.2166/wrd.2016.104 Renu Madhu Agarwal (corresponding author) K. Singh Department of Chemical Engineering, Malaviya National Institute of Technology, JLN Marg, Jaipur 302017, India E-mail: [email protected]",
"title": ""
},
{
"docid": "b86fed0ebcf017adedbe9f3d14d6903d",
"text": "The general employee scheduling problem extends the standard shift scheduling problem by discarding key limitations such as employee homogeneity and the absence of connections across time period blocks. The resulting increased generality yields a scheduling model that applies to real world problems confronted in a wide variety of areas. The price of the increased generality is a marked increase in size and complexity over related models reported in the literature. The integer programming formulation for the general employee scheduling problem, arising in typical real world settings, contains from one million to over four million zero~ne variables. By contrast, studies of special cases reported over the past decade have focused on problems involving between 100 and 500 variables. We characterize the relationship between the general employee scheduling problem and related problems, reporting computational results for a procedure that solves these more complex problems within 98-99 % optimality and runs on a microcomputer. We view our approach as an integration of management science and artificial intelligence techniques. The benefits of such an integration are suggested by the fact that other zero~ne scheduling implementations reported in the literature, including the one awarded the Lancaster Prize in 1984, have obtained comparable approximations of optimality only for problems from two to three orders of magnitude smaller, and then only by the use of large mainframe computers.",
"title": ""
},
{
"docid": "df0e13e1322a95046a91fb7c867d968a",
"text": "Taking into consideration both external (i.e. technology acceptance factors, website service quality) as well as internal factors (i.e. specific holdup cost) , this research explores how the customers’ satisfaction and loyalty, when shopping and purchasing on the internet , can be associated with each other and how they are affected by the above dynamics. This research adopts the Structural Equation Model (SEM) as the main analytical tool. It investigates those who used to have shopping experiences in major shopping websites of Taiwan. The research results point out the following: First, customer satisfaction will positively influence customer loyalty directly; second, technology acceptance factors will positively influence customer satisfaction and loyalty directly; third, website service quality can positively influence customer satisfaction and loyalty directly; and fourth, specific holdup cost can positively influence customer loyalty directly, but cannot positively influence customer satisfaction directly. This paper draws on the research results for implications of managerial practice, and then suggests some empirical tactics in order to help enhancing management performance for the website shopping industry.",
"title": ""
},
{
"docid": "fb836666c993b27b99f6c789dd0aae05",
"text": "Software transactions have received significant attention as a way to simplify shared-memory concurrent programming, but insufficient focus has been given to the precise meaning of software transactions or their interaction with other language features. This work begins to rectify that situation by presenting a family of formal languages that model a wide variety of behaviors for software transactions. These languages abstract away implementation details of transactional memory, providing high-level definitions suitable for programming languages. We use small-step semantics in order to represent explicitly the interleaved execution of threads that is necessary to investigate pertinent issues.\n We demonstrate the value of our core approach to modeling transactions by investigating two issues in depth. First, we consider parallel nesting, in which parallelism and transactions can nest arbitrarily. Second, we present multiple models for weak isolation, in which nontransactional code can violate the isolation of a transaction. For both, type-and-effect systems let us soundly and statically restrict what computation can occur inside or outside a transaction. We prove some key language-equivalence theorems to confirm that under sufficient static restrictions, in particular that each mutable memory location is used outside transactions or inside transactions (but not both), no program can determine whether the language implementation uses weak isolation or strong isolation.",
"title": ""
},
{
"docid": "e5f6d7ed8d2dbf0bc2cde28e9c9e129b",
"text": "Change detection is the process of finding out difference between two images taken at two different times. With the help of remote sensing the . Here we will try to find out the difference of the same image taken at different times. here we use mean ratio and log ratio to find out the difference in the images. Log is use to find background image and fore ground detected by mean ratio. A reformulated fuzzy local-information C-means clustering algorithm is proposed for classifying changed and unchanged regions in the fused difference image. It incorporates the information about spatial context in a novel fuzzy way for the purpose of enhancing the changed information and of reducing the effect of speckle noise. Experiments on real SAR images show that the image fusion strategy integrates the advantages of the log-ratio operator and the mean-ratio operator and gains a better performance. The change detection results obtained by the improved fuzzy clustering algorithm exhibited lower error than its preexistences.",
"title": ""
},
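The two difference-image operators mentioned in the abstract above can be sketched as follows. The convex-combination fusion shown here is only a stand-in, since the paper combines the operators with a more elaborate fusion strategy; the window size, weight, and toy images are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def log_ratio(x1, x2, eps=1e-6):
    """Log-ratio difference image: compresses speckle, flat background."""
    return np.abs(np.log((x2 + eps) / (x1 + eps)))

def mean_ratio(x1, x2, size=3, eps=1e-6):
    """Mean-ratio difference image built from local means, which tends to
    preserve the changed foreground better than the log ratio."""
    m1 = uniform_filter(x1, size) + eps
    m2 = uniform_filter(x2, size) + eps
    return 1.0 - np.minimum(m1 / m2, m2 / m1)

def fused_difference(x1, x2, w=0.5):
    # Simple convex combination; stands in for the paper's fusion step.
    return w * log_ratio(x1, x2) + (1.0 - w) * mean_ratio(x1, x2)

rng = np.random.default_rng(1)
x1 = rng.random((64, 64)) + 0.1          # toy image at time 1
x2 = x1.copy()
x2[20:30, 20:30] += 0.8                  # simulated change region
d = fused_difference(x1, x2)
print(d.mean(), d[20:30, 20:30].mean())  # the changed block stands out
```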
{
"docid": "92699fa23a516812c7fcb74ba38f42c6",
"text": "Deep reinforcement learning (DRL) is poised to revolutionize the field of artificial intelligence (AI) and represents a step toward building autonomous systems with a higherlevel understanding of the visual world. Currently, deep learning is enabling reinforcement learning (RL) to scale to problems that were previously intractable, such as learning to play video games directly from pixels. DRL algorithms are also applied to robotics, allowing control policies for robots to be learned directly from camera inputs in the real world. In this survey, we begin with an introduction to the general field of RL, then progress to the main streams of value-based and policy-based methods. Our survey will cover central algorithms in deep RL, including the deep Q-network (DQN), trust region policy optimization (TRPO), and asynchronous advantage actor critic. In parallel, we highlight the unique advantages of deep neural networks, focusing on visual understanding via RL. To conclude, we describe several current areas of research within the field.",
"title": ""
},
{
"docid": "a94278bafc093c37bcba719a4b6a03fa",
"text": "Community detection and analysis is an important methodology for understanding the organization of various real-world networks and has applications in problems as diverse as consensus formation in social communities or the identification of functional modules in biochemical networks. Currently used algorithms that identify the community structures in large-scale real-world networks require a priori information such as the number and sizes of communities or are computationally expensive. In this paper we investigate a simple label propagation algorithm that uses the network structure alone as its guide and requires neither optimization of a predefined objective function nor prior information about the communities. In our algorithm every node is initialized with a unique label and at every step each node adopts the label that most of its neighbors currently have. In this iterative process densely connected groups of nodes form a consensus on a unique label to form communities. We validate the algorithm by applying it to networks whose community structures are known. We also demonstrate that the algorithm takes an almost linear time and hence it is computationally less expensive than what was possible so far.",
"title": ""
},
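The label propagation procedure summarized in the abstract above is simple enough to sketch directly. This is a generic illustration, not the authors' exact implementation; the update order, tie-breaking, and stopping rule are chosen here for simplicity.

```python
import random
from collections import Counter

def label_propagation(adj, max_iters=100, seed=0):
    """Community detection by label propagation.

    adj maps each node to an iterable of neighbours. Every node starts
    with a unique label; at each step a node adopts the label that most
    of its neighbours currently hold, until no label changes.
    """
    rng = random.Random(seed)
    labels = {v: v for v in adj}
    nodes = list(adj)
    for _ in range(max_iters):
        rng.shuffle(nodes)
        changed = False
        for v in nodes:
            if not adj[v]:
                continue
            counts = Counter(labels[u] for u in adj[v])
            top = max(counts.values())
            new = rng.choice([l for l, c in counts.items() if c == top])
            if new != labels[v]:
                labels[v] = new
                changed = True
        if not changed:
            break
    return labels

# Two triangles joined by one edge resolve into two communities.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(label_propagation(adj))
```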
{
"docid": "d469d31d26d8bc07b9d8dfa8ce277e47",
"text": "BACKGROUND/PURPOSE\nMorbidity in children treated with appendicitis results either from late diagnosis or negative appendectomy. A Prospective analysis of efficacy of Pediatric Appendicitis Score for early diagnosis of appendicitis in children was conducted.\n\n\nMETHODS\nIn the last 5 years, 1,170 children aged 4 to 15 years with abdominal pain suggestive of acute appendicitis were evaluated prospectively. Group 1 (734) were patients with appendicitis and group 2 (436) nonappendicitis. Multiple linear logistic regression analysis of all clinical and investigative parameters was performed for a model comprising 8 variables to form a diagnostic score.\n\n\nRESULTS\nLogistic regression analysis yielded a model comprising 8 variables, all statistically significant, P <.001. These variables in order of their diagnostic index were (1) cough/percussion/hopping tenderness in the right lower quadrant of the abdomen (0.96), (2) anorexia (0.88), (3) pyrexia (0.87), (4) nausea/emesis (0.86), (5) tenderness over the right iliac fossa (0.84), (6) leukocytosis (0.81), (7) polymorphonuclear neutrophilia (0.80) and (8) migration of pain (0.80). Each of these variables was assigned a score of 1, except for physical signs (1 and 5), which were scored 2 to obtain a total of 10. The Pediatric Appendicitis Score had a sensitivity of 1, specificity of 0.92, positive predictive value of 0.96, and negative predictive value of 0.99.\n\n\nCONCLUSION\nPediatric appendicitis score is a simple, relatively accurate diagnostic tool for accessing an acute abdomen and diagnosing appendicitis in children.",
"title": ""
},
{
"docid": "e1adb8ebfd548c2aca5110e2a9e8d667",
"text": "This paper introduces an active object detection and localization framework that combines a robust untextured object detection and 3D pose estimation algorithm with a novel next-best-view selection strategy. We address the detection and localization problems by proposing an edge-based registration algorithm that refines the object position by minimizing a cost directly extracted from a 3D image tensor that encodes the minimum distance to an edge point in a joint direction/location space. We face the next-best-view problem by exploiting a sequential decision process that, for each step, selects the next camera position which maximizes the mutual information between the state and the next observations. We solve the intrinsic intractability of this solution by generating observations that represent scene realizations, i.e. combination samples of object hypothesis provided by the object detector, while modeling the state by means of a set of constantly resampled particles. Experiments performed on different real world, challenging datasets confirm the effectiveness of the proposed methods.",
"title": ""
},
{
"docid": "2038dbe6e16892c8d37a4dac47d4f681",
"text": "Sentences with different structures may convey the same meaning. Identification of sentences with paraphrases plays an important role in text related research and applications. This work focus on the statistical measures and semantic analysis of Malayalam sentences to detect the paraphrases. The statistical similarity measures between sentences, based on symbolic characteristics and structural information, could measure the similarity between sentences without any prior knowledge but only on the statistical information of sentences. The semantic representation of Universal Networking Language(UNL), represents only the inherent meaning in a sentence without any syntactic details. Thus, comparing the UNL graphs of two sentences can give an insight into how semantically similar the two sentences are. Combination of statistical similarity and semantic similarity score results the overall similarity score. This is the first attempt towards paraphrases of malayalam sentences.",
"title": ""
},
{
"docid": "259e95c8d756f31408d30bbd7660eea3",
"text": "The capacity to identify cheaters is essential for maintaining balanced social relationships, yet humans have been shown to be generally poor deception detectors. In fact, a plethora of empirical findings holds that individuals are only slightly better than chance when discerning lies from truths. Here, we report 5 experiments showing that judges' ability to detect deception greatly increases after periods of unconscious processing. Specifically, judges who were kept from consciously deliberating outperformed judges who were encouraged to do so or who made a decision immediately; moreover, unconscious thinkers' detection accuracy was significantly above chance level. The reported experiments further show that this improvement comes about because unconscious thinking processes allow for integrating the particularly rich information basis necessary for accurate lie detection. These findings suggest that the human mind is not unfit to distinguish between truth and deception but that this ability resides in previously overlooked processes.",
"title": ""
},
{
"docid": "49a87829a12168de2be2ee32a23ddeb7",
"text": "Crowdsourcing emerged with the development of Web 2.0 technologies as a distributed online practice that harnesses the collective aptitudes and skills of the crowd in order to reach specific goals. The success of crowdsourcing systems is influenced by the users’ levels of participation and interactions on the platform. Therefore, there is a need for the incorporation of appropriate incentive mechanisms that would lead to sustained user engagement and quality contributions. Accordingly, the aim of the particular paper is threefold: first, to provide an overview of user motives and incentives, second, to present the corresponding incentive mechanisms used to trigger these motives, alongside with some indicative examples of successful crowdsourcing platforms that incorporate these incentive mechanisms, and third, to provide recommendations on their careful design in order to cater to the context and goal of the platform.",
"title": ""
},
{
"docid": "0b3555b8c1932a2364a7264cbf2f7c25",
"text": "This paper introduces a novel weighted unsupervised learning for object detection using an RGB-D camera. This technique is feasible for detecting the moving objects in the noisy environments that are captured by an RGB-D camera. The main contribution of this paper is a real-time algorithm for detecting each object using weighted clustering as a separate cluster. In a preprocessing step, the algorithm calculates the pose 3D position X, Y, Z and RGB color of each data point and then it calculates each data point’s normal vector using the point’s neighbor. After preprocessing, our algorithm calculates k-weights for each data point; each weight indicates membership. Resulting in clustered objects of the scene. Keywords—Weighted Unsupervised Learning, Object Detection, RGB-D camera, Kinect",
"title": ""
},
{
"docid": "abda48a065aecbe34f86ce3490520402",
"text": "Wireless Sensor Network (WSN) consists of small low-cost, low-power multifunctional nodes interconnected to efficiently aggregate and transmit data to sink. Cluster-based approaches use some nodes as Cluster Heads (CHs) and organize WSNs efficiently for aggregation of data and energy saving. A CH conveys information gathered by cluster nodes and aggregates/compresses data before transmitting it to a sink. However, this additional responsibility of the node results in a higher energy drain leading to uneven network degradation. Low Energy Adaptive Clustering Hierarchy (LEACH) offsets this by probabilistically rotating cluster heads role among nodes with energy above a set threshold. CH selection in WSN is NP-Hard as optimal data aggregation with efficient energy savings cannot be solved in polynomial time. In this work, a modified firefly heuristic, synchronous firefly algorithm, is proposed to improve the network performance. Extensive simulation shows the proposed technique to perform well compared to LEACH and energy-efficient hierarchical clustering. Simulations show the effectiveness of the proposed method in decreasing the packet loss ratio by an average of 9.63% and improving the energy efficiency of the network when compared to LEACH and EEHC.",
"title": ""
},
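As background for the abstract above, the probabilistic cluster-head rotation used by LEACH, which the proposed firefly method is compared against, can be sketched as follows. The threshold formula is the standard LEACH one; the node count, desired cluster-head fraction, and round number are placeholder values.

```python
import random

def leach_threshold(p, r):
    """Standard LEACH threshold T(n) = p / (1 - p * (r mod 1/p)) for a
    node that has not yet served as cluster head in the current epoch."""
    return p / (1.0 - p * (r % round(1.0 / p)))

def elect_cluster_heads(eligible_nodes, p, r, rng=None):
    """Each eligible node independently becomes a cluster head with
    probability T(n) in round r."""
    rng = rng or random.Random(0)
    t = leach_threshold(p, r)
    return [n for n in eligible_nodes if rng.random() < t]

# 20 eligible nodes, desired cluster-head fraction p = 0.1, round 3.
print(elect_cluster_heads(range(20), p=0.1, r=3))
```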
{
"docid": "09168164e47fd781e4abeca45fb76c35",
"text": "AUTOSAR is a standard for the development of software for embedded devices, primarily created for the automotive domain. It specifies a software architecture with more than 80 software modules that provide services to one or more software components. With the trend towards integrating safety-relevant systems into embedded devices, conformance with standards such as ISO 26262 [ISO11] or ISO/IEC 61508 [IEC10] becomes increasingly important. This article presents an approach to providing freedom from interference between software components by using the MPU available on many modern microcontrollers. Each software component gets its own dedicated memory area, a so-called memory partition. This concept is well known in other industries like the aerospace industry, where the IMA architecture is now well established. The memory partitioning mechanism is implemented by a microkernel, which integrates seamlessly into the architecture specified by AUTOSAR. The development has been performed as SEooC as described in ISO 26262, which is a new development approach. We describe the procedure for developing an SEooC. AUTOSAR: AUTomotive Open System ARchitecture, see [ASR12]. MPU: Memory Protection Unit. 3 IMA: Integrated Modular Avionics, see [RTCA11]. 4 SEooC: Safety Element out of Context, see [ISO11].",
"title": ""
}
] | scidocsrr |
57efca4f00bb10f737800d3d006c3ce9 | Real-Time Data Analytics in Sensor Networks | [
{
"docid": "2abd75766d4875921edd4d6d63d5d617",
"text": "Wireless sensor networks typically consist of a large number of sensor nodes embedded in a physical space. Such sensors are low-power devices that are primarily used for monitoring several physical phenomena, potentially in remote harsh environments. Spatial and temporal dependencies between the readings at these nodes highly exist in such scenarios. Statistical contextual information encodes these spatio-temporal dependencies. It enables the sensors to locally predict their current readings based on their own past readings and the current readings of their neighbors. In this paper, we introduce context-aware sensors. Specifically, we propose a technique for modeling and learning statistical contextual information in sensor networks. Our approach is based on Bayesian classifiers; we map the problem of learning and utilizing contextual information to the problem of learning the parameters of a Bayes classifier, and then making inferences, respectively. We propose a scalable and energy-efficient procedure for online learning of these parameters in-network, in a distributed fashion. We discuss applications of our approach in discovering outliers and detection of faulty sensors, approximation of missing values, and in-network sampling. We experimentally analyze our approach in two applications, tracking and monitoring.",
"title": ""
}
] | [
{
"docid": "a17bf7467da65eede493d543a335c9ae",
"text": "Recently interest has grown in applying activity theory, the leading theoretical approach in Russian psychology, to issues of human-computer interaction. This chapter analyzes why experts in the field are looking for an alternative to the currently dominant cognitive approach. The basic principles of activity theory are presented and their implications for human-computer interaction are discussed. The chapter concludes with an outline of the potential impact of activity theory on studies and design of computer use in real-life settings.",
"title": ""
},
{
"docid": "18140fdf4629a1c7528dcd6060f427c3",
"text": "Central to many text analysis methods is the notion of a concept: a set of semantically related keywords characterizing a specific object, phenomenon, or theme. Advances in word embedding allow building a concept from a small set of seed terms. However, naive application of such techniques may result in false positive errors because of the polysemy of natural language. To mitigate this problem, we present a visual analytics system called ConceptVector that guides a user in building such concepts and then using them to analyze documents. Document-analysis case studies with real-world datasets demonstrate the fine-grained analysis provided by ConceptVector. To support the elaborate modeling of concepts, we introduce a bipolar concept model and support for specifying irrelevant words. We validate the interactive lexicon building interface by a user study and expert reviews. Quantitative evaluation shows that the bipolar lexicon generated with our methods is comparable to human-generated ones.",
"title": ""
},
{
"docid": "f1d00811120f666763e56e33ad2c3b10",
"text": "Fairness is a critical trait in decision making. As machine-learning models are increasingly being used in sensitive application domains (e.g. education and employment) for decision making, it is crucial that the decisions computed by such models are free of unintended bias. But how can we automatically validate the fairness of arbitrary machine-learning models? For a given machine-learning model and a set of sensitive input parameters, our Aeqitas approach automatically discovers discriminatory inputs that highlight fairness violation. At the core of Aeqitas are three novel strategies to employ probabilistic search over the input space with the objective of uncovering fairness violation. Our Aeqitas approach leverages inherent robustness property in common machine-learning models to design and implement scalable test generation methodologies. An appealing feature of our generated test inputs is that they can be systematically added to the training set of the underlying model and improve its fairness. To this end, we design a fully automated module that guarantees to improve the fairness of the model. We implemented Aeqitas and we have evaluated it on six stateof- the-art classifiers. Our subjects also include a classifier that was designed with fairness in mind. We show that Aeqitas effectively generates inputs to uncover fairness violation in all the subject classifiers and systematically improves the fairness of respective models using the generated test inputs. In our evaluation, Aeqitas generates up to 70% discriminatory inputs (w.r.t. the total number of inputs generated) and leverages these inputs to improve the fairness up to 94%.",
"title": ""
},
{
"docid": "fa0c62b91643a45a5eff7c1b1fa918f1",
"text": "This paper presents outdoor field experimental results to clarify the 4x4 MIMO throughput performance from applying multi-point transmission in the 15 GHz frequency band in the downlink of 5G cellular radio access system. The experimental results in large-cell scenario shows that up to 30 % throughput gain compared to non-multi-point transmission is achieved although the difference for the RSRP of two TPs is over 10 dB, so that the improvement for the antenna correlation is achievable and important aspect for the multi-point transmission in the 15 GHz frequency band as well as the improvement of the RSRP. Furthermore in small-cell scenario, the throughput gain of 70% and over 5 Gbps are achieved applying multi-point transmission in the condition of two different MIMO streams transmission from a single TP as distributed MIMO instead of four MIMO streams transmission from a single TP.",
"title": ""
},
{
"docid": "be9d13a24f41eadc0a1d15d99e594b55",
"text": "Traditionally, mobile robot design is based on wheels, tracks or legs with their respective advantages and disadvantages. Very few groups have explored designs with spherical morphology. During the past ten years, the number of robots with spherical shape and related studies has substantially increased, and a lot of work is done in this area of mobile robotics. Interest in robots with spherical morphology has also increased, in part due to NASA's search for an alternative design for a Mars rover since the wheel-based rover Spirit is now stuck for good in soft soil. This paper presents the spherical amphibious robot Groundbot, developed by Rotundus AB in Stockholm, Sweden, and describes in detail the navigation algorithm employed in this system.",
"title": ""
},
{
"docid": "c1477b801a49df62eb978b537fd3935e",
"text": "The striatum is thought to play an essential role in the acquisition of a wide range of motor, perceptual, and cognitive skills, but neuroimaging has not yet demonstrated striatal activation during nonmotor skill learning. Functional magnetic resonance imaging was performed while participants learned probabilistic classification, a cognitive task known to rely on procedural memory early in learning and declarative memory later in learning. Multiple brain regions were active during probabilistic classification compared with a perceptual-motor control task, including bilateral frontal cortices, occipital cortex, and the right caudate nucleus in the striatum. The left hippocampus was less active bilaterally during probabilistic classification than during the control task, and the time course of this hippocampal deactivation paralleled the expected involvement of medial temporal structures based on behavioral studies of amnesic patients. Findings provide initial evidence for the role of frontostriatal systems in normal cognitive skill learning.",
"title": ""
},
{
"docid": "84f688155a92ed2196974d24b8e27134",
"text": "My sincere thanks to Donald Norman and David Rumelhart for their support of many years. I also wish to acknowledge the help of The views and conclusions contained in this document are those of the author and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the sponsoring agencies. Approved for public release; distribution unlimited. Reproduction in whole or in part is permitted for any purpose of the United States Government Requests for reprints should be sent to the",
"title": ""
},
{
"docid": "a7bd8b02d7a46e6b96223122f673a222",
"text": "This study was conducted to identify the risk factors that are associated with neonatal mortality in lambs and kids in Jordan. The bacterial causes of mortality in lambs and kids were investigated. One hundred sheep and goat flocks were selected randomly from different areas of North Jordan at the beginning of the lambing season. The flocks were visited every other week to collect information and to take samples from freshly dead animals. By the end of the lambing season, flocks that had neonatal mortality rate ≥ 1.0% were considered as “case group” while flocks that had neonatal mortality rate less than 1.0% − as “control group”. The results indicated that neonatal mortality rate (within 4 weeks of age), in lambs and kids, was 3.2%. However, the early neonatal mortality rate (within 48 hours of age) was 2.01% and represented 62.1% of the neonatal mortalities. The following risk factors were found to be associated with the neonatal mortality in lambs and kids: not separating the neonates from adult animals; not vaccinating dams against infectious diseases (pasteurellosis, colibacillosis and enterotoxemia); walking more than 5 km and starvation-mismothering exposure. The causes of neonatal mortality in lambs and kids were: diarrhea (59.75%), respiratory diseases (13.3%), unknown causes (12.34%), and accident (8.39%). Bacteria responsible for neonatal mortality were: Escherichia coli, Pasteurella multocida, Clostridium perfringens and Staphylococcus aureus. However, E. coli was the most frequent bacterial species identified as cause of neonatal mortality in lambs and kids and represented 63.4% of all bacterial isolates. The E. coli isolates belonged to 10 serogroups, the O44 and O26 being the most frequent isolates.",
"title": ""
},
{
"docid": "1eb4805e6874ea1882a995d0f1861b80",
"text": "The Asian-Pacific Association for the Study of the Liver (APASL) convened an international working party on the \"APASL consensus statements and recommendation on management of hepatitis C\" in March, 2015, in order to revise \"APASL consensus statements and management algorithms for hepatitis C virus infection (Hepatol Int 6:409-435, 2012)\". The working party consisted of expert hepatologists from the Asian-Pacific region gathered at Istanbul Congress Center, Istanbul, Turkey on 13 March 2015. New data were presented, discussed and debated to draft a revision. Participants of the consensus meeting assessed the quality of cited studies. Finalized recommendations on treatment of hepatitis C are presented in this review.",
"title": ""
},
{
"docid": "76ecd4ba20333333af4d09b894ff29fc",
"text": "This study is an application of social identity theory to feminist consciousness and activism. For women, strong gender identifications may enhance support for equality struggles, whereas for men, they may contribute to backlashes against feminism. University students (N � 276), primarily Euroamerican, completed a measure of gender self-esteem (GSE, that part of one’s selfconcept derived from one’s gender), and two measures of feminism. High GSE in women and low GSE in men were related to support for feminism. Consistent with past research, women were more supportive of feminism than men, and in both genders, support for feminist ideas was greater than self-identification as a feminist.",
"title": ""
},
{
"docid": "e5f5aa53a90f482fb46a7f02bae27b20",
"text": "Machinima is a low-cost alternative to full production filmmaking. However, creating quality cinematic visualizations with existing machinima techniques still requires a high degree of talent and effort. We introduce a lightweight artificial intelligence system, Cambot, that can be used to assist in machinima production. Cambot takes a script as input and produces a cinematic visualization. Unlike other virtual cinematography systems, Cambot favors an offline algorithm coupled with an extensible library of specific modular and reusable facets of cinematic knowledge. One of the advantages of this approach to virtual cinematography is a tight coordination between the positions and movements of the camera and the actors.",
"title": ""
},
{
"docid": "240c47d27533069f339d8eb090a637a9",
"text": "This paper discusses the active and reactive power control method for a modular multilevel converter (MMC) based grid-connected PV system. The voltage vector space analysis is performed by using average value models for the feasibility analysis of reactive power compensation (RPC). The proposed double-loop control strategy enables the PV system to handle unidirectional active power flow and bidirectional reactive power flow. Experiments have been performed on a laboratory-scaled modular multilevel PV inverter. The experimental results verify the correctness and feasibility of the proposed strategy.",
"title": ""
},
{
"docid": "eacf295c0cbd52599a1567c6d4193007",
"text": "Search Ranking and Recommendations are fundamental problems of crucial interest to major Internet companies, including web search engines, content publishing websites and marketplaces. However, despite sharing some common characteristics a one-size-fits-all solution does not exist in this space. Given a large difference in content that needs to be ranked, personalized and recommended, each marketplace has a somewhat unique challenge. Correspondingly, at Airbnb, a short-term rental marketplace, search and recommendation problems are quite unique, being a two-sided marketplace in which one needs to optimize for host and guest preferences, in a world where a user rarely consumes the same item twice and one listing can accept only one guest for a certain set of dates. In this paper we describe Listing and User Embedding techniques we developed and deployed for purposes of Real-time Personalization in Search Ranking and Similar Listing Recommendations, two channels that drive 99% of conversions. The embedding models were specifically tailored for Airbnb marketplace, and are able to capture guest's short-term and long-term interests, delivering effective home listing recommendations. We conducted rigorous offline testing of the embedding models, followed by successful online tests before fully deploying them into production.",
"title": ""
},
{
"docid": "47c88bb234a6e21e8037a67e6dd2444f",
"text": "Lacking an operational theory to explain the organization and behaviour of matter in unicellular and multicellular organisms hinders progress in biology. Such a theory should address life cycles from ontogenesis to death. This theory would complement the theory of evolution that addresses phylogenesis, and would posit theoretical extensions to accepted physical principles and default states in order to grasp the living state of matter and define proper biological observables. Thus, we favour adopting the default state implicit in Darwin’s theory, namely, cell proliferation with variation plus motility, and a framing principle, namely, life phenomena manifest themselves as non-identical iterations of morphogenetic processes. From this perspective, organisms become a consequence of the inherent variability generated by proliferation, motility and self-organization. Morphogenesis would then be the result of the default state plus physical constraints, like gravity, and those present in living organisms, like muscular tension.",
"title": ""
},
{
"docid": "1a1c9b8fa2b5fc3180bc1b504def5ea1",
"text": "Wireless sensor networks can be deployed in any attended or unattended environments like environmental monitoring, agriculture, military, health care etc., where the sensor nodes forward the sensing data to the gateway node. As the sensor node has very limited battery power and cannot be recharged after deployment, it is very important to design a secure, effective and light weight user authentication and key agreement protocol for accessing the sensed data through the gateway node over insecure networks. Most recently, Turkanovic et al. proposed a light weight user authentication and key agreement protocol for accessing the services of the WSNs environment and claimed that the same protocol is efficient in terms of security and complexities than related existing protocols. In this paper, we have demonstrated several security weaknesses of the Turkanovic et al. protocol. Additionally, we have also illustrated that the authentication phase of the Turkanovic et al. is not efficient in terms of security parameters. In order to fix the above mentioned security pitfalls, we have primarily designed a novel architecture for the WSNs environment and basing upon which a proposed scheme has been presented for user authentication and key agreement scheme. The security validation of the proposed protocol has done by using BAN logic, which ensures that the protocol achieves mutual authentication and session key agreement property securely between the entities involved. Moreover, the proposed scheme has simulated using well popular AVISPA security tool, whose simulation results show that the protocol is SAFE under OFMC and CL-AtSe models. Besides, several security issues informally confirm that the proposed protocol is well protected in terms of relevant security attacks including the above mentioned security pitfalls. The proposed protocol not only resists the above mentioned security weaknesses, but also achieves complete security requirements including specially energy efficiency, user anonymity, mutual authentication and user-friendly password change phase. Performance comparison section ensures that the protocol is relatively efficient in terms of complexities. The security and performance analysis makes the system so efficient that the proposed protocol can be implemented in real-life application. © 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "7c54cef80d345cdb10f56ca440f5fad9",
"text": "SIR, Arndt–Gottron scleromyxoedema is a rare fibromucinous disorder regarded as a variant of the lichen myxoedematosus. The diagnostic criteria are a generalized papular and sclerodermoid eruption, a microscopic triad of mucin deposition, fibroblast proliferation and fibrosis, a monoclonal gammopathy (mostly IgG-k paraproteinaemia) and the absence of a thyroid disorder. This disease initially presents with sclerosis of the skin and clusters of small lichenoid papules with a predilection for the face, neck and the forearm. Progressively, the skin lesions can become more widespread and the induration of skin can result in a scleroderma-like condition with sclerodactyly and microstomia, reduced mobility and disability. Systemic involvement is common, e.g. upper gastrointestinal dysmotility, proximal myopathy, joint contractures, neurological complications such as psychic disturbances and encephalopathy, obstructive ⁄restrictive lung disease, as well as renal and cardiovascular involvement. Numerous treatment options have been described in the literature. These include corticosteroids, retinoids, thalidomide, extracorporeal photopheresis (ECP), psoralen plus ultraviolet A radiation, ciclosporin, cyclophosphamide, melphalan or autologous stem cell transplantation. In September 1999, a 48-year-old white female first noticed an erythematous induration with a lichenoid papular eruption on her forehead. Three months later the lesions became more widespread including her face (Fig. 1a), neck, shoulders, forearms (Fig. 2a) and legs. When the patient first presented in our department in June 2000, she had problems opening her mouth fully as well as clenching both hands or moving her wrist. The histological examination of the skin biopsy was highly characteristic of Arndt–Gottron scleromyxoedema. Full blood count, blood morphology, bone marrow biopsy, bone scintigraphy and thyroid function tests were normal. Serum immunoelectrophoresis revealed an IgG-k paraproteinaemia. Urinary Bence-Jones proteins were negative. No systemic involvement was disclosed. We initiated ECP therapy in August 2000, initially at 2-week intervals (later monthly) on two succeeding days. When there was no improvement after 3 months, we also administered cyclophosphamide (Endoxana ; Baxter Healthcare Ltd, Newbury, U.K.) at a daily dose of 100 mg with mesna 400 mg (Uromitexan ; Baxter) prophylaxis. The response to this therapy was rather moderate. In February 2003 the patient developed a change of personality and loss of orientation and was admitted to hospital. The extensive neurological, radiological and microbiological diagnostics were unremarkable at that time. A few hours later the patient had seizures and was put on artificial ventilation in an intensive care unit. The patient was comatose for several days. A repeated magnetic resonance imaging scan was still normal, but the cerebrospinal fluid tap showed a dysfunction of the blood–cerebrospinal fluid barrier. A bilateral loss of somatosensory evoked potentials was noticeable. The neurological symptoms were classified as a ‘dermatoneuro’ syndrome, a rare extracutaneous manifestation of scleromyxoedema. After initiation of treatment with methylprednisolone (Urbason ; Aventis, Frankfurt, Germany) the neurological situation normalized in the following 2 weeks. No further medical treatment was necessary. 
In April 2003 therapy options were re-evaluated and the patient was started and maintained on a 7-day course of melphalan 7.5 mg daily (Alkeran ; GlaxoSmithKline, Uxbridge, U.K.) in combination with prednisolone 40 mg daily (Decortin H ; Merck, Darmstadt, Germany) every 6 weeks.",
"title": ""
},
{
"docid": "d37d6139ced4c85ff0cbc4cce018212b",
"text": "We describe isone, a tool that facilitates the visual exploration of social networks. Social network analysis is a methodological approach in the social sciences using graph-theoretic concepts to describe, understand and explain social structure. The isone software is an attempt to integrate analysis and visualization of social networks and is intended to be used in research and teaching. While we are primarily focussing on users in the social sciences, several features provided in the tool will be useful in other fields as well. In contrast to more conventional mathematical software in the social sciences that aim at providing a comprehensive suite of analytical options, our emphasis is on complementing every option we provide with tailored means of graphical interaction. We attempt to make complicated types of analysis and data handling transparent, intuitive, and more readily accessible. User feedback indicates that many who usually regard data exploration and analysis complicated and unnerving enjoy the playful nature of visual interaction. Consequently, much of the tool is about graph drawing methods specifically adapted to facilitate visual data exploration. The origins of isone lie in an interdisciplinary cooperation with researchers from political science which resulted in innovative uses of graph drawing methods for social network visualization, and prototypical implementations thereof. With the growing demand for access to these methods, we started implementing an integrated tool for public use. It should be stressed, however, that isone remains a research platform and testbed for innovative methods, and is not intended to become",
"title": ""
},
{
"docid": "742c0b15f6a466bfb4e5130b49f79e64",
"text": "There has been much interest in unsupervised learning of hierarchical generative models such as deep belief networks (DBNs); however, scaling such models to full-sized, high-dimensional images remains a difficult problem. To address this problem, we present the convolutional deep belief network, a hierarchical generative model that scales to realistic image sizes. This model is translation-invariant and supports efficient bottom-up and top-down probabilistic inference. Key to our approach is probabilistic max-pooling, a novel technique that shrinks the representations of higher layers in a probabilistically sound way. Our experiments show that the algorithm learns useful high-level visual features, such as object parts, from unlabeled images of objects and natural scenes. We demonstrate excellent performance on several visual recognition tasks and show that our model can perform hierarchical (bottom-up and top-down) inference over full-sized images.",
"title": ""
},
{
"docid": "c4aafcc0a98882de931713359e55a04a",
"text": "We present a computer vision tool that analyses video from a CCTV system installed on fishing trawlers to monitor discarded fish catch. The system aims to support expert observers who review the footage and verify numbers, species and sizes of discarded fish. The operational environment presents a significant challenge for these tasks. Fish are processed below deck under fluorescent lights, they are randomly oriented and there are multiple occlusions. The scene is unstructured and complicated by the presence of fishermen processing the catch. We describe an approach to segmenting the scene and counting fish that exploits the N4-Fields algorithm. We performed extensive tests of the algorithm on a data set comprising 443 frames from 6 belts. Results indicate the relative count error (for individual fish) ranges from 2% to 16%. We believe this is the first system that is able to handle footage from operational trawlers.",
"title": ""
},
{
"docid": "1e493440a61578c8c6ca8fbe63f475d6",
"text": "3D object detection is an essential task in autonomous driving. Recent techniques excel with highly accurate detection rates, provided the 3D input data is obtained from precise but expensive LiDAR technology. Approaches based on cheaper monocular or stereo imagery data have, until now, resulted in drastically lower accuracies — a gap that is commonly attributed to poor image-based depth estimation. However, in this paper we argue that data representation (rather than its quality) accounts for the majority of the difference. Taking the inner workings of convolutional neural networks into consideration, we propose to convert imagebased depth maps to pseudo-LiDAR representations — essentially mimicking LiDAR signal. With this representation we can apply different existing LiDAR-based detection algorithms. On the popular KITTI benchmark, our approach achieves impressive improvements over the existing stateof-the-art in image-based performance — raising the detection accuracy of objects within 30m range from the previous state-of-the-art of 22% to an unprecedented 74%. At the time of submission our algorithm holds the highest entry on the KITTI 3D object detection leaderboard for stereo image based approaches.",
"title": ""
}
] | scidocsrr |
cbe333e5804af8a9778780bff57dc255 | Health Media: From Multimedia Signals to Personal Health Insights | [
{
"docid": "e95253b765129a0940e4af899d9e5d72",
"text": "Smart health devices monitor certain health parameters, are connected to an Internet service, and target primarily a lay consumer seeking a healthy lifestyle rather than the medical expert or the chronically ill person. These devices offer tremendous opportunities for wellbeing and self-management of health. This department reviews smart health devices from a pervasive computing perspective, discussing various devices and their functionality, limitations, and potential.",
"title": ""
}
] | [
{
"docid": "b2058a09b3e83bb864cb238e066c8afb",
"text": "The ability to reason with natural language is a fundamental prerequisite for many NLP tasks such as information extraction, machine translation and question answering. To quantify this ability, systems are commonly tested whether they can recognize textual entailment, i.e., whether one sentence can be inferred from another one. However, in most NLP applications only single source sentences instead of sentence pairs are available. Hence, we propose a new task that measures how well a model can generate an entailed sentence from a source sentence. We take entailment-pairs of the Stanford Natural Language Inference corpus and train an LSTM with attention. On a manually annotated test set we found that 82% of generated sentences are correct, an improvement of 10.3% over an LSTM baseline. A qualitative analysis shows that this model is not only capable of shortening input sentences, but also inferring new statements via paraphrasing and phrase entailment. We then apply this model recursively to input-output pairs, thereby generating natural language inference chains that can be used to automatically construct an entailment graph from source sentences. Finally, by swapping source and target sentences we can also train a model that given an input sentence invents additional information to generate a new sentence.",
"title": ""
},
{
"docid": "3c4f19544e9cc51d307c6cc9aea63597",
"text": "Math anxiety is a negative affective reaction to situations involving math. Previous work demonstrates that math anxiety can negatively impact math problem solving by creating performance-related worries that disrupt the working memory needed for the task at hand. By leveraging knowledge about the mechanism underlying the math anxiety-performance relationship, we tested the effectiveness of a short expressive writing intervention that has been shown to reduce intrusive thoughts and improve working memory availability. Students (N = 80) varying in math anxiety were asked to sit quietly (control group) prior to completing difficulty-matched math and word problems or to write about their thoughts and feelings regarding the exam they were about to take (expressive writing group). For the control group, high math-anxious individuals (HMAs) performed significantly worse on the math problems than low math-anxious students (LMAs). In the expressive writing group, however, this difference in math performance across HMAs and LMAs was significantly reduced. Among HMAs, the use of words related to anxiety, cause, and insight in their writing was positively related to math performance. Expressive writing boosts the performance of anxious students in math-testing situations.",
"title": ""
},
{
"docid": "36342d65aaa9dff0339f8c1c8cb23f30",
"text": "Recent approaches to Reinforcement Learning (RL) with function approximation include Neural Fitted Q Iteration and the use of Gaussian Processes. They belong to the class of fitted value iteration algorithms, which use a set of support points to fit the value-function in a batch iterative process. These techniques make efficient use of a reduced number of samples by reusing them as needed, and are appropriate for applications where the cost of experiencing a new sample is higher than storing and reusing it, but this is at the expense of increasing the computational effort, since these algorithms are not incremental. On the other hand, non-parametric models for function approximation, like Gaussian Processes, are preferred against parametric ones, due to their greater flexibility. A further advantage of using Gaussian Processes for function approximation is that they allow to quantify the uncertainty of the estimation at each point. In this paper, we propose a new approach for RL in continuous domains based on Probability Density Estimations. Our method combines the best features of the previous methods: it is non-parametric and provides an estimation of the variance of the approximated function at any point of the domain. In addition, our method is simple, incremental, and computationally efficient. All these features make this approach more appealing than Gaussian Processes and fitted value iteration algorithms in general.",
"title": ""
},
{
"docid": "29e500aa57f82d63596ae13639d46cbf",
"text": "In this paper we present a intrusion detection module capable of detecting malicious network traffic in a SCADA (Supervisory Control and Data Acquisition) system. Malicious data in a SCADA system disrupt its correct functioning and tamper with its normal operation. OCSVM (One-Class Support Vector Machine) is an intrusion detection mechanism that does not need any labeled data for training or any information about the kind of anomaly is expecting for the detection process. This feature makes it ideal for processing SCADA environment data and automate SCADA performance monitoring. The OCSVM module developed is trained by network traces off line and detect anomalies in the system real time. The module is part of an IDS (Intrusion Detection System) system developed under CockpitCI project and communicates with the other parts of the system by the exchange of IDMEF (Intrusion Detection Message Exchange Format) messages that carry information about the source of the incident, the time and a classification of the alarm.",
"title": ""
},
{
"docid": "cbdfd886416664809046ff2e674f4ae1",
"text": "Domain adaptation addresses the problem where data instances of a source domain have different distributions from that of a target domain, which occurs frequently in many real life scenarios. This work focuses on unsupervised domain adaptation, where labeled data are only available in the source domain. We propose to interpolate subspaces through dictionary learning to link the source and target domains. These subspaces are able to capture the intrinsic domain shift and form a shared feature representation for cross domain recognition. Further, we introduce a quantitative measure to characterize the shift between two domains, which enables us to select the optimal domain to adapt to the given multiple source domains. We present experiments on face recognition across pose, illumination and blur variations, cross dataset object recognition, and report improved performance over the state of the art.",
"title": ""
},
{
"docid": "cee3c61474bf14158d4abf0c794a9c2a",
"text": "This course will focus on describing techniques for handling datasets larger than main memory in scientific visualization and computer graphics. Recently, several external memory techniques have been developed for a wide variety of graphics and visualization problems, including surface simplification, volume rendering, isosurface generation, ray tracing, surface reconstruction, and so on. This work has had significant impact given that in recent years there has been a rapid increase in the raw size of datasets. Several technological trends are contributing to this, such as the development of high-resolution 3D scanners, and the need to visualize ASCI-size (Accelerated Strategic Computing Initiative) datasets. Another important push for this kind of technology is the growing speed gap between main memory and caches, such a gap penalizes algorithms which do not optimize for coherence of access. Because of these reasons, much research in computer graphics focuses on developing out-of-core (and often cache-friendly) techniques. This course reviews fundamental issues, current problems, and unresolved solutions, and presents an in-depth study of external memory algorithms developed in recent years. Its goal is to provide students and graphics researchers and professionals with an effective knowledge of current techniques, as well as the foundation to develop novel techniques on their own. Schedule (tentative) 5 min Introduction to the course Silva 45 min Overview of external memory algorithms Chiang 40 min Out-of-core scientific visualization Silva",
"title": ""
},
{
"docid": "947d4c60427377bcb466fe1393c5474c",
"text": "This paper presents a single BCD technology platform with high performance power devices at a wide range of operating voltages. The platform offers 6 V to 70 V LDMOS devices. All devices offer best-in-class specific on-resistance of 20 to 40 % lower than that of the state-of-the-art IC-based LDMOS devices and robustness better than the square SOA (safe-operating-area). Fully isolated LDMOS devices, in which independent bias is capable for circuit flexibility, demonstrate superior specific on-resistance (e.g. 11.9 mΩ-mm2 for breakdown voltage of 39 V). Moreover, the unusual sudden current enhancement appeared in the ID-VD saturation region of most of the high voltage LDMOS devices is significantly suppressed.",
"title": ""
},
{
"docid": "413df06d6ba695aa5baa13ea0913c6e6",
"text": "Time stamping is a technique used to prove the existence of certain digital data prior to a specific point in time. With the recent development of electronic commerce, time stamping is now widely recognized as an important technique used to ensure the integrity of digital data for a long time period. Various time stamping schemes and services have been proposed. When one uses a certain time stamping service, he should confirm in advance that its security level sufficiently meets his security requirements. However, time stamping schemes are generally so complicated that it is not easy to evaluate their security levels accurately. It is important for users to have a good grasp of current studies of time stamping schemes and to make use of such studies to select an appropriate time stamping service. Une and Matsumoto [2000], [2001a], [2001b] and [2002] have proposed a method of classifying time stamping schemes and evaluating their security systematically. Their papers have clarified the objectives, functions and entities involved in time stamping schemes and have discussed the conditions sufficient to detect the alteration of a time stamp in each scheme. This paper explains existing problems regarding the security evaluation of time stamping schemes and the results of Une and Matsumoto [2000], [2001a], [2001b] and [2002]. It also applies their results to some existing time stamping schemes and indicates possible directions of further research into time stamping schemes.",
"title": ""
},
{
"docid": "269cff08201fd7815e3ea2c9a786d38b",
"text": "In this paper, we are interested in developing compositional models to explicit representing pose, parts and attributes and tackling the tasks of attribute recognition, pose estimation and part localization jointly. This is different from the recent trend of using CNN-based approaches for training and testing on these tasks separately with a large amount of data. Conventional attribute models typically use a large number of region-based attribute classifiers on parts of pre-trained pose estimator without explicitly detecting the object or its parts, or considering the correlations between attributes. In contrast, our approach jointly represents both the object parts and their semantic attributes within a unified compositional hierarchy. We apply our attributed grammar model to the task of human parsing by simultaneously performing part localization and attribute recognition. We show our modeling helps performance improvements on pose-estimation task and also outperforms on other existing methods on attribute prediction task.",
"title": ""
},
{
"docid": "0ff8c4799b62c70ef6b7d70640f1a931",
"text": "Using on-chip interconnection networks in place of ad-hoc glo-bal wiring structures the top level wires on a chip and facilitates modular design. With this approach, system modules (processors, memories, peripherals, etc...) communicate by sending packets to one another over the network. The structured network wiring gives well-controlled electrical parameters that eliminate timing iterations and enable the use of high-performance circuits to reduce latency and increase bandwidth. The area overhead required to implement an on-chip network is modest, we estimate 6.6%. This paper introduces the concept of on-chip networks, sketches a simple network, and discusses some challenges in the architecture and design of these networks.",
"title": ""
},
{
"docid": "cbfdea54abb1e4c1234ca44ca6913220",
"text": "Seeds of chickpea (Cicer arietinum L.) were exposed in batches to static magnetic fields of strength from 0 to 250 mT in steps of 50 mT for 1-4 h in steps of 1 h for all fields. Results showed that magnetic field application enhanced seed performance in terms of laboratory germination, speed of germination, seedling length and seedling dry weight significantly compared to unexposed control. However, the response varied with field strength and duration of exposure without any particular trend. Among the various combinations of field strength and duration, 50 mT for 2 h, 100 mT for 1 h and 150 mT for 2 h exposures gave best results. Exposure of seeds to these three magnetic fields improved seed coat membrane integrity as it reduced the electrical conductivity of seed leachate. In soil, seeds exposed to these three treatments produced significantly increased seedling dry weights of 1-month-old plants. The root characteristics of the plants showed dramatic increase in root length, root surface area and root volume. The improved functional root parameters suggest that magnetically treated chickpea seeds may perform better under rainfed (un-irrigated) conditions where there is a restrictive soil moisture regime.",
"title": ""
},
{
"docid": "8fca64bb24d9adc445fec504ee8efa5a",
"text": "In this paper, the permeation properties of three types of liquids into HTV silicone rubber with different Alumina Tri-hydrate (ATH) contents had been investigated by weight gain experiments. The influence of differing exposure conditions on the diffusion into silicone rubber, in particular the effect of solution type, solution concentration, and test temperature were explored. Experimental results indicated that the liquids permeation into silicone rubber obeyed anomalous diffusion ways instead of the Fick diffusion model. Moreover, higher temperature would accelerate the permeation process, and silicone rubber with higher ATH content absorbed more liquids than that with lower ATH content. Furthermore, the material properties of silicone rubber before and after liquid permeation were examined using Fourier infrared spectroscopy (FTIR), thermal gravimetric analysis (TGA) and scanning electron microscopy (SEM), respectively. The permeation mechanisms and process were discussed in depth by combining the weight gain experiment results and the material properties analyses.",
"title": ""
},
{
"docid": "2e510f3f8055b4936aadf502766e3e0d",
"text": "Process mining techniques have proven to be a valuable tool for analyzing the execution of business processes. They rely on logs that identify events at an activity level, i.e., most process mining techniques assume that the information system explicitly supports the notion of activities/tasks. This is often not the case and only low-level events are being supported and logged. For example, users may provide different pieces of data which together constitute a single activity. The technique introduced in this paper uses clustering algorithms to derive activity logs from lower-level data modification logs, as produced by virtually every information system. This approach was implemented in the context of the ProM framework and its goal is to widen the scope of processes that can be analyzed using existing process mining techniques.",
"title": ""
},
{
"docid": "ac2009434ea592577cdcdbfb51e3213c",
"text": "Pair-wise ranking methods have been widely used in recommender systems to deal with implicit feedback. They attempt to discriminate between a handful of observed items and the large set of unobserved items. In these approaches, however, user preferences and item characteristics cannot be estimated reliably due to overfitting given highly sparse data. To alleviate this problem, in this paper, we propose a novel hierarchical Bayesian framework which incorporates “bag-ofwords” type meta-data on items into pair-wise ranking models for one-class collaborative filtering. The main idea of our method lies in extending the pair-wise ranking with a probabilistic topic modeling. Instead of regularizing item factors through a zero-mean Gaussian prior, our method introduces item-specific topic proportions as priors for item factors. As a by-product, interpretable latent factors for users and items may help explain recommendations in some applications. We conduct an experimental study on a real and publicly available dataset, and the results show that our algorithm is effective in providing accurate recommendation and interpreting user factors and item factors.",
"title": ""
},
{
"docid": "edb7adc3e665aa2126be1849431c9d7f",
"text": "This study evaluated the exploitation of unprocessed agricultural discards in the form of fresh vegetable leaves as a diet for the sea urchin Paracentrotus lividus through the assessment of their effects on gonad yield and quality. A stock of wild-caught P. lividus was fed on discarded leaves from three different species (Beta vulgaris, Brassica oleracea, and Lactuca sativa) and the macroalga Ulva lactuca for 3 months under controlled conditions. At the beginning and end of the experiment, total and gonad weight were measured, while gonad and diet total carbon (C%), nitrogen (N%), δ13C, δ15N, carbohydrates, lipids, and proteins were analyzed. The results showed that agricultural discards provided for the maintenance of gonad index and nutritional value (carbohydrate, lipid, and protein content) of initial specimens. L. sativa also improved gonadic color. The results of this study suggest that fresh vegetable discards may be successfully used in the preparation of more balanced diets for sea urchin aquaculture. The use of agricultural discards in prepared diets offers a number of advantages, including an abundant resource, the recycling of discards into new organic matter, and reduced pressure on marine organisms (i.e., macroalgae) in the production of food for cultured organisms.",
"title": ""
},
{
"docid": "03fa5f5f6b6f307fc968a2b543e331a1",
"text": "In recent years, several noteworthy large, cross-domain, and openly available knowledge graphs (KGs) have been created. These include DBpedia, Freebase, OpenCyc, Wikidata, and YAGO. Although extensively in use, these KGs have not been subject to an in-depth comparison so far. In this survey, we provide data quality criteria according to which KGs can be analyzed and analyze and compare the above mentioned KGs. Furthermore, we propose a framework for finding the most suitable KG for a given setting.",
"title": ""
},
{
"docid": "6347b642cec08bf062f6e5594f805bd3",
"text": "Using a multimethod approach, the authors conducted 4 studies to test life span hypotheses about goal orientations across adulthood. Confirming expectations, in Studies 1 and 2 younger adults reported a primary growth orientation in their goals, whereas older adults reported a stronger orientation toward maintenance and loss prevention. Orientation toward prevention of loss correlated negatively with well-being in younger adults. In older adults, orientation toward maintenance was positively associated with well-being. Studies 3 and 4 extend findings of a self-reported shift in goal orientation to the level of behavioral choice involving cognitive and physical fitness goals. Studies 3 and 4 also examine the role of expected resource demands. The shift in goal orientation is discussed as an adaptive mechanism to manage changing opportunities and constraints across adulthood.",
"title": ""
},
{
"docid": "bb11b0de8915b6f4811cc76dffd6d8b2",
"text": "In this work we introduced SnooperTrack, an algorithm for the automatic detection and tracking of text objects — such as store names, traffic signs, license plates, and advertisements — in videos of outdoor scenes. The purpose is to improve the performances of text detection process in still images by taking advantage of the temporal coherence in videos. We first propose an efficient tracking algorithm using particle filtering framework with original region descriptors. The second contribution is our strategy to merge tracked regions and new detections. We also propose an improved version of our previously published text detection algorithm in still images. Tests indicate that SnooperTrack is fast, robust, enable false positive suppression, and achieved great performances in complex videos of outdoor scenes.",
"title": ""
},
{
"docid": "ef5cfd6c5eaf48805e39a9eb454aa7b9",
"text": "Neural networks are artificial learning systems. For more than two decades, they have help for detecting hostile behaviors in a computer system. This review describes those systems and theirs limits. It defines and gives neural networks characteristics. It also itemizes neural networks which are used in intrusion detection systems. The state of the art on IDS made from neural networks is reviewed. In this paper, we also make a taxonomy and a comparison of neural networks intrusion detection systems. We end this review with a set of remarks and future works that can be done in order to improve the systems that have been presented. This work is the result of a meticulous scan of the literature.",
"title": ""
},
{
"docid": "59e2564e565ead0bc36f9f691f4f70f3",
"text": "INTRODUCTION In recent years “big data” has become something of a buzzword in business, computer science, information studies, information systems, statistics, and many other fields. As technology continues to advance, we constantly generate an ever-increasing amount of data. This growth does not differentiate between individuals and businesses, private or public sectors, institutions of learning and commercial entities. It is nigh universal and therefore warrants further study.",
"title": ""
}
] | scidocsrr |
513882e9992781626e656c002f99dbdf | Rectangular Dielectric Resonator Antenna Array for 28 GHz Applications | [
{
"docid": "364eb800261105453f36b005ba1faf68",
"text": "This article presents empirically-based large-scale propagation path loss models for fifth-generation cellular network planning in the millimeter-wave spectrum, based on real-world measurements at 28 GHz and 38 GHz in New York City and Austin, Texas, respectively. We consider industry-standard path loss models used for today's microwave bands, and modify them to fit the propagation data measured in these millimeter-wave bands for cellular planning. Network simulations with the proposed models using a commercial planning tool show that roughly three times more base stations are required to accommodate 5G networks (cell radii up to 200 m) compared to existing 3G and 4G systems (cell radii of 500 m to 1 km) when performing path loss simulations based on arbitrary pointing angles of directional antennas. However, when directional antennas are pointed in the single best directions at the base station and mobile, coverage range is substantially improved with little increase in interference, thereby reducing the required number of 5G base stations. Capacity gains for random pointing angles are shown to be 20 times greater than today's fourth-generation Long Term Evolution networks, and can be further improved when using directional antennas pointed in the strongest transmit and receive directions with the help of beam combining techniques.",
"title": ""
}
] | [
{
"docid": "5f5828952aa0a0a95e348a0c0b2296fb",
"text": "Indoor positioning has grasped great attention in recent years. A number of efforts have been exerted to achieve high positioning accuracy. However, there exists no technology that proves its efficacy in various situations. In this paper, we propose a novel positioning method based on fusing trilateration and dead reckoning. We employ Kalman filtering as a position fusion algorithm. Moreover, we adopt an Android device with Bluetooth Low Energy modules as the communication platform to avoid excessive energy consumption and to improve the stability of the received signal strength. To further improve the positioning accuracy, we take the environmental context information into account while generating the position fixes. Extensive experiments in a testbed are conducted to examine the performance of three approaches: trilateration, dead reckoning and the fusion method. Additionally, the influence of the knowledge of the environmental context is also examined. Finally, our proposed fusion method outperforms both trilateration and dead reckoning in terms of accuracy: experimental results show that the Kalman-based fusion, for our settings, achieves a positioning accuracy of less than one meter.",
"title": ""
},
{
"docid": "f90efcef80233888fb8c218d1e5365a6",
"text": "BACKGROUND\nMany low- and middle-income countries are undergoing a nutrition transition associated with rapid social and economic transitions. We explore the coexistence of over and under- nutrition at the neighborhood and household level, in an urban poor setting in Nairobi, Kenya.\n\n\nMETHODS\nData were collected in 2010 on a cohort of children aged under five years born between 2006 and 2010. Anthropometric measurements of the children and their mothers were taken. Additionally, dietary intake, physical activity, and anthropometric measurements were collected from a stratified random sample of adults aged 18 years and older through a separate cross-sectional study conducted between 2008 and 2009 in the same setting. Proportions of stunting, underweight, wasting and overweight/obesity were dettermined in children, while proportions of underweight and overweight/obesity were determined in adults.\n\n\nRESULTS\nOf the 3335 children included in the analyses with a total of 6750 visits, 46% (51% boys, 40% girls) were stunted, 11% (13% boys, 9% girls) were underweight, 2.5% (3% boys, 2% girls) were wasted, while 9% of boys and girls were overweight/obese respectively. Among their mothers, 7.5% were underweight while 32% were overweight/obese. A large proportion (43% and 37%%) of overweight and obese mothers respectively had stunted children. Among the 5190 adults included in the analyses, 9% (6% female, 11% male) were underweight, and 22% (35% female, 13% male) were overweight/obese.\n\n\nCONCLUSION\nThe findings confirm an existing double burden of malnutrition in this setting, characterized by a high prevalence of undernutrition particularly stunting early in life, with high levels of overweight/obesity in adulthood, particularly among women. In the context of a rapid increase in urban population, particularly in urban poor settings, this calls for urgent action. Multisectoral action may work best given the complex nature of prevailing circumstances in urban poor settings. Further research is needed to understand the pathways to this coexistence, and to test feasibility and effectiveness of context-specific interventions to curb associated health risks.",
"title": ""
},
{
"docid": "e04cccfd59c056678e39fc4aed0eaa2b",
"text": "BACKGROUND\nBreast cancer is by far the most frequent cancer of women. However the preventive measures for such problem are probably less than expected. The objectives of this study are to assess breast cancer knowledge and attitudes and factors associated with the practice of breast self examination (BSE) among female teachers of Saudi Arabia.\n\n\nPATIENTS AND METHODS\nWe conducted a cross-sectional survey of teachers working in female schools in Buraidah, Saudi Arabia using a self-administered questionnaire to investigate participants' knowledge about the risk factors of breast cancer, their attitudes and screening behaviors. A sample of 376 female teachers was randomly selected. Participants lived in urban areas, and had an average age of 34.7 ±5.4 years.\n\n\nRESULTS\nMore than half of the women showed a limited knowledge level. Among participants, the most frequently reported risk factors were non-breast feeding and the use of female sex hormones. The printed media was the most common source of knowledge. Logistic regression analysis revealed that high income was the most significant predictor of better knowledge level. Knowing a non-relative case with breast cancer and having a high knowledge level were identified as the significant predictors for practicing BSE.\n\n\nCONCLUSIONS\nThe study points to the insufficient knowledge of female teachers about breast cancer and identified the negative influence of low knowledge on the practice of BSE. Accordingly, relevant educational programs to improve the knowledge level of women regarding breast cancer are needed.",
"title": ""
},
{
"docid": "0872a229806a1055ec6e42d7a36ef626",
"text": "Attribute selection (AS) refers to the problem of selecting those input attributes or features that are most predictive of a given outcome; a problem encountered in many areas such as machine learning, pattern recognition and signal processing. Unlike other dimensionality reduction methods, attribute selectors preserve the original meaning of the attributes after reduction. This has found application in tasks that involve datasets containing huge numbers of attributes (in the order of tens of thousands) which, for some learning algorithms, might be impossible to process further. Recent examples include text processing and web content classification. AS techniques have also been applied to small and medium-sized datasets in order to locate the most informative attributes for later use. One of the many successful applications of rough set theory has been to this area. The rough set ideology of using only the supplied data and no other information has many benefits in AS, where most other methods require supplementary knowledge. However, the main limitation of rough set-based attribute selection in the literature is the restrictive requirement that all data is discrete. In classical rough set theory, it is not possible to consider real-valued or noisy data. This paper investigates a novel approach based on fuzzy-rough sets, fuzzy rough feature selection (FRFS), that addresses these problems and retains dataset semantics. FRFS is applied to two challenging domains where a feature reducing step is important; namely, web content classification and complex systems monitoring. The utility of this approach is demonstrated and is compared empirically with several dimensionality reducers. In the experimental studies, FRFS is shown to equal or improve classification accuracy when compared to the results from unreduced data. Classifiers that use a lower dimensional set of attributes which are retained by fuzzy-rough reduction outperform those that employ more attributes returned by the existing crisp rough reduction method. In addition, it is shown that FRFS is more powerful than the other AS techniques in the comparative study",
"title": ""
},
{
"docid": "8492ba0660b06ca35ab3f4e96f3a33c3",
"text": "Young men who have sex with men (YMSM) are increasingly using mobile smartphone applications (“apps”), such as Grindr, to meet sex partners. A probability sample of 195 Grindr-using YMSM in Southern California were administered an anonymous online survey to assess patterns of and motivations for Grindr use in order to inform development and tailoring of smartphone-based HIV prevention for YMSM. The number one reason for using Grindr (29 %) was to meet “hook ups.” Among those participants who used both Grindr and online dating sites, a statistically significantly greater percentage used online dating sites for “hook ups” (42 %) compared to Grindr (30 %). Seventy percent of YMSM expressed a willingness to participate in a smartphone app-based HIV prevention program. Development and testing of smartphone apps for HIV prevention delivery has the potential to engage YMSM in HIV prevention programming, which can be tailored based on use patterns and motivations for use. Los hombres que mantienen relaciones sexuales con hombres (YMSM por las siglas en inglés de Young Men Who Have Sex with Men) están utilizando más y más aplicaciones para teléfonos inteligentes (smartphones), como Grindr, para encontrar parejas sexuales. En el Sur de California, se administró de forma anónima un sondeo en internet a una muestra de probabilidad de 195 YMSM usuarios de Grindr, para evaluar los patrones y motivaciones del uso de Grindr, con el fin de utilizar esta información para el desarrollo y personalización de prevención del VIH entre YMSM con base en teléfonos inteligentes. La principal razón para utilizar Grindr (29 %) es para buscar encuentros sexuales casuales (hook-ups). Entre los participantes que utilizan tanto Grindr como otro sitios de citas online, un mayor porcentaje estadísticamente significativo utilizó los sitios de citas online para encuentros casuales sexuales (42 %) comparado con Grindr (30 %). Un setenta porciento de los YMSM expresó su disposición para participar en programas de prevención del VIH con base en teléfonos inteligentes. El desarrollo y evaluación de aplicaciones para teléfonos inteligentes para el suministro de prevención del VIH tiene el potencial de involucrar a los YMSM en la programación de la prevención del VIH, que puede ser adaptada según los patrones y motivaciones de uso.",
"title": ""
},
{
"docid": "32b292c3ea5c95411a5e67d664d6ce30",
"text": "Many difficult combinatorial optimization problems have been modeled as static problems. However, in practice, many problems are dynamic and changing, while some decisions have to be made before all the design data are known. For example, in the Dynamic Vehicle Routing Problem (DVRP), new customer orders appear over time, and new routes must be reconfigured while executing the current solution. Montemanni et al. [1] considered a DVRP as an extension to the standard vehicle routing problem (VRP) by decomposing a DVRP as a sequence of static VRPs, and then solving them with an ant colony system (ACS) algorithm. This paper presents a genetic algorithm (GA) methodology for providing solutions for the DVRP model employed in [1]. The effectiveness of the proposed GA is evaluated using a set of benchmarks found in the literature. Compared with a tabu search approach implemented herein and the aforementioned ACS, the proposed GA methodology performs better in minimizing travel costs.",
"title": ""
},
{
"docid": "8a8c1099dfe0cf45746f11da7d6923d8",
"text": "The future of procedural content generation (PCG) lies beyond the dominant motivations of “replayability” and creating large environments for players to explore. This paper explores both the past and potential future for PCG, identifying five major lenses through which we can view PCG and its role in a game: data vs. process intensiveness, the interactive extent of the content, who has control over the generator, how many players interact with it, and the aesthetic purpose for PCG being used in the game. Using these lenses, the paper proposes several new research directions for PCG that require both deep technical research and innovative game design.",
"title": ""
},
{
"docid": "3eeaf56aaf9dda0f2b16c1c46f6c1c75",
"text": "In satellite earth station antenna systems there is an increasing demand for complex single aperture, multi-function and multi-frequency band capable feed systems. In this work, a multi band feed system (6/12 GHz) is described which employs quadrature junctions (QJ) and supports transmit and receive functionality in the C and Ku bands respectively. This feed system is designed for a 16.4 m diameter shaped cassegrain antenna. It is a single aperture, 4 port system with transmit capability in circular polarization (CP) mode over the 6.625-6.69 GHz band and receive in the linear polarization (LP) mode over the 12.1-12.3 GHz band",
"title": ""
},
{
"docid": "c744354fcc6115a83c916dcc71b381f4",
"text": "The spread of false rumours during emergencies can jeopardise the well-being of citizens as they are monitoring the stream of news from social media to stay abreast of the latest updates. In this paper, we describe the methodology we have developed within the PHEME project for the collection and sampling of conversational threads, as well as the tool we have developed to facilitate the annotation of these threads so as to identify rumourous ones. We describe the annotation task conducted on threads collected during the 2014 Ferguson unrest and we present and analyse our findings. Our results show that we can collect effectively social media rumours and identify multiple rumours associated with a range of stories that would have been hard to identify by relying on existing techniques that need manual input of rumour-specific keywords.",
"title": ""
},
{
"docid": "e227e21d9b0523fdff82ca898fea0403",
"text": "As computer games become more complex and consumers demand more sophisticated computer controlled agents, developers are required to place a greater emphasis on the artificial intelligence aspects of their games. One source of sophisticated AI techniques is the artificial intelligence research community. This paper discusses recent efforts by our group at the University of Michigan Artificial Intelligence Lab to apply state of the art artificial intelligence techniques to computer games. Our experience developing intelligent air combat agents for DARPA training exercises, described in John Laird's lecture at the 1998 Computer Game Developer's Conference, suggested that many principles and techniques from the research community are applicable to games. A more recent project, called the Soar/Games project, has followed up on this by developing agents for computer games, including Quake II and Descent 3. The result of these two research efforts is a partially implemented design of an artificial intelligence engine for games based on well established AI systems and techniques.",
"title": ""
},
{
"docid": "debe25489a0176c48c07d1f2d5b8513e",
"text": "In order to formulate a high-level understanding of driver behavior from massive naturalistic driving data, an effective approach is needed to automatically process or segregate data into low-level maneuvers. Besides traditional computer vision processing, this study addresses the lane-change detection problem by using vehicle dynamic signals (steering angle and vehicle speed) extracted from the CAN-bus, which is collected with 58 drivers around Dallas, TX area. After reviewing the literature, this study proposes a machine learning-based segmentation and classification algorithm, which is stratified into three stages. The first stage is preprocessing and prefiltering, which is intended to reduce noise and remove clear left and right turning events. Second, a spectral time-frequency analysis segmentation approach is employed to generalize all potential time-variant lane-change and lane-keeping candidates. The final stage compares two possible classification methods—1) dynamic time warping feature with k -nearest neighbor classifier and 2) hidden state sequence prediction with a combined hidden Markov model. The overall optimal classification accuracy can be obtained at 80.36% for lane-change-left and 83.22% for lane-change-right. The effectiveness and issues of failures are also discussed. With the availability of future large-scale naturalistic driving data, such as SHRP2, this proposed effective lane-change detection approach can further contribute to characterize both automatic route recognition as well as distracted driving state analysis.",
"title": ""
},
{
"docid": "b1b1af81e84e1f79a0193773a22199d4",
"text": "Layered multicast is an efficient technique to deliver video to heterogeneous receivers over wired and wireless networks. In this paper, we consider such a multicast system in which the server adapts the bandwidth and forward-error correction code (FEC) of each layer so as to maximize the overall video quality, given the heterogeneous client characteristics in terms of their end-to-end bandwidth, packet drop rate over the wired network, and bit-error rate in the wireless hop. In terms of FECs, we also study the value of a gateway which “transcodes” packet-level FECs to byte-level FECs before forwarding packets from the wired network to the wireless clients. We present an analysis of the system, propose an efficient algorithm on FEC allocation for the base layer, and formulate a dynamic program with a fast and accurate approximation for the joint bandwidth and FEC allocation of the enhancement layers. Our results show that a transcoding gateway performs only slightly better than the nontranscoding one in terms of end-to-end loss rate, and our allocation is effective in terms of FEC parity and bandwidth served to each user.",
"title": ""
},
{
"docid": "3dfb419706ae85d232753a085dc145f7",
"text": "This chapter describes the different steps of designing, building, simulating, and testing an intelligent flight control module for an increasingly popular unmanned aerial vehicle (UAV), known as a quadrotor. It presents an in-depth view of the modeling of the kinematics, dynamics, and control of such an interesting UAV. A quadrotor offers a challenging control problem due to its highly unstable nature. An effective control methodology is therefore needed for such a unique airborne vehicle. The chapter starts with a brief overview on the quadrotor's background and its applications, in light of its advantages. Comparisons with other UAVs are made to emphasize the versatile capabilities of this special design. For a better understanding of the vehicle's behavior, the quadrotor's kinematics and dynamics are then detailed. This yields the equations of motion, which are used later as a guideline for developing the proposed intelligent flight control scheme. In this chapter, fuzzy logic is adopted for building the flight controller of the quadrotor. It has been witnessed that fuzzy logic control offers several advantages over certain types of conventional control methods, specifically in dealing with highly nonlinear systems and modeling uncertainties. Two types of fuzzy inference engines are employed in the design of the flight controller, each of which is explained and evaluated. For testing the designed intelligent flight controller, a simulation environment was first developed. The simulations were made as realistic as possible by incorporating environmental disturbances such as wind gust and the ever-present sensor noise. The proposed controller was then tested on a real test-bed built specifically for this project. Both the simulator and the real quadrotor were later used for conducting different attitude stabilization experiments to evaluate the performance of the proposed control strategy. The controller's performance was also benchmarked against conventional control techniques such as input-output linearization, backstepping and sliding mode control strategies. Conclusions were then drawn based on the conducted experiments and their results.",
"title": ""
},
{
"docid": "479fbdcd776904e9ba20fd95b4acb267",
"text": "Tall building developments have been rapidly increasing worldwide. This paper reviews the evolution of tall building’s structural systems and the technological driving force behind tall building developments. For the primary structural systems, a new classification – interior structures and exterior structures – is presented. While most representative structural systems for tall buildings are discussed, the emphasis in this review paper is on current trends such as outrigger systems and diagrid structures. Auxiliary damping systems controlling building motion are also discussed. Further, contemporary “out-of-the-box” architectural design trends, such as aerodynamic and twisted forms, which directly or indirectly affect the structural performance of tall buildings, are reviewed. Finally, the future of structural developments in tall buildings is envisioned briefly.",
"title": ""
},
{
"docid": "bf5874dc1fc1c968d7c41eb573d8d04a",
"text": "As creativity is increasingly recognised as a vital component of entrepreneurship, researchers and educators struggle to reform enterprise pedagogy. To help in this effort, we use a personality test and open-ended interviews to explore creativity between two groups of entrepreneurship masters’ students: one at a business school and one at an engineering school. The findings indicate that both groups had high creative potential, but that engineering students channelled this into practical and incremental efforts whereas the business students were more speculative and had a clearer market focus. The findings are drawn on to make some suggestions for entrepreneurship education.",
"title": ""
},
{
"docid": "5ce00014f84277aca0a4b7dfefc01cbb",
"text": "The design of a planar dual-band wide-scan phased array is presented. The array uses novel dual-band comb-slot-loaded patch elements supporting two separate bands with a frequency ratio of 1.4:1. The antenna maintains consistent radiation patterns and incorporates a feeding configuration providing good bandwidths in both bands. The design has been experimentally validated with an X-band planar 9 × 9 array. The array supports wide-angle scanning up to a maximum of 60 ° and 50 ° at the low and high frequency bands respectively.",
"title": ""
},
{
"docid": "e9582d921b783a378e91c7b5ddaf9d16",
"text": "Pneumatic soft actuators produce flexion and meet the new needs of collaborative robotics, which is rapidly emerging in the industry landscape 4.0. The soft actuators are not only aimed at industrial progress, but their application ranges in the field of medicine and rehabilitation. Safety and reliability are the main requirements for coexistence and human-robot interaction; such requirements, together with the versatility and lightness, are the precious advantages that is offered by this new category of actuators. The objective is to develop an actuator with high compliance, low cost, high versatility and easy to produce, aimed at the realization of the fingers of a robotic hand that can faithfully reproduce the motion of a real hand. The proposed actuator is equipped with an intrinsic compliance thanks to the hyper-elastic silicone rubber used for its realization; the bending is allowed by the high compliance of the silicone and by a square-meshed gauze which contains the expansion and guides the movement through appropriate cuts in correspondence of the joints. A numerical model of the actuator is developed and an optimal configuration of the five fingers of the hand is achieved; finally, the index finger is built, on which the experimental validation tests are carried out.",
"title": ""
},
{
"docid": "81f9a52b6834095cd7be70b39af0e7f0",
"text": "In this paper we present BatchDB, an in-memory database engine designed for hybrid OLTP and OLAP workloads. BatchDB achieves good performance, provides a high level of data freshness, and minimizes load interaction between the transactional and analytical engines, thus enabling real time analysis over fresh data under tight SLAs for both OLTP and OLAP workloads.\n BatchDB relies on primary-secondary replication with dedicated replicas, each optimized for a particular workload type (OLTP, OLAP), and a light-weight propagation of transactional updates. The evaluation shows that for standard TPC-C and TPC-H benchmarks, BatchDB can achieve competitive performance to specialized engines for the corresponding transactional and analytical workloads, while providing a level of performance isolation and predictable runtime for hybrid workload mixes (OLTP+OLAP) otherwise unmet by existing solutions.",
"title": ""
},
{
"docid": "3eec1e9abcb677a4bc8f054fa8827f4f",
"text": "We present a neural semantic parser that translates natural language questions into executable SQL queries with two key ideas. First, we develop an encoder-decoder model, where the decoder uses a simple type system of SQL to constraint the output prediction, and propose a value-based loss when copying from input tokens. Second, we explore using the execution semantics of SQL to repair decoded programs that result in runtime error or return empty result. We propose two modelagnostics repair approaches, an ensemble model and a local program repair, and demonstrate their effectiveness over the original model. We evaluate our model on the WikiSQL dataset and show that our model achieves close to state-of-the-art results with lesser model complexity.",
"title": ""
},
{
"docid": "241f5a88f53c929cc11ce0edce191704",
"text": "Enabled by mobile and wearable technology, personal health data delivers immense and increasing value for healthcare, benefiting both care providers and medical research. The secure and convenient sharing of personal health data is crucial to the improvement of the interaction and collaboration of the healthcare industry. Faced with the potential privacy issues and vulnerabilities existing in current personal health data storage and sharing systems, as well as the concept of self-sovereign data ownership, we propose an innovative user-centric health data sharing solution by utilizing a decentralized and permissioned blockchain to protect privacy using channel formation scheme and enhance the identity management using the membership service supported by the blockchain. A mobile application is deployed to collect health data from personal wearable devices, manual input, and medical devices, and synchronize data to the cloud for data sharing with healthcare providers and health insurance companies. To preserve the integrity of health data, within each record, a proof of integrity and validation is permanently retrievable from cloud database and is anchored to the blockchain network. Moreover, for scalable and performance considerations, we adopt a tree-based data processing and batching method to handle large data sets of personal health data collected and uploaded by the mobile platform.",
"title": ""
}
] | scidocsrr |
4c5c9a90a4890c72422be643dbd864ce | Operation of Compressor and Electronic Expansion Valve via Different Controllers | [
{
"docid": "8b3ad3d48da22c529e65c26447265372",
"text": "It is demonstrated that neural networks can be used effectively for the identification and control of nonlinear dynamical systems. The emphasis is on models for both identification and control. Static and dynamic backpropagation methods for the adjustment of parameters are discussed. In the models that are introduced, multilayer and recurrent networks are interconnected in novel configurations, and hence there is a real need to study them in a unified fashion. Simulation results reveal that the identification and adaptive control schemes suggested are practically feasible. Basic concepts and definitions are introduced throughout, and theoretical questions that have to be addressed are also described.",
"title": ""
}
] | [
{
"docid": "fb11b937a3c07fd4b76cda1ed1eadc07",
"text": "Depth information plays an important role in a variety of applications, including manufacturing, medical imaging, computer vision, graphics, and virtual/augmented reality (VR/AR). Depth sensing has thus attracted sustained attention from both academia and industry communities for decades. Mainstream depth cameras can be divided into three categories: stereo, time of flight (ToF), and structured light. Stereo cameras require no active illumination and can be used outdoors, but they are fragile for homogeneous surfaces. Recently, off-the-shelf light field cameras have demonstrated improved depth estimation capability with a multiview stereo configuration. ToF cameras operate at a high frame rate and fit time-critical scenarios well, but they are susceptible to noise and limited to low resolution [3]. Structured light cameras can produce high-resolution, high-accuracy depth, provided that a number of patterns are sequentially used. Due to its promising and reliable performance, the structured light approach has been widely adopted for three-dimensional (3-D) scanning purposes. However, achieving real-time depth with structured light either requires highspeed (and thus expensive) hardware or sacrifices depth resolution and accuracy by using a single pattern instead.",
"title": ""
},
{
"docid": "af628819a5392543266668b94c579a96",
"text": "Elephantopus scaber is an ethnomedicinal plant used by the Zhuang people in Southwest China to treat headaches, colds, diarrhea, hepatitis, and bronchitis. A new δ -truxinate derivative, ethyl, methyl 3,4,3',4'-tetrahydroxy- δ -truxinate (1), was isolated from the ethyl acetate extract of the entire plant, along with 4 known compounds. The antioxidant activity of these 5 compounds was determined by ABTS radical scavenging assay. Compound 1 was also tested for its cytotoxicity effect against HepG2 by MTT assay (IC50 = 60 μ M), and its potential anti-inflammatory, antibiotic, and antitumor bioactivities were predicted using target fishing method software.",
"title": ""
},
{
"docid": "bdefc8bcd92aefe966d4fcd98ab1fdbb",
"text": "The automatic identification system (AIS) tracks vessel movement by means of electronic exchange of navigation data between vessels, with onboard transceiver, terrestrial, and/or satellite base stations. The gathered data contain a wealth of information useful for maritime safety, security, and efficiency. Because of the close relationship between data and methodology in marine data mining and the importance of both of them in marine intelligence research, this paper surveys AIS data sources and relevant aspects of navigation in which such data are or could be exploited for safety of seafaring, namely traffic anomaly detection, route estimation, collision prediction, and path planning.",
"title": ""
},
{
"docid": "f14272db4779239dc7d392ef7dfac52d",
"text": "3 The Rotating Calipers Algorithm 3 3.1 Computing the Initial Rectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 3.2 Updating the Rectangle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 3.2.1 Distinct Supporting Vertices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 3.2.2 Duplicate Supporting Vertices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 3.2.3 Multiple Polygon Edges Attain Minimum Angle . . . . . . . . . . . . . . . . . . . . . 8 3.2.4 The General Update Step . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10",
"title": ""
},
{
"docid": "19d4662287a5c3ce1cef85fa601b74ba",
"text": "This paper compares two approaches in identifying outliers in multivariate datasets; Mahalanobis distance (MD) and robust distance (RD). MD has been known suffering from masking and swamping effects and RD is an approach that was developed to overcome problems that arise in MD. There are two purposes of this paper, first is to identify outliers using MD and RD and the second is to show that RD performs better than MD in identifying outliers. An observation is classified as an outlier if MD or RD is larger than a cut-off value. Outlier generating model is used to generate a set of data and MD and RD are computed from this set of data. The results showed that RD can identify outliers better than MD. However, in non-outliers data the performance for both approaches are similar. The results for RD also showed that RD can identify multivariate outliers much better when the number of dimension is large.",
"title": ""
},
{
"docid": "33d530e8e74cd5b5dbfa2035a7608664",
"text": "This paper presents an area-efficient ultra-low-power 32 kHz clock source for low power wireless communication systems using a temperature-compensated charge-pump-based digitally controlled oscillator (DCO). A highly efficient digital calibration method is proposed to achieve frequency stability over process variation and temperature drifts. This calibration method locks the DCO's output frequency to the reference clock of the wireless communication system during its active state. The introduced calibration scheme offers high jitter immunity and short locking periods overcoming frequency calibration errors for typical ultra-low-power DCO's. The circuit area of the proposed ultra-low-power clock source is 100μm × 140μm in a 130nm RF CMOS technology. In measurements the proposed ultra-low-power clock source achieves a frequency stability of 10 ppm/°C from 10 °C to 100 °C for temperature drifts of less than 1 °C/s with 80nW power consumption.",
"title": ""
},
{
"docid": "5b16933905d36ba54ab74743251d7ca7",
"text": "The explosive growth of the user-generated content on the Web has offered a rich data source for mining opinions. However, the large number of diverse review sources challenges the individual users and organizations on how to use the opinion information effectively. Therefore, automated opinion mining and summarization techniques have become increasingly important. Different from previous approaches that have mostly treated product feature and opinion extraction as two independent tasks, we merge them together in a unified process by using probabilistic models. Specifically, we treat the problem of product feature and opinion extraction as a sequence labeling task and adopt Conditional Random Fields models to accomplish it. As part of our work, we develop a computational approach to construct domain specific sentiment lexicon by combining semi-structured reviews with general sentiment lexicon, which helps to identify the sentiment orientations of opinions. Experimental results on two real world datasets show that the proposed method is effective.",
"title": ""
},
{
"docid": "4b5c5b76d7370a82f96f36659cd63850",
"text": "For force control of robot and collision detection with humans, robots that has joint torque sensors have been developed. However, existing torque sensors cannot measure correct torque because of crosstalk error. In order to solve this problem, we proposed a novel torque sensor that can measure the pure torque without crosstalk. The hexaform of the proposed sensor with truss structure increase deformation of the sensor and restoration, and the Wheatstone bridge circuit of strain gauge removes crosstalk error. Sensor performance is verified with FEM analysis.",
"title": ""
},
{
"docid": "f09733894d94052707ed768aea8d26e6",
"text": "The aim of this paper is to investigate the rules and constraints of code-switching (CS) in Hindi-English mixed language data. In this paper, we’ll discuss how we collected the mixed language corpus. This corpus is primarily made up of student interview speech. The speech was manually transcribed and verified by bilingual speakers of Hindi and English. The code-switching cases in the corpus are discussed and the reasons for code-switching are explained.",
"title": ""
},
{
"docid": "2ca050b562ed14688dd9d68b454928e0",
"text": "Electronic waste (e-waste) is one of the fastest-growing pollution problems worldwide given the presence if a variety of toxic substances which can contaminate the environment and threaten human health, if disposal protocols are not meticulously managed. This paper presents an overview of toxic substances present in e-waste, their potential environmental and human health impacts together with management strategies currently being used in certain countries. Several tools including life cycle assessment (LCA), material flow analysis (MFA), multi criteria analysis (MCA) and extended producer responsibility (EPR) have been developed to manage e-wastes especially in developed countries. The key to success in terms of e-waste management is to develop eco-design devices, properly collect e-waste, recover and recycle material by safe methods, dispose of e-waste by suitable techniques, forbid the transfer of used electronic devices to developing countries, and raise awareness of the impact of e-waste. No single tool is adequate but together they can complement each other to solve this issue. A national scheme such as EPR is a good policy in solving the growing e-waste problems.",
"title": ""
},
{
"docid": "ced98c32f887001d40e783ab7b294e1a",
"text": "This paper proposes a two-layer High Dynamic Range (HDR) coding scheme using a new tone mapping. Our tone mapping method transforms an HDR image onto a Low Dynamic Range (LDR) image by using a base map that is a smoothed version of the HDR luminance. In our scheme, the HDR image can be reconstructed from the tone mapped LDR image. Our method makes use of this property to realize a two-layer HDR coding by encoding both of the tone mapped LDR image and the base map. This paper validates its effectiveness of our approach through some experiments.",
"title": ""
},
{
"docid": "e40a60aec86433eaac618e6b391e2a57",
"text": "Marine microalgae have been used for a long time as food for humans, such as Arthrospira (formerly, Spirulina), and for animals in aquaculture. The biomass of these microalgae and the compounds they produce have been shown to possess several biological applications with numerous health benefits. The present review puts up-to-date the research on the biological activities and applications of polysaccharides, active biocompounds synthesized by marine unicellular algae, which are, most of the times, released into the surrounding medium (exo- or extracellular polysaccharides, EPS). It goes through the most studied activities of sulphated polysaccharides (sPS) or their derivatives, but also highlights lesser known applications as hypolipidaemic or hypoglycaemic, or as biolubricant agents and drag-reducers. Therefore, the great potentials of sPS from marine microalgae to be used as nutraceuticals, therapeutic agents, cosmetics, or in other areas, such as engineering, are approached in this review.",
"title": ""
},
{
"docid": "390b0dbd01e88fec7f7a4b59cb753978",
"text": "In this paper, we propose a segmentation method based on normalized cut and superpixels. The method relies on color and texture cues for fast computation and efficient use of memory. The method is used for food image segmentation as part of a mobile food record system we have developed for dietary assessment and management. The accurate estimate of nutrients relies on correctly labelled food items and sufficiently well-segmented regions. Our method achieves competitive results using the Berkeley Segmentation Dataset and outperforms some of the most popular techniques in a food image dataset.",
"title": ""
},
{
"docid": "9e5aa162d1eecefe11abe5ecefbc11e3",
"text": "Efficient algorithms for 3D character control in continuous control setting remain an open problem in spite of the remarkable recent advances in the field. We present a sampling-based model-predictive controller that comes in the form of a Monte Carlo tree search (MCTS). The tree search utilizes information from multiple sources including two machine learning models. This allows rapid development of complex skills such as 3D humanoid locomotion with less than a million simulation steps, in less than a minute of computing on a modest personal computer. We demonstrate locomotion of 3D characters with varying topologies under disturbances such as heavy projectile hits and abruptly changing target direction. In this paper we also present a new way to combine information from the various sources such that minimal amount of information is lost. We furthermore extend the neural network, involved in the algorithm, to represent stochastic policies. Our approach yields a robust control algorithm that is easy to use. While learning, the algorithm runs in near real-time, and after learning the sampling budget can be reduced for real-time operation.",
"title": ""
},
{
"docid": "b5df59d926ca4778c306b255d60870a1",
"text": "In this paper the transcription and evaluation of the corpus DIMEx100 for Mexican Spanish is presented. First we describe the corpus and explain the linguistic and computational motivation for its design and collection process; then, the phonetic antecedents and the alphabet adopted for the transcription task are presented; the corpus has been transcribed at three different granularity levels, which are also specified in detail. The corpus statistics for each transcription level are also presented. A set of phonetic rules describing phonetic context observed empirically in spontaneous conversation is also validated with the transcription. The corpus has been used for the construction of acoustic models and a phonetic dictionary for the construction of a speech recognition system. Initial performance results suggest that the data can be used to train good quality acoustic models.",
"title": ""
},
{
"docid": "e777bb21d57393a4848fcb04c6d5b913",
"text": "A 2.5 GHz fully integrated voltage controlled oscillator (VCO) for wireless application has been designed in a 0.35μm CMOS technology. A method for compensating the effect of temperature on the carrier oscillation frequency has been presented in this work. We compare also different VCOs topologies in order to select one with low phase noise, low supply sensitivity and large tuning frequency. Good results are obtained with a simple NMOS –Gm VCO. This proposed VCO has a wide operating range from 300 MHz with a good linearity between the output frequency and the control input voltage, with a temperature coefficient of -5 ppm/°C from 20°C to 120°C range. The phase noise is about -135.2dBc/Hz at 1MHz from the carrier with a power consumption of 5mW.",
"title": ""
},
{
"docid": "cb641fc639b86abadec4f85efc226c14",
"text": "The modernization of the US electric power infrastructure, especially in lieu of its aging, overstressed networks; shifts in social, energy and environmental policies, and also new vulnerabilities, is a national concern. Our system are required to be more adaptive and secure more than every before. Consumers are also demanding increased power quality and reliability of supply and delivery. As such, power industries, government and national laboratories and consortia have developed increased interest in what is now called the Smart Grid of the future. The paper outlines Smart Grid intelligent functions that advance interactions of agents such as telecommunication, control, and optimization to achieve adaptability, self-healing, efficiency and reliability of power systems. The author also presents a special case for the development of Dynamic Stochastic Optimal Power Flow (DSOPF) technology as a tool needed in Smart Grid design. The integration of DSOPF to achieve the design goals with advanced DMS capabilities are discussed herein. This reference paper also outlines research focus for developing next generation of advance tools for efficient and flexible power systems operation and control.",
"title": ""
},
{
"docid": "29ce7251e5237b0666cef2aee7167126",
"text": "Chinese characters have a huge set of character categories, more than 20, 000 and the number is still increasing as more and more novel characters continue being created. However, the enormous characters can be decomposed into a compact set of about 500 fundamental and structural radicals. This paper introduces a novel radical analysis network (RAN) to recognize printed Chinese characters by identifying radicals and analyzing two-dimensional spatial structures among them. The proposed RAN first extracts visual features from input by employing convolutional neural networks as an encoder. Then a decoder based on recurrent neural networks is employed, aiming at generating captions of Chinese characters by detecting radicals and two-dimensional structures through a spatial attention mechanism. The manner of treating a Chinese character as a composition of radicals rather than a single character class largely reduces the size of vocabulary and enables RAN to possess the ability of recognizing unseen Chinese character classes, namely zero-shot learning.",
"title": ""
},
{
"docid": "33285ad9f7bc6e33b48e3f1e27a1ccc9",
"text": "Information visualization is a very important tool in BigData analytics. BigData, structured and unstructured data which contains images, videos, texts, audio and other forms of data, collected from multiple datasets, is too big, too complex and moves too fast to analyse using traditional methods. This has given rise to two issues; 1) how to reduce multidimensional data without the loss of any data patterns for multiple datasets, 2) how to visualize BigData patterns for analysis. In this paper, we have classified the BigData attributes into `5Ws' data dimensions, and then established a `5Ws' density approach that represents the characteristics of data flow patterns. We use parallel coordinates to display the `5Ws' sending and receiving densities which provide more analytic features for BigData analysis. The experiment shows that this new model with parallel coordinate visualization can be efficiently used for BigData analysis and visualization.",
"title": ""
},
{
"docid": "6844deb3346756b1858778a4cec26098",
"text": "Deep Learning has recently been introduced as a new alternative to perform Side-Channel analysis [1]. Until now, studies have been focused on applying Deep Learning techniques to perform Profiled SideChannel attacks where an attacker has a full control of a profiling device and is able to collect a large amount of traces for different key values in order to characterize the device leakage prior to the attack. In this paper we introduce a new method to apply Deep Learning techniques in a Non-Profiled context, where an attacker can only collect a limited number of side-channel traces for a fixed unknown key value from a closed device. We show that by combining key guesses with observations of Deep Learning metrics, it is possible to recover information about the secret key. The main interest of this method, is that it is possible to use the power of Deep Learning and Neural Networks in a Non-Profiled scenario. We show that it is possible to exploit the translation-invariance property of Convolutional Neural Networks [2] against de-synchronized traces and use Data Augmentation techniques also during Non-Profiled side-channel attacks. Additionally, the present work shows that in some conditions, this method can outperform classic Non-Profiled attacks as Correlation Power Analysis. We also highlight that it is possible to target masked implementations without leakages combination pre-preprocessing and with less assumptions than classic high-order attacks. To illustrate these properties, we present a series of experiments performed on simulated data and real traces collected from the ChipWhisperer board and from the ASCAD database [3]. The results of our experiments demonstrate the interests of this new method and show that this attack can be performed in practice.",
"title": ""
}
] | scidocsrr |
bc66ec751e7ce368347c821c4b761d56 | Smart Cars on Smart Roads : Problems of Control | [
{
"docid": "436900539406faa9ff34c1af12b6348d",
"text": "The accomplishments to date on the development of automatic vehicle control (AVC) technology in the Program on Advanced Technology for the Highway (PATH) at the University of California, Berkeley, are summarized. The basic prqfiiples and assumptions underlying the PATH work are identified, ‘followed by explanations of the work on automating vehicle lateral (steering) and longitudinal (spacing and speed) control. For both lateral and longitudinal control, the modeling of plant dynamics is described first, followed by development of the additional subsystems needed (communications, reference/sensor systems) and the derivation of the control laws. Plans for testing on vehicles in both near and long term are then discussed.",
"title": ""
}
] | [
{
"docid": "dfbe5a92d45d4081910b868d78a904d0",
"text": "Actuation is essential for artificial machines to interact with their surrounding environment and to accomplish the functions for which they are designed. Over the past few decades, there has been considerable progress in developing new actuation technologies. However, controlled motion still represents a considerable bottleneck for many applications and hampers the development of advanced robots, especially at small length scales. Nature has solved this problem using molecular motors that, through living cells, are assembled into multiscale ensembles with integrated control systems. These systems can scale force production from piconewtons up to kilonewtons. By leveraging the performance of living cells and tissues and directly interfacing them with artificial components, it should be possible to exploit the intricacy and metabolic efficiency of biological actuation within artificial machines. We provide a survey of important advances in this biohybrid actuation paradigm.",
"title": ""
},
{
"docid": "5f7aa812dc718de9508b083320c67e8a",
"text": "High power multi-level converters are deemed as the mainstay power conversion technology for renewable energy systems including the PV farm, energy storage system and electrical vehicle charge station. This paper is focused on the modeling and design of coupled and integrated magnetics in three-level DC/DC converter with multi-phase interleaved structure. The interleaved phase legs offer the benefit of output current ripple reduction, while inversed coupled inductors can suppress the circulating current between phase legs. To further reduce the magnetic volume, the four inductors in two-phase three-level DC/DC converter are integrated into one common structure, incorporating the negative coupling effects. Because of the nonlinearity of the inductor coupling, the equivalent circuit model is developed for the proposed interleaving structure to facilitate the design optimization of the integrated system. The model identifies the existence of multiple equivalent inductances during one switching cycle. A combination of them determines the inductor current ripple and dynamics of the system. By virtue of inverse coupling and means of controlling the coupling coefficients, one can minimize the current ripple and the unwanted circulating current. The fabricated prototype of the integrated coupled inductors is tested with a two-phase three-level DC/DC converter hardware, showing its good current ripple reduction performance as designed.",
"title": ""
},
{
"docid": "7b89e1ac1dcdcc1f3897e672fd934a40",
"text": "A 61-year-old female with long-standing constipation presented with increasing abdominal distention, pain, nausea and weight loss. She had been previously treated with intermittent fiber supplements and osmotic laxatives for chronic constipation. She did not use medications known to cause delayed bowel transit. Examination revealed a distended abdomen, hard stool in the rectum, and audible heart sounds throughout the abdomen. A CT scan showed severe colonic distention from stool (Fig. 1). She had no mechanical, infectious, metabolic, or endocrine-related etiology for constipation. After failing conservative management including laxative suppositories, enemas, manual disimpaction, methylnaltrexone and neostigmine, the patient underwent a colectomy with Hartmann pouch and terminal ileostomy. The removed colon measured 25.5 cm in largest diameter and weighed over 15 kg (Fig. 2). The histopathological examination demonstrated no neuronal degeneration, apoptosis or agangliosis to suggest Hirschprung’s disease or another intrinsic neuro-muscular disorder. Idiopathic megacolon is a relatively uncommon condition usually associated with slow-transit constipation. Although medical therapy is frequently ineffective, rectal laxatives, gentle enemas, and manual disimpaction can be attempted. Oral osmotic or secretory laxatives as well as unprepped lower endoscopy are relative contraindications as they may precipitate a perforation. Surgical therapy is often required as most cases are refractory to medical therapy.",
"title": ""
},
{
"docid": "8f7a27b88a29fd915e198962d8cd17ad",
"text": "For embedded high resolution successive approximation ADCs, it is necessary to determine the performance limitation of the CMOS process used for the design. This paper presents a modelling technique for major limitations, i.e. capacitor mismatch and non-linearity effects. The model is besed on Monte Carlo simulations applied to an analytical description of the ADC. Additional effects like charge injection and parasitic capacitance are included. The analytical basis covers different architectures with a fully binary weighted or series-split capacitor array. when comparing our analysis and measurement results to several conventional approaches, a significantly more realistic estimation of the attainable resolution is achieved. The presented results provide guidance in choosing process and circuit structure for the design of SAR ADCs. The model also enbles reliable capacitor sizing early in the design process, i.e. well before actual layout implementation.",
"title": ""
},
{
"docid": "c1d436c01088c2295b35a1a37e922bee",
"text": "Tourism is an important part of national economy. On the other hand it can also be a source of some negative externalities. These are mainly environmental externalities, resulting in increased pollution, aesthetic or architectural damages. High concentration of visitors may also lead to increased crime, or aggressiveness. These may have negative effects on quality of life of residents and negative experience of visitors. The paper deals with the influence of tourism on destination environment. It highlights the necessity of sustainable forms of tourism and activities to prevent negative implication of tourism, such as education activities and tourism monitoring. Key-words: Tourism, Mass Tourism, Development, Sustainability, Tourism Impact, Monitoring.",
"title": ""
},
{
"docid": "afe44962393bf0d250571f7cd7e82677",
"text": "Analytics is a field of research and practice that aims to reveal new patterns of information through the collection of large sets of data held in previously distinct sources. Growing interest in data and analytics in education, teaching, and learning raises the priority for increased, high-quality research into the models, methods, technologies, and impact of analytics. The challenges of applying analytics on academic and ethical reliability to control over data. The other challenge is that the educational landscape is extremely turbulent at present, and key challenge is the appropriate collection, protection and use of large data sets. This paper brings out challenges of multi various pertaining to the domain by offering a big data model for higher education system.",
"title": ""
},
{
"docid": "004da753abb6cb84f1ba34cfb4dacc67",
"text": "The aim of this study was to present a method for endodontic management of a maxillary first molar with unusual C-shaped morphology of the buccal root verified by cone-beam computed tomography (CBCT) images. This rare anatomical variation was confirmed using CBCT, and nonsurgical endodontic treatment was performed by meticulous evaluation of the pulpal floor. Posttreatment image revealed 3 independent canals in the buccal root obturated efficiently to the accepted lengths in all 3 canals. Our study describes a unique C-shaped variation of the root canal system in a maxillary first molar, involving the 3 buccal canals. In addition, our study highlights the usefulness of CBCT imaging for accurate diagnosis and management of this unusual canal morphology.",
"title": ""
},
{
"docid": "89432b112f153319d3a2a816c59782e3",
"text": "The Eyelink Toolbox software supports the measurement of eye movements. The toolbox provides an interface between a high-level interpreted language (MATLAB), a visual display programming toolbox (Psychophysics Toolbox), and a video-based eyetracker (Eyelink). The Eyelink Toolbox enables experimenters to measure eye movements while simultaneously executing the stimulus presentation routines provided by the Psychophysics Toolbox. Example programs are included with the toolbox distribution. Information on the Eyelink Toolbox can be found at http://psychtoolbox.org/.",
"title": ""
},
{
"docid": "2d6225b20cf13d2974ce78877642a2f7",
"text": "Low rank and sparse representation based methods, which make few specific assumptions about the background, have recently attracted wide attention in background modeling. With these methods, moving objects in the scene are modeled as pixel-wised sparse outliers. However, in many practical scenarios, the distributions of these moving parts are not truly pixel-wised sparse but structurally sparse. Meanwhile a robust analysis mechanism is required to handle background regions or foreground movements with varying scales. Based on these two observations, we first introduce a class of structured sparsity-inducing norms to model moving objects in videos. In our approach, we regard the observed sequence as being constituted of two terms, a low-rank matrix (background) and a structured sparse outlier matrix (foreground). Next, in virtue of adaptive parameters for dynamic videos, we propose a saliency measurement to dynamically estimate the support of the foreground. Experiments on challenging well known data sets demonstrate that the proposed approach outperforms the state-of-the-art methods and works effectively on a wide range of complex videos.",
"title": ""
},
{
"docid": "f53d8be1ec89cb8a323388496d45dcd0",
"text": "While Processing-in-Memory has been investigated for decades, it has not been embraced commercially. A number of emerging technologies have renewed interest in this topic. In particular, the emergence of 3D stacking and the imminent release of Micron's Hybrid Memory Cube device have made it more practical to move computation near memory. However, the literature is missing a detailed analysis of a killer application that can leverage a Near Data Computing (NDC) architecture. This paper focuses on in-memory MapReduce workloads that are commercially important and are especially suitable for NDC because of their embarrassing parallelism and largely localized memory accesses. The NDC architecture incorporates several simple processing cores on a separate, non-memory die in a 3D-stacked memory package; these cores can perform Map operations with efficient memory access and without hitting the bandwidth wall. This paper describes and evaluates a number of key elements necessary in realizing efficient NDC operation: (i) low-EPI cores, (ii) long daisy chains of memory devices, (iii) the dynamic activation of cores and SerDes links. Compared to a baseline that is heavily optimized for MapReduce execution, the NDC design yields up to 15X reduction in execution time and 18X reduction in system energy.",
"title": ""
},
{
"docid": "08c26880862b09e81acc1cd99904fded",
"text": "Efficient use of high speed hardware requires operating system components be customized to the application workload. Our general purpose operating systems are ill-suited for this task. We present EbbRT, a framework for constructing per-application library operating systems for cloud applications. The primary objective of EbbRT is to enable highperformance in a tractable and maintainable fashion. This paper describes the design and implementation of EbbRT, and evaluates its ability to improve the performance of common cloud applications. The evaluation of the EbbRT prototype demonstrates memcached, run within a VM, can outperform memcached run on an unvirtualized Linux. The prototype evaluation also demonstrates an 14% performance improvement of a V8 JavaScript engine benchmark, and a node.js webserver that achieves a 50% reduction in 99th percentile latency compared to it run on Linux.",
"title": ""
},
{
"docid": "52dbfe369d1875c402220692ef985bec",
"text": "Geographically annotated social media is extremely valuable for modern information retrieval. However, when researchers can only access publicly-visible data, one quickly finds that social media users rarely publish location information. In this work, we provide a method which can geolocate the overwhelming majority of active Twitter users, independent of their location sharing preferences, using only publicly-visible Twitter data. Our method infers an unknown user's location by examining their friend's locations. We frame the geotagging problem as an optimization over a social network with a total variation-based objective and provide a scalable and distributed algorithm for its solution. Furthermore, we show how a robust estimate of the geographic dispersion of each user's ego network can be used as a per-user accuracy measure which is effective at removing outlying errors. Leave-many-out evaluation shows that our method is able to infer location for 101, 846, 236 Twitter users at a median error of 6.38 km, allowing us to geotag over 80% of public tweets.",
"title": ""
},
{
"docid": "1967de1be0b095b4a59a5bb0fdc403c0",
"text": "As the popularity of content sharing websites has increased, they have become targets for spam, phishing and the distribution of malware. On YouTube, the facility for users to post comments can be used by spam campaigns to direct unsuspecting users to malicious third-party websites. In this paper, we demonstrate how such campaigns can be tracked over time using network motif profiling, i.e. by tracking counts of indicative network motifs. By considering all motifs of up to five nodes, we identify discriminating motifs that reveal two distinctly different spam campaign strategies, and present an evaluation that tracks two corresponding active campaigns.",
"title": ""
},
{
"docid": "33c449dc56b7f844e1582bd61d87a8a4",
"text": "We can determine whether two texts are paraphrases of each other by finding out the extent to which the texts are similar. The typical lexical matching technique works by matching the sequence of tokens between the texts to recognize paraphrases, and fails when different words are used to convey the same meaning. We can improve this simple method by combining lexical with syntactic or semantic representations of the input texts. The present work makes use of syntactical information in the texts and computes the similarity between them using word similarity measures based on WordNet and lexical databases. The texts are converted into a unified semantic structural model through which the semantic similarity of the texts is obtained. An approach is presented to assess the semantic similarity and the results of applying this approach is evaluated using the Microsoft Research Paraphrase (MSRP) Corpus.",
"title": ""
},
{
"docid": "5621d7df640dbe3d757ebb600486def9",
"text": "Dynamic spectrum access is the key to solving worldwide spectrum shortage. The open wireless medium subjects DSA systems to unauthorized spectrum use by illegitimate users. This paper presents SpecGuard, the first crowdsourced spectrum misuse detection framework for DSA systems. In SpecGuard, a transmitter is required to embed a spectrum permit into its physical-layer signals, which can be decoded and verified by ubiquitous mobile users. We propose three novel schemes for embedding and detecting a spectrum permit at the physical layer. Detailed theoretical analyses, MATLAB simulations, and USRP experiments confirm that our schemes can achieve correct, low-intrusive, and fast spectrum misuse detection.",
"title": ""
},
{
"docid": "bab246f8b15931501049862066fde77f",
"text": "The upcoming Internet of Things will introduce large sensor networks including devices with very different propagation characteristics and power consumption demands. 5G aims to fulfill these requirements by demanding a battery lifetime of at least 10 years. To integrate smart devices that are located in challenging propagation conditions, IoT communication technologies furthermore have to support very deep coverage. NB-IoT and eMTC are designed to meet these requirements and thus paving the way to 5G. With the power saving options extended Discontinuous Reception and Power Saving Mode as well as the usage of large numbers of repetitions, NB-IoT and eMTC introduce new techniques to meet the 5G IoT requirements. In this paper, the performance of NB-IoT and eMTC is evaluated. Therefore, data rate, power consumption, latency and spectral efficiency are examined in different coverage conditions. Although both technologies use the same power saving techniques as well as repetitions to extend the communication range, the analysis reveals a different performance in the context of data size, rate and coupling loss. While eMTC comes with a 4% better battery lifetime than NB-IoT when considering 144 dB coupling loss, NB-IoT battery lifetime raises to 18% better performance in 164 dB coupling loss scenarios. The overall analysis shows that in coverage areas with a coupling loss of 155 dB or less, eMTC performs better, but requires much more bandwidth. Taking the spectral efficiency into account, NB-IoT is in all evaluated scenarios the better choice and more suitable for future networks with massive numbers of devices.",
"title": ""
},
{
"docid": "ac82ad870c787e759d08b1a80dc51bd2",
"text": "We consider supervised learning in the presence of very many irrelevant features, and study two different regularization methods for preventing overfitting. Focusing on logistic regression, we show that using L1 regularization of the parameters, the sample complexity (i.e., the number of training examples required to learn \"well,\") grows only logarithmically in the number of irrelevant features. This logarithmic rate matches the best known bounds for feature selection, and indicates that L1 regularized logistic regression can be effective even if there are exponentially many irrelevant features as there are training examples. We also give a lower-bound showing that any rotationally invariant algorithm---including logistic regression with L2 regularization, SVMs, and neural networks trained by backpropagation---has a worst case sample complexity that grows at least linearly in the number of irrelevant features.",
"title": ""
},
{
"docid": "87c09def017d5e32f06a887e5d06b0ff",
"text": "A blade element momentum theory propeller model is coupled with a commercial RANS solver. This allows the fully appended self propulsion of the autonomous underwater vehicle Autosub 3 to be considered. The quasi-steady propeller model has been developed to allow for circumferential and radial variations in axial and tangential inflow. The non-uniform inflow is due to control surface deflections and the bow-down pitch of the vehicle in cruise condition. The influence of propeller blade Reynolds number is included through the use of appropriate sectional lift and drag coefficients. Simulations have been performed over the vehicles operational speed range (Re = 6.8× 10 to 13.5× 10). A workstation is used for the calculations with mesh sizes up to 2x10 elements. Grid uncertainty is calculated to be 3.07% for the wake fraction. The initial comparisons with in service data show that the coupled RANS-BEMT simulation under predicts the drag of the vehicle and consequently the required propeller rpm. However, when an appropriate correction is made for the effect on resistance of various protruding sensors the predicted propulsor rpm matches well with that of in-service rpm measurements for vessel speeds (1m/s 2m/s). The developed analysis captures the important influence of the propeller blade and hull Reynolds number on overall system efficiency.",
"title": ""
},
{
"docid": "57bec1f2ee904f953463e4e41e2cb688",
"text": "Graph embedding is an important branch in Data Mining and Machine Learning, and most of recent studies are focused on preserving the hierarchical structure with less dimensions. One of such models, called Poincare Embedding, achieves the goal by using Poincare Ball model to embed hierarchical structure in hyperbolic space instead of traditionally used Euclidean space. However, Poincare Embedding suffers from two major problems: (1) performance drops as depth of nodes increases since nodes tend to lay at the boundary; (2) the embedding model is constrained with pre-constructed structures and cannot be easily extended. In this paper, we first raise several techniques to overcome the problem of low performance for deep nodes, such as using partial structure, adding regularization, and exploring sibling relations in the structure. Then we also extend the Poincare Embedding model by extracting information from text corpus and propose a joint embedding model with Poincare Embedding and Word2vec.",
"title": ""
},
{
"docid": "6228498fed5b26c0def578251aa1c749",
"text": "Observation-Level Interaction (OLI) is a sensemaking technique relying upon the interactive semantic exploration of data. By manipulating data items within a visualization, users provide feedback to an underlying mathematical model that projects multidimensional data into a meaningful two-dimensional representation. In this work, we propose, implement, and evaluate an OLI model which explicitly defines clusters within this data projection. These clusters provide targets against which data values can be manipulated. The result is a cooperative framework in which the layout of the data affects the clusters, while user-driven interactions with the clusters affect the layout of the data points. Additionally, this model addresses the OLI \"with respect to what\" problem by providing a clear set of clusters against which interaction targets are judged and computed.",
"title": ""
}
] | scidocsrr |
a6490c57d5ff74f170b49165fa9ec1de | Cooperative Co-evolution for large scale optimization through more frequent random grouping | [
{
"docid": "d099cf0b4a74ddb018775b524ec92788",
"text": "This report proposes 15 large-scale benchmark problems as an extension to the existing CEC’2010 large-scale global optimization benchmark suite. The aim is to better represent a wider range of realworld large-scale optimization problems and provide convenience and flexibility for comparing various evolutionary algorithms specifically designed for large-scale global optimization. Introducing imbalance between the contribution of various subcomponents, subcomponents with nonuniform sizes, and conforming and conflicting overlapping functions are among the major new features proposed in this report.",
"title": ""
},
{
"docid": "07bbe54e3d0c9ef27ef5f9f1f1a2150c",
"text": "Evolutionary algorithms (EAs) have been applied with success to many numerical and combinatorial optimization problems in recent years. However, they often lose their effectiveness and advantages when applied to large and complex problems, e.g., those with high dimensions. Although cooperative coevolution has been proposed as a promising framework for tackling high-dimensional optimization problems, only limited studies were reported by decomposing a high-dimensional problem into single variables (dimensions). Such methods of decomposition often failed to solve nonseparable problems, for which tight interactions exist among different decision variables. In this paper, we propose a new cooperative coevolution framework that is capable of optimizing large scale nonseparable problems. A random grouping scheme and adaptive weighting are introduced in problem decomposition and coevolution. Instead of conventional evolutionary algorithms, a novel differential evolution algorithm is adopted. Theoretical analysis is presented in this paper to show why and how the new framework can be effective for optimizing large nonseparable problems. Extensive computational studies are also carried out to evaluate the performance of newly proposed algorithm on a large number of benchmark functions with up to 1000 dimensions. The results show clearly that our framework and algorithm are effective as well as efficient for large scale evolutionary optimisation problems. We are unaware of any other evolutionary algorithms that can optimize 1000-dimension nonseparable problems as effectively and efficiently as we have done.",
"title": ""
}
] | [
{
"docid": "f35dc45e28f2483d5ac66271590b365d",
"text": "We present a vector space–based model for selectional preferences that predicts plausibility scores for argument headwords. It does not require any lexical resources (such as WordNet). It can be trained either on one corpus with syntactic annotation, or on a combination of a small semantically annotated primary corpus and a large, syntactically analyzed generalization corpus. Our model is able to predict inverse selectional preferences, that is, plausibility scores for predicates given argument heads. We evaluate our model on one NLP task (pseudo-disambiguation) and one cognitive task (prediction of human plausibility judgments), gauging the influence of different parameters and comparing our model against other model classes. We obtain consistent benefits from using the disambiguation and semantic role information provided by a semantically tagged primary corpus. As for parameters, we identify settings that yield good performance across a range of experimental conditions. However, frequency remains a major influence of prediction quality, and we also identify more robust parameter settings suitable for applications with many infrequent items.",
"title": ""
},
{
"docid": "2793e8eb1410b2379a8a416f0560df0a",
"text": "Alzheimer’s disease (AD) transgenic mice have been used as a standard AD model for basic mechanistic studies and drug discovery. These mouse models showed symbolic AD pathologies including β-amyloid (Aβ) plaques, gliosis and memory deficits but failed to fully recapitulate AD pathogenic cascades including robust phospho tau (p-tau) accumulation, clear neurofibrillary tangles (NFTs) and neurodegeneration, solely driven by familial AD (FAD) mutation(s). Recent advances in human stem cell and three-dimensional (3D) culture technologies made it possible to generate novel 3D neural cell culture models that recapitulate AD pathologies including robust Aβ deposition and Aβ-driven NFT-like tau pathology. These new 3D human cell culture models of AD hold a promise for a novel platform that can be used for mechanism studies in human brain-like environment and high-throughput drug screening (HTS). In this review, we will summarize the current progress in recapitulating AD pathogenic cascades in human neural cell culture models using AD patient-derived induced pluripotent stem cells (iPSCs) or genetically modified human stem cell lines. We will also explain how new 3D culture technologies were applied to accelerate Aβ and p-tau pathologies in human neural cell cultures, as compared the standard two-dimensional (2D) culture conditions. Finally, we will discuss a potential impact of the human 3D human neural cell culture models on the AD drug-development process. These revolutionary 3D culture models of AD will contribute to accelerate the discovery of novel AD drugs.",
"title": ""
},
{
"docid": "bcb9886f4ba3651793581e021030cde2",
"text": "This study looked at the individual difference correlates of self-rated character strengths and virtues. In all, 280 adults completed a short 24-item measure of strengths, a short personality measure of the Big Five traits and a fluid intelligence test. The Cronbach alphas for the six higher order virtues were satisfactory but factor analysis did not confirm the a priori classification yielding five interpretable factors. These factors correlated significantly with personality and intelligence. Intelligence and neuroticism were correlated negatively with all the virtues, while extraversion and conscientiousness were positively correlated with all virtues. Structural equation modeling showed personality and religiousness moderated the effect of intelligence on the virtues. Extraversion and openness were the largest correlates of the virtues. The use of shortened measured in research is discussed.",
"title": ""
},
{
"docid": "d89d80791ac8157d054652e5f1292ebb",
"text": "The Great Gatsby Curve, the observation that for OECD countries, greater crosssectional income inequality is associated with lower mobility, has become a prominent part of scholarly and policy discussions because of its implications for the relationship between inequality of outcomes and inequality of opportunities. We explore this relationship by focusing on evidence and interpretation of an intertemporal Gatsby Curve for the United States. We consider inequality/mobility relationships that are derived from nonlinearities in the transmission process of income from parents to children and the relationship that is derived from the effects of inequality of socioeconomic segregation, which then affects children. Empirical evidence for the mechanisms we identify is strong. We find modest reduced form evidence and structural evidence of an intertemporal Gatsby Curve for the US as mediated by social influences. Steven N. Durlauf Ananth Seshadri Department of Economics Department of Economics University of Wisconsin University of Wisconsin 1180 Observatory Drive 1180 Observatory Drive Madison WI, 53706 Madison WI, 53706 [email protected] [email protected]",
"title": ""
},
{
"docid": "6e02cdb0ade3479e0df03c30d9d69fa3",
"text": "Reinforcement learning is considered as a promising direction for driving policy learning. However, training autonomous driving vehicle with reinforcement learning in real environment involves non-affordable trial-and-error. It is more desirable to first train in a virtual environment and then transfer to the real environment. In this paper, we propose a novel realistic translation network to make model trained in virtual environment be workable in real world. The proposed network can convert non-realistic virtual image input into a realistic one with similar scene structure. Given realistic frames as input, driving policy trained by reinforcement learning can nicely adapt to real world driving. Experiments show that our proposed virtual to real (VR) reinforcement learning (RL) works pretty well. To our knowledge, this is the first successful case of driving policy trained by reinforcement learning that can adapt to real world driving data.",
"title": ""
},
{
"docid": "cbe1dc1b56716f57fca0977383e35482",
"text": "This project explores a novel experimental setup towards building spoken, multi-modally rich, and human-like multiparty tutoring agent. A setup is developed and a corpus is collected that targets the development of a dialogue system platform to explore verbal and nonverbal tutoring strategies in multiparty spoken interactions with embodied agents. The dialogue task is centered on two participants involved in a dialogue aiming to solve a card-ordering game. With the participants sits a tutor that helps the participants perform the task and organizes and balances their interaction. Different multimodal signals captured and auto-synchronized by different audio-visual capture technologies were coupled with manual annotations to build a situated model of the interaction based on the participants personalities, their temporally-changing state of attention, their conversational engagement and verbal dominance, and the way these are correlated with the verbal and visual feedback, turn-management, and conversation regulatory actions generated by the tutor. At the end of this chapter we discuss the potential areas of research and developments this work opens and some of the challenges that lie in the road ahead.",
"title": ""
},
{
"docid": "5c112eb4be8321d79b63790e84de278f",
"text": "Service-dominant logic continues its evolution, facilitated by an active community of scholars throughout the world. Along its evolutionary path, there has been increased recognition of the need for a crisper and more precise delineation of the foundational premises and specification of the axioms of S-D logic. It also has become apparent that a limitation of the current foundational premises/axioms is the absence of a clearly articulated specification of the mechanisms of (often massive-scale) coordination and cooperation involved in the cocreation of value through markets and, more broadly, in society. This is especially important because markets are even more about cooperation than about the competition that is more frequently discussed. To alleviate this limitation and facilitate a better understanding of cooperation (and coordination), an eleventh foundational premise (fifth axiom) is introduced, focusing on the role of institutions and institutional arrangements in systems of value cocreation: service ecosystems. Literature on institutions across multiple social disciplines, including marketing, is briefly reviewed and offered as further support for this fifth axiom.",
"title": ""
},
{
"docid": "3ee79d711d6f8d1bbaef7e348a1c8dbc",
"text": "As a commentary to Juhani Iivari’s insightful essay, I briefly analyze design science research as an embodiment of three closely related cycles of activities. The Relevance Cycle inputs requirements from the contextual environment into the research and introduces the research artifacts into environmental field testing. The Rigor Cycle provides grounding theories and methods along with domain experience and expertise from the foundations knowledge base into the research and adds the new knowledge generated by the research to the growing knowledge base. The central Design Cycle supports a tighter loop of research activity for the construction and evaluation of design artifacts and processes. The recognition of these three cycles in a research project clearly positions and differentiates design science from other research paradigms. The commentary concludes with a claim to the pragmatic nature of design science.",
"title": ""
},
{
"docid": "3d238cc92a56e64f32f08e0833d117b3",
"text": "The efficiency of two biomass pretreatment technologies, dilute acid hydrolysis and dissolution in an ionic liquid, are compared in terms of delignification, saccharification efficiency and saccharide yields with switchgrass serving as a model bioenergy crop. When subject to ionic liquid pretreatment (dissolution and precipitation of cellulose by anti-solvent) switchgrass exhibited reduced cellulose crystallinity, increased surface area, and decreased lignin content compared to dilute acid pretreatment. Pretreated material was characterized by powder X-ray diffraction, scanning electron microscopy, Fourier transform infrared spectroscopy, Raman spectroscopy and chemistry methods. Ionic liquid pretreatment enabled a significant enhancement in the rate of enzyme hydrolysis of the cellulose component of switchgrass, with a rate increase of 16.7-fold, and a glucan yield of 96.0% obtained in 24h. These results indicate that ionic liquid pretreatment may offer unique advantages when compared to the dilute acid pretreatment process for switchgrass. However, the cost of the ionic liquid process must also be taken into consideration.",
"title": ""
},
{
"docid": "709a6b1a5c49bf0e41a24ed5a6b392c9",
"text": "Th e paper presents a literature review of the main concepts of hotel revenue management (RM) and current state-of-the-art of its theoretical research. Th e article emphasises on the diff erent directions of hotel RM research and is structured around the elements of the hotel RM system and the stages of RM process. Th e elements of the hotel RM system discussed in the paper include hotel RM centres (room division, F&B, function rooms, spa & fi tness facilities, golf courses, casino and gambling facilities, and other additional services), data and information, the pricing (price discrimination, dynamic pricing, lowest price guarantee) and non-pricing (overbookings, length of stay control, room availability guarantee) RM tools, the RM software, and the RM team. Th e stages of RM process have been identifi ed as goal setting, collection of data and information, data analysis, forecasting, decision making, implementation and monitoring. Additionally, special attention is paid to ethical considerations in RM practice, the connections between RM and customer relationship management, and the legal aspect of RM. Finally, the article outlines future research perspectives and discloses potential evolution of RM in future.",
"title": ""
},
{
"docid": "464b66e2e643096bd344bea8026f4780",
"text": "In this paper we describe an application of our approach to temporal text mining in Competitive Intelligence for the biotechnology and pharmaceutical industry. The main objective is to identify changes and trends of associations among entities of interest that appear in text over time. Text Mining (TM) exploits information contained in textual data in various ways, including the type of analyses that are typically performed in Data Mining [17]. Information Extraction (IE) facilitates the semi-automatic creation of metadata repositories from text. Temporal Text mining combines Information Extraction and Data Mining techniques upon textual repositories and incorporates time and ontologies‟ issues. It consists of three main phases; the Information Extraction phase, the ontology driven generalisation of templates and the discovery of associations over time. Treatment of the temporal dimension is essential to our approach since it influences both the annotation part (IE) of the system as well as the mining part.",
"title": ""
},
{
"docid": "7232ba57ae29c9ec395fe2b4501b6fd3",
"text": "We propose a novel approach for using unsupervised boosting to create an ensemble of generative models, where models are trained in sequence to correct earlier mistakes. Our metaalgorithmic framework can leverage any existing base learner that permits likelihood evaluation, including recent deep expressive models. Further, our approach allows the ensemble to include discriminative models trained to distinguish real data from model-generated data. We show theoretical conditions under which incorporating a new model in the ensemble will improve the fit and empirically demonstrate the effectiveness of our black-box boosting algorithms on density estimation, classification, and sample generation on benchmark datasets for a wide range of generative models.",
"title": ""
},
{
"docid": "02e1e622c64b67c1a170ce36a3873082",
"text": "As retrieval systems become more complex, learning to rank approaches are being developed to automatically tune their parameters. Using online learning to rank, retrieval systems can learn directly from implicit feedback inferred from user interactions. In such an online setting, algorithms must obtain feedback for effective learning while simultaneously utilizing what has already been learned to produce high quality results. We formulate this challenge as an exploration–exploitation dilemma and propose two methods for addressing it. By adding mechanisms for balancing exploration and exploitation during learning, each method extends a state-of-the-art learning to rank method, one based on listwise learning and the other on pairwise learning. Using a recently developed simulation framework that allows assessment of online performance, we empirically evaluate both methods. Our results show that balancing exploration and exploitation can substantially and significantly improve the online retrieval performance of both listwise and pairwise approaches. In addition, the results demonstrate that such a balance affects the two approaches in different ways, especially when user feedback is noisy, yielding new insights relevant to making online learning to rank effective in practice.",
"title": ""
},
{
"docid": "84750fa3f3176d268ae85830a87f7a24",
"text": "Context: The pull-based model, widely used in distributed software development, offers an extremely low barrier to entry for potential contributors (anyone can submit of contributions to any project, through pull-requests). Meanwhile, the project’s core team must act as guardians of code quality, ensuring that pull-requests are carefully inspected before being merged into the main development line. However, with pull-requests becoming increasingly popular, the need for qualified reviewers also increases. GitHub facilitates this, by enabling the crowd-sourcing of pull-request reviews to a larger community of coders than just the project’s core team, as a part of their social coding philosophy. However, having access to more potential reviewers does not necessarily mean that it’s easier to find the right ones (the “needle in a haystack” problem). If left unsupervised, this process may result in communication overhead and delayed pull-request processing. Objective: This study aims to investigate whether and how previous approaches used in bug triaging and code review can be adapted to recommending reviewers for pull-requests, and how to improve the recommendation performance. Method: First, we extend three typical approaches used in bug triaging and code review for the new challenge of assigning reviewers to pull-requests. Second, we analyze social relations between contributors and reviewers, and propose a novel approach by mining each project’s comment networks (CNs). Finally, we combine the CNs with traditional approaches, and evaluate the effectiveness of all these methods on 84 GitHub projects through both quantitative and qualitative analysis. Results: We find that CN-based recommendation can achieve, by itself, similar performance as the traditional approaches. However, the mixed approaches can achieve significant improvements compared to using either of them independently. Conclusion: Our study confirms that traditional approaches to bug triaging and code review are feasible for pull-request reviewer recommendations on GitHub. Furthermore, their performance can be improved significantly by combining them with information extracted from prior social interactions between developers on GitHub. These results prompt for novel tools to support process automation in social coding platforms, that combine social (e.g., common interests among developers) and technical factors (e.g., developers’ expertise). © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "4eec0ef04e80280c07bc1e9fd41e942a",
"text": "One of the challenges with research on student engagement is the large variation in the measurement of this construct, which has made it challenging to compare fi ndings across studies. This chapter contributes to our understanding of the measurement of student in engagement in three ways. First, we describe strengths and limitations of different methods for assessing student engagement (i.e., self-report measures, experience sampling techniques, teacher ratings, interviews, and observations). Second, we compare and contrast 11 self-report survey measures of student engagement that have been used in prior research. Across these 11 measures, we describe what is measured (scale name and items), use of measure, samples, and the extent of reliability and validity information available on each measure. Finally, we outline limitations with current approaches to measurement and promising future directions. Researchers, educators, and policymakers are increasingly focused on student engagement as the key to addressing problems of low achievement, high levels of student boredom, alienation, and high dropout rates (Fredricks, Blumenfeld, & Paris, 2004 ) . Students become more disengaged as they progress from elementary to middle school, with some estimates that 25–40% of youth are showing signs of disengagement (i.e., uninvolved, apathetic, not trying very hard, and not paying attention) (Steinberg, Brown, & Dornbush, 1996 ; Yazzie-Mintz, 2007 ) . The consequences of disengagement for middle and high school youth from disadvantaged backgrounds are especially severe; these youth are less likely to graduate from high school and face limited employment prospects, increasing their risk for poverty, poorer health, and involvement in the criminal justice system (National Research Council and the Institute of Medicine, 2004 ) . Although there is growing interest in student engagement, there has been considerable variation in how this construct has been conceptualized over time (Appleton, Christenson, & Furlong, 2008 ; Fredricks et al., 2004 ; Jimerson, Campos, & Grief, 2003 ) . Scholars have used a broad range J. A. Fredricks , Ph.D. (*) Human Development , Connecticut College , New London , CT , USA e-mail: [email protected] W. McColskey , Ph.D. SERVE Center , University of North Carolina , Greensboro , NC , USA e-mail: [email protected] The Measurement of Student Engagement: A Comparative Analysis of Various Methods and Student Self-report Instruments Jennifer A. Fredricks and Wendy McColskey 764 J.A. Fredricks and W. McColskey of terms including student engagement, school engagement, student engagement in school, academic engagement, engagement in class, and engagement in schoolwork. In addition, there has been variation in the number of subcomponents of engagement including different conceptualizations. Some scholars have proposed a two-dimensional model of engagement which includes behavior (e.g., participation, effort, and positive conduct) and emotion (e.g., interest, belonging, value, and positive emotions) (Finn, 1989 ; Marks, 2000 ; Skinner, Kindermann, & Furrer, 2009b ) . More recently, others have outlined a three-component model of engagement that includes behavior, emotion, and a cognitive dimension (i.e., self-regulation, investment in learning, and strategy use) (e.g., Archaumbault, 2009 ; Fredricks et al., 2004 ; Jimerson et al., 2003 ; Wigfi eld et al., 2008 ) . 
Finally, Christenson and her colleagues (Appleton, Christenson, Kim, & Reschly, 2006 ; Reschly & Christenson, 2006 ) conceptualized engagement as having four dimensions: academic, behavioral, cognitive, and psychological (subsequently referred to as affective) engagement. In this model, aspects of behavior are separated into two different components: academics, which includes time on task, credits earned, and homework completion, and behavior, which includes attendance, class participation, and extracurricular participation. One commonality across the myriad of conceptualizations is that engagement is multidimensional. However, further theoretical and empirical work is needed to determine the extent to which these different dimensions are unique constructs and whether a three or four component model more accurately describes the construct of student engagement. Even when scholars have similar conceptualizations of engagement, there has been considerable variability in the content of items used in instruments. This has made it challenging to compare fi ndings from different studies. This chapter expands on our understanding of the measurement of student engagement in three ways. First, the strengths and limitations of different methods for assessing student engagement are described. Second, 11 self-report survey measures of student engagement that have been used in prior research are compared and contrasted on several dimensions (i.e., what is measured, purposes and uses, samples, and psychometric properties). Finally, we discuss limitations with current approaches to measurement. What is Student Engagement We defi ne student engagement as a meta-construct that includes behavioral, emotional, and cognitive engagement (Fredricks et al., 2004 ) . Although there are large individual bodies of literature on behavioral (i.e., time on task), emotional (i.e., interest and value), and cognitive engagement (i.e., self-regulation and learning strategies), what makes engagement unique is its potential as a multidimensional or “meta”-construct that includes these three dimensions. Behavioral engagement draws on the idea of participation and includes involvement in academic, social, or extracurricular activities and is considered crucial for achieving positive academic outcomes and preventing dropping out (Connell & Wellborn, 1991 ; Finn, 1989 ) . Other scholars defi ne behavioral engagement in terms of positive conduct, such as following the rules, adhering to classroom norms, and the absence of disruptive behavior such as skipping school or getting into trouble (Finn, Pannozzo, & Voelkl, 1995 ; Finn & Rock, 1997 ) . Emotional engagement focuses on the extent of positive (and negative) reactions to teachers, classmates, academics, or school. Others conceptualize emotional engagement as identifi cation with the school, which includes belonging, or a feeling of being important to the school, and valuing, or an appreciation of success in school-related outcomes (Finn, 1989 ; Voelkl, 1997 ) . Positive emotional engagement is presumed to create student ties to the institution and infl uence their willingness to do the work (Connell & Wellborn, 1991 ; Finn, 1989 ) . Finally, cognitive engagement is defi ned as student’s level of investment in learning. It includes being thoughtful, strategic, and willing to exert the necessary effort for comprehension of complex ideas or mastery of diffi cult skills (Corno & Mandinach, 1983 ; Fredricks et al., 2004 ; Meece, Blumenfeld, & Hoyle, 1988 ) . 
765 37 The Measurement of Student Engagement... An important question is how engagement differs from motivation. Although the terms are used interchangeably by some, they are different and the distinctions between them are important. Motivation refers to the underlying reasons for a given behavior and can be conceptualized in terms of the direction, intensity, quality, and persistence of one’s energies (Maehr & Meyer, 1997 ) . A proliferation of motivational constructs (e.g., intrinsic motivation, goal theory, and expectancy-value models) have been developed to answer two broad questions “Can I do this task” and “Do I want to do this task and why?” ( Eccles, Wigfi eld, & Schiefele, 1998 ) . One commonality across these different motivational constructs is an emphasis on individual differences and underlying psychological processes. In contrast, engagement tends to be thought of in terms of action, or the behavioral, emotional, and cognitive manifestations of motivation (Skinner, Kindermann, Connell, & Wellborn, 2009a ) . An additional difference is that engagement refl ects an individual’s interaction with context (Fredricks et al., 2004 ; Russell, Ainsley, & Frydenberg, 2005 ) . In other words, an individual is engaged in something (i.e., task, activity, and relationship), and their engagement cannot be separated from their environment. This means that engagement is malleable and is responsive to variations in the context that schools can target in interventions (Fredricks et al., 2004 ; Newmann, Wehlage, & Lamborn, 1992 ). The self-system model of motivational development (Connell, 1990 ; Connell & Wellborn, 1991 ; Deci & Ryan, 1985 ) provides one theoretical model for studying motivation and engagement. This model is based on the assumption that individuals have three fundamental motivational needs: autonomy, competence, and relatedness. If schools provide children with opportunities to meet these three needs, students will be more engaged. Students’ need for relatedness is more likely to occur in classrooms where teachers and peers create a caring and supportive environment; their need for autonomy is met when they feel like they have a choice and when they are motivated by internal rather than external factors; and their need for competence is met when they experience the classroom as optimal in structure and feel like they can achieve desired ends (Fredricks et al., 2004 ) . In contrast, if students experience schools as uncaring, coercive, and unfair, they will become disengaged or disaffected (Skinner et al., 2009a, 2009b ) . This model assumes that motivation is a necessary but not suffi cient precursor to engagement (Appleton et al., 2008 ; Connell & Wellborn, 1991 ) . Methods for Studying Engagement",
"title": ""
},
{
"docid": "678d9eab7d1e711f97bf8ef5aeaebcc4",
"text": "This work presents a study of current and future bus systems with respect to their security against various malicious attacks. After a brief description of the most well-known and established vehicular communication systems, we present feasible attacks and potential exposures for these automotive networks. We also provide an approach for secured automotive communication based on modern cryptographic mechanisms that provide secrecy, manipulation prevention and authentication to solve most of the vehicular bus security issues.",
"title": ""
},
{
"docid": "225b834e820b616e0ccfed7259499fd6",
"text": "Introduction: Actinic cheilitis (AC) is a lesion potentially malignant that affects the lips after prolonged exposure to solar ultraviolet (UV) radiation. The present study aimed to assess and describe the proliferative cell activity, using silver-stained nucleolar organizer region (AgNOR) quantification proteins, and to investigate the potential associations between AgNORs and the clinical aspects of AC lesions. Materials and methods: Cases diagnosed with AC were selected and reviewed from Center of Histopathological Diagnosis of the Institute of Biological Sciences, Passo Fundo University, Brazil. Clinical data including clinical presentation of the patients affected with AC were collected. The AgNOR techniques were performed in all recovered cases. The different microscopic areas of interest were printed with magnification of *1000, and in each case, 200 epithelial cell nuclei were randomly selected. The mean quantity in each nucleus for NORs was recorded. One-way analysis of variance was used for statistical analysis. Results: A total of 22 cases of AC were diagnosed. The patients were aged between 46 and 75 years (mean age: 55 years). Most of the patients affected were males presenting asymptomatic white plaque lesions in the lower lip. The mean value quantified for AgNORs was 2.4 ± 0.63, ranging between 1.49 and 3.82. No statistically significant difference was observed associating the quantity of AgNORs with the clinical aspects collected from the patients (p > 0.05). Conclusion: The present study reports the lack of association between the proliferative cell activity and the clinical aspects observed in patients affected by AC through the quantification of AgNORs. Clinical significance: Knowing the potential relation between the clinical aspects of AC and the proliferative cell activity quantified by AgNORs could play a significant role toward the early diagnosis of malignant lesions in the clinical practice. Keywords: Actinic cheilitis, Proliferative cell activity, Silver-stained nucleolar organizer regions.",
"title": ""
},
{
"docid": "8a32bdadcaa2c94f83e95c19e400835b",
"text": "Create a short summary of your paper (200 words), double-spaced. Your summary will say something like: In this action research study of my classroom of 7 grade mathematics, I investigated ______. I discovered that ____________. As a result of this research, I plan to ___________. You now begin your paper. Pages should be numbered, with the first page of text following the abstract as page one. (In Microsoft Word: after your abstract, rather than inserting a “page break” insert a “section break” to start on the next page; this will allow you to start the 3 page being numbered as page 1). You should divide this report of your research into sections. We should be able to identity the following sections and you may use these headings (headings should be bold, centered, and capitalized). Consider the page length to be a minimum.",
"title": ""
},
{
"docid": "c0484f3055d7e7db8dfea9d4483e1e06",
"text": "Metastasis the spread of cancer cells to distant organs, is the main cause of death for cancer patients. Metastasis is often mediated by lymphatic vessels that invade the primary tumor, and an early sign of metastasis is the presence of cancer cells in the regional lymph node (the first lymph node colonized by metastasizing cancer cells from a primary tumor). Understanding the interplay between tumorigenesis and lymphangiogenesis (the formation of lymphatic vessels associated with tumor growth) will provide us with new insights into mechanisms that modulate metastatic spread. In the long term, these insights will help to define new molecular targets that could be used to block lymphatic vessel-mediated metastasis and increase patient survival. Here, we review the molecular mechanisms of embryonic lymphangiogenesis and those that are recapitulated in tumor lymphangiogenesis, with a view to identifying potential targets for therapies designed to suppress tumor lymphangiogenesis and hence metastasis.",
"title": ""
},
{
"docid": "88ac730e4e54ecc527bcd188b7cc5bf5",
"text": "In this paper we outline the nature of Neuro-linguistic Programming and explore its potential for learning and teaching. The paper draws on current research by Mathison (2003) to illustrate the role of language and internal imagery in teacherlearner interactions, and the way language influences beliefs about learning. Neuro-linguistic Programming (NLP) developed in the USA in the 1970's. It has achieved widespread popularity as a method for communication and personal development. The title, coined by the founders, Bandler and Grinder (1975a), refers to purported systematic, cybernetic links between a person's internal experience (neuro), their language (linguistic) and their patterns of behaviour (programming). In essence NLP is a form of modelling that offers potential for systematic and detailed understanding of people's subjective experience. NLP is eclectic, drawing on models and strategies from a wide range of sources. We outline NLP's approach to teaching and learning, and explore applications through illustrative data from Mathison's study. A particular implication for the training of educators is that of attention to communication skills. Finally we summarise criticisms of NLP that may represent obstacles to its acceptance by academe.",
"title": ""
}
] | scidocsrr |
37daee87cefd6eabae129bc0df7338dd | Blockchain distributed ledger technologies for biomedical and health care applications | [
{
"docid": "9e65315d4e241dc8d4ea777247f7c733",
"text": "A long-standing focus on compliance has traditionally constrained development of fundamental design changes for Electronic Health Records (EHRs). We now face a critical need for such innovation, as personalization and data science prompt patients to engage in the details of their healthcare and restore agency over their medical data. In this paper, we propose MedRec: a novel, decentralized record management system to handle EHRs, using blockchain technology. Our system gives patients a comprehensive, immutable log and easy access to their medical information across providers and treatment sites. Leveraging unique blockchain properties, MedRec manages authentication, confidentiality, accountability and data sharing—crucial considerations when handling sensitive information. A modular design integrates with providers' existing, local data storage solutions, facilitating interoperability and making our system convenient and adaptable. We incentivize medical stakeholders (researchers, public health authorities, etc.) to participate in the network as blockchain “miners”. This provides them with access to aggregate, anonymized data as mining rewards, in return for sustaining and securing the network via Proof of Work. MedRec thus enables the emergence of data economics, supplying big data to empower researchers while engaging patients and providers in the choice to release metadata. The purpose of this paper is to expose, in preparation for field tests, a working prototype through which we analyze and discuss our approach and the potential for blockchain in health IT and research.",
"title": ""
},
{
"docid": "8780b620d228498447c4f1a939fa5486",
"text": "A new mechanism is proposed for securing a blockchain applied to contracts management such as digital rights management. This mechanism includes a new consensus method using a credibility score and creates a hybrid blockchain by alternately using this new method and proof-of-stake. This makes it possible to prevent an attacker from monopolizing resources and to keep securing blockchains.",
"title": ""
}
] | [
{
"docid": "91c0bd1c3faabc260277c407b7c6af59",
"text": "In this paper, we consider the Direct Perception approach for autonomous driving. Previous efforts in this field focused more on feature extraction of the road markings and other vehicles in the scene rather than on the autonomous driving algorithm and its performance under realistic assumptions. Our main contribution in this paper is introducing a new, more robust, and more realistic Direct Perception framework and corresponding algorithm for autonomous driving. First, we compare the top 3 Convolutional Neural Networks (CNN) models in the feature extraction competitions and test their performance for autonomous driving. The experimental results showed that GoogLeNet performs the best in this application. Subsequently, we propose a deep learning based algorithm for autonomous driving, and we refer to our algorithm as GoogLenet for Autonomous Driving (GLAD). Unlike previous efforts, GLAD makes no unrealistic assumptions about the autonomous vehicle or its surroundings, and it uses only five affordance parameters to control the vehicle as compared to the 14 parameters used by prior efforts. Our simulation results show that the proposed GLAD algorithm outperforms previous Direct Perception algorithms both on empty roads and while driving with other surrounding vehicles.",
"title": ""
},
{
"docid": "45a098c09a3803271f218fafd4d951cd",
"text": "Recent years have seen a tremendous increase in the demand for wireless bandwidth. To support this demand by innovative and resourceful use of technology, future communication systems will have to shift towards higher carrier frequencies. Due to the tight regulatory situation, frequencies in the atmospheric attenuation window around 300 GHz appear very attractive to facilitate an indoor, short range, ultra high speed THz communication system. In this paper, we investigate the influence of diffuse scattering at such high frequencies on the characteristics of the communication channel and its implications on the non-line-of-sight propagation path. The Kirchhoff approach is verified by an experimental study of diffuse scattering from randomly rough surfaces commonly encountered in indoor environments using a fiber-coupled terahertz time-domain spectroscopy system to perform angle- and frequency-dependent measurements. Furthermore, we integrate the Kirchhoff approach into a self-developed ray tracing algorithm to model the signal coverage of a typical office scenario.",
"title": ""
},
{
"docid": "a9595ea31ebfe07ac9d3f7fccf0d1c05",
"text": "The growing movement of biologically inspired design is driven in part by the need for sustainable development and in part by the recognition that nature could be a source of innovation. Biologically inspired design by definition entails cross-domain analogies from biological systems to problems in engineering and other design domains. However, the practice of biologically inspired design at present typically is ad hoc, with little systemization of either biological knowledge for the purposes of engineering design or the processes of transferring knowledge of biological designs to engineering problems. In this paper we present an intricate episode of biologically inspired engineering design that unfolded over an extended period of time. We then analyze our observations in terms of why, what, how, and when questions of analogy. This analysis contributes toward a content theory of creative analogies in the context of biologically inspired design.",
"title": ""
},
{
"docid": "96363ec5134359b5bf7c8b67f67971db",
"text": "Self adaptive video games are important for rehabilitation at home. Recent works have explored different techniques with satisfactory results but these have a poor use of game design concepts like Challenge and Conservative Handling of Failure. Dynamic Difficult Adjustment with Help (DDA-Help) approach is presented as a new point of view for self adaptive video games for rehabilitation. Procedural Content Generation (PCG) and automatic helpers are used to a different work on Conservative Handling of Failure and Challenge. An experience with amblyopic children showed the proposal effectiveness, increasing the visual acuity 2-3 level following the Snellen Vision Test and improving the performance curve during the game time.",
"title": ""
},
{
"docid": "6b19d08c9aa6ecfec27452a298353e1f",
"text": "This paper presents the recent development in automatic vision based technology. Use of this technology is increasing in agriculture and fruit industry. An automatic fruit quality inspection system for sorting and grading of tomato fruit and defected tomato detection discussed here. The main aim of this system is to replace the manual inspection system. This helps in speed up the process improve accuracy and efficiency and reduce time. This system collect image from camera which is placed on conveyor belt. Then image processing is done to get required features of fruits such as texture, color and size. Defected fruit is detected based on blob detection, color detection is done based on thresholding and size detection is based on binary image of tomato. Sorting is done based on color and grading is done based on size.",
"title": ""
},
{
"docid": "1d11060907f0a2c856fdda9152b107e5",
"text": "NOTICE This report was prepared by Columbia University in the course of performing work contracted for and sponsored by the New York State Energy Research and Development Authority (hereafter \" NYSERDA \"). The opinions expressed in this report do not necessarily reflect those of NYSERDA or the State of New York, and reference to any specific product, service, process, or method does not constitute an implied or expressed recommendation or endorsement of it. Further, NYSERDA, the State of New York, and the contractor make no warranties or representations, expressed or implied, as to the fitness for particular purpose or merchantability of any product, apparatus, or service, or the usefulness, completeness, or accuracy of any processes, methods, or other information contained, described, disclosed, or referred to in this report. NYSERDA, the State of New York, and the contractor make no representation that the use of any product, apparatus, process, method, or other information will not infringe privately owned rights and will assume no liability for any loss, injury, or damage resulting from, or occurring in connection with, the use of information contained, described, disclosed, or referred to in this report. iii ABSTRACT A research project was conducted to develop a concrete material that contains recycled waste glass and reprocessed carpet fibers and would be suitable for precast concrete wall panels. Post-consumer glass and used carpets constitute major solid waste components. Therefore their beneficial use will reduce the pressure on scarce landfills and the associated costs to taxpayers. By identifying and utilizing the special properties of these recycled materials, it is also possible to produce concrete elements with improved esthetic and thermal insulation properties. Using recycled waste glass as substitute for natural aggregate in commodity products such as precast basement wall panels brings only modest economic benefits at best, because sand, gravel, and crushed stone are fairly inexpensive. However, if the esthetic properties of the glass are properly exploited, such as in building façade elements with architectural finishes, the resulting concrete panels can compete very effectively with other building materials such as natural stone. As for recycled carpet fibers, the intent of this project was to exploit their thermal properties in order to increase the thermal insulation of concrete wall panels. In this regard, only partial success was achieved, because commercially reprocessed carpet fibers improve the thermal properties of concrete only marginally, as compared with other methods, such as the use of …",
"title": ""
},
{
"docid": "ba29af46fd410829c450eed631aa9280",
"text": "We address the problem of dense visual-semantic embedding that maps not only full sentences and whole images but also phrases within sentences and salient regions within images into a multimodal embedding space. Such dense embeddings, when applied to the task of image captioning, enable us to produce several region-oriented and detailed phrases rather than just an overview sentence to describe an image. Specifically, we present a hierarchical structured recurrent neural network (RNN), namely Hierarchical Multimodal LSTM (HM-LSTM). Compared with chain structured RNN, our proposed model exploits the hierarchical relations between sentences and phrases, and between whole images and image regions, to jointly establish their representations. Without the need of any supervised labels, our proposed model automatically learns the fine-grained correspondences between phrases and image regions towards the dense embedding. Extensive experiments on several datasets validate the efficacy of our method, which compares favorably with the state-of-the-art methods.",
"title": ""
},
{
"docid": "2c39f8c440a89f72db8814e633cb5c04",
"text": "There is increasing evidence that gardening provides substantial human health benefits. However, no formal statistical assessment has been conducted to test this assertion. Here, we present the results of a meta-analysis of research examining the effects of gardening, including horticultural therapy, on health. We performed a literature search to collect studies that compared health outcomes in control (before participating in gardening or non-gardeners) and treatment groups (after participating in gardening or gardeners) in January 2016. The mean difference in health outcomes between the two groups was calculated for each study, and then the weighted effect size determined both across all and sets of subgroup studies. Twenty-two case studies (published after 2001) were included in the meta-analysis, which comprised 76 comparisons between control and treatment groups. Most studies came from the United States, followed by Europe, Asia, and the Middle East. Studies reported a wide range of health outcomes, such as reductions in depression, anxiety, and body mass index, as well as increases in life satisfaction, quality of life, and sense of community. Meta-analytic estimates showed a significant positive effect of gardening on the health outcomes both for all and sets of subgroup studies, whilst effect sizes differed among eight subgroups. Although Egger's test indicated the presence of publication bias, significant positive effects of gardening remained after adjusting for this using trim and fill analysis. This study has provided robust evidence for the positive effects of gardening on health. A regular dose of gardening can improve public health.",
"title": ""
},
{
"docid": "b2f1ec4d8ac0a8447831df4287271c35",
"text": "We present a new, robust and computationally efficient Hierarchical Bayesian model for effective topic correlation modeling. We model the prior distribution of topics by a Generalized Dirichlet distribution (GD) rather than a Dirichlet distribution as in Latent Dirichlet Allocation (LDA). We define this model as GD-LDA. This framework captures correlations between topics, as in the Correlated Topic Model (CTM) and Pachinko Allocation Model (PAM), and is faster to infer than CTM and PAM. GD-LDA is effective to avoid over-fitting as the number of topics is increased. As a tree model, it accommodates the most important set of topics in the upper part of the tree based on their probability mass. Thus, GD-LDA provides the ability to choose significant topics effectively. To discover topic relationships, we perform hyper-parameter estimation based on Monte Carlo EM Estimation. We provide results using Empirical Likelihood(EL) in 4 public datasets from TREC and NIPS. Then, we present the performance of GD-LDA in ad hoc information retrieval (IR) based on MAP, P@10, and Discounted Gain. We discuss an empirical comparison of the fitting time. We demonstrate significant improvement over CTM, LDA, and PAM for EL estimation. For all the IR measures, GD-LDA shows higher performance than LDA, the dominant topic model in IR. All these improvements with a small increase in fitting time than LDA, as opposed to CTM and PAM.",
"title": ""
},
{
"docid": "5c05ad44ac2bf3fb26cea62d563435f8",
"text": "We investigate the training and performance of generative adversarial networks using the Maximum Mean Discrepancy (MMD) as critic, termed MMD GANs. As our main theoretical contribution, we clarify the situation with bias in GAN loss functions raised by recent work: we show that gradient estimators used in the optimization process for both MMD GANs and Wasserstein GANs are unbiased, but learning a discriminator based on samples leads to biased gradients for the generator parameters. We also discuss the issue of kernel choice for the MMD critic, and characterize the kernel corresponding to the energy distance used for the Cramér GAN critic. Being an integral probability metric, the MMD benefits from training strategies recently developed for Wasserstein GANs. In experiments, the MMD GAN is able to employ a smaller critic network than the Wasserstein GAN, resulting in a simpler and faster-training algorithm with matching performance. We also propose an improved measure of GAN convergence, the Kernel Inception Distance, and show how to use it to dynamically adapt learning rates during GAN training.",
"title": ""
},
{
"docid": "c4387f3c791acc54d0a0655221947c8b",
"text": "An emerging Internet application, IPTV, has the potential to flood Internet access and backbone ISPs with massive amounts of new traffic. Although many architectures are possible for IPTV video distribution, several mesh-pull P2P architectures have been successfully deployed on the Internet. In order to gain insights into mesh-pull P2P IPTV systems and the traffic loads they place on ISPs, we have undertaken an in-depth measurement study of one of the most popular IPTV systems, namely, PPLive. We have developed a dedicated PPLive crawler, which enables us to study the global characteristics of the mesh-pull PPLive system. We have also collected extensive packet traces for various different measurement scenarios, including both campus access networks and residential access networks. The measurement results obtained through these platforms bring important insights into P2P IPTV systems. Specifically, our results show the following. 1) P2P IPTV users have the similar viewing behaviors as regular TV users. 2) During its session, a peer exchanges video data dynamically with a large number of peers. 3) A small set of super peers act as video proxy and contribute significantly to video data uploading. 4) Users in the measured P2P IPTV system still suffer from long start-up delays and playback lags, ranging from several seconds to a couple of minutes. Insights obtained in this study will be valuable for the development and deployment of future P2P IPTV systems.",
"title": ""
},
{
"docid": "31c0dc8f0a839da9260bb9876f635702",
"text": "The application of a recently developed broadband beamformer to distinguish audio signals received from different directions is experimentally tested. The beamformer combines spatial and temporal subsampling using a nested array and multirate techniques which leads to the same region of support in the frequency domain for all subbands. This allows using the same beamformer for all subbands. The experimental set-up is presented and the recorded signals are analyzed. Results indicate that the proposed approach can be used to distinguish plane waves propagating with different direction of arrivals.",
"title": ""
},
{
"docid": "7f6b4a74f88d5ae1a4d21948aac2e260",
"text": "The PEP-R (psychoeducational profile revised) is an instrument that has been used in many countries to assess abilities and formulate treatment programs for children with autism and related developmental disorders. To the end to provide further information on the PEP-R's psychometric properties, a large sample (N = 137) of children presenting Autistic Disorder symptoms under the age of 12 years, including low-functioning individuals, was examined. Results yielded data of interest especially in terms of: Cronbach's alpha, interrater reliability, and validation with the Vineland Adaptive Behavior Scales. These findings help complete the instrument's statistical description and augment its usefulness, not only in designing treatment programs for these individuals, but also as an instrument for verifying the efficacy of intervention.",
"title": ""
},
{
"docid": "a81e4507632505b64f4839a1a23fa440",
"text": "Unity am e Deelopm nt w ith C# Alan Thorn In Pro Unity Game Development with C#, Alan Thorn, author of Learn Unity for 2D` Game Development and experienced game developer, takes you through the complete C# workflow for developing a cross-platform first person shooter in Unity. C# is the most popular programming language for experienced Unity developers, helping them get the most out of what Unity offers. If you’re already using C# with Unity and you want to take the next step in becoming an experienced, professional-level game developer, this is the book you need. Whether you are a student, an indie developer, or a seasoned game dev professional, you’ll find helpful C# examples of how to build intelligent enemies, create event systems and GUIs, develop save-game states, and lots more. You’ll understand and apply powerful programming concepts such as singleton classes, component based design, resolution independence, delegates, and event driven programming.",
"title": ""
},
{
"docid": "45f1964932b06f23b7b0556bfb4d2d24",
"text": "We present a real-time deep learning framework for video-based facial performance capture---the dense 3D tracking of an actor's face given a monocular video. Our pipeline begins with accurately capturing a subject using a high-end production facial capture pipeline based on multi-view stereo tracking and artist-enhanced animations. With 5--10 minutes of captured footage, we train a convolutional neural network to produce high-quality output, including self-occluded regions, from a monocular video sequence of that subject. Since this 3D facial performance capture is fully automated, our system can drastically reduce the amount of labor involved in the development of modern narrative-driven video games or films involving realistic digital doubles of actors and potentially hours of animated dialogue per character. We compare our results with several state-of-the-art monocular real-time facial capture techniques and demonstrate compelling animation inference in challenging areas such as eyes and lips.",
"title": ""
},
{
"docid": "66cde02bdf134923ca7ef3ec5c4f0fb8",
"text": "In this paper a method for holographic localization of passive UHF-RFID transponders is presented. It is shown how persons or devices that are equipped with a RFID reader and that are moving along a trajectory can be enabled to locate tagged objects reliably. The localization method is based on phase values sampled from a synthetic aperture by a RFID reader. The calculated holographic image is a spatial probability density function that reveals the actual RFID tag position. Experimental results are presented which show that the holographically measured positions are in good agreement with the real position of the tag. Additional simulations have been carried out to investigate the positioning accuracy of the proposed method depending on different distortion parameters and measuring conditions. The effect of antenna phase center displacement is briefly discussed and measurements are shown that quantify the influence on the phase measurement.",
"title": ""
},
{
"docid": "7eea90d85df0245eac0de51702efdbfd",
"text": "Mobile wellness application is widely used for assisting self-monitoring practice to monitor user's daily food intake and physical activities. Although these mostly free downloadable mobile application is easy to use and covers many aspects of wellness routines, there is no proof of prolonged use. Previous research reported that user will stop using the application and turned back into their old attitude of food consumptions. The purpose of this study is to examine the factors that influence the continuance intention to adopt a mobile phone wellness application. Review of Information System Continuance Model in the areas such as mobile health, mobile phone wellness application, social network and web 2.0, were done to examine the existing factors. From the critical review, two external factors namely Social Norm and Perceive Interactivity is believed to have the ability to explain the social perspective behavior and also the effect of perceiving interactivity towards prolong usage of wellness mobile application. These findings contribute to the development of the Mobile Phones Wellness Application Continuance Use theoretical model.",
"title": ""
},
{
"docid": "3cdca28361b7c2b9525b476e9073fc10",
"text": "The proliferation of MP3 players and the exploding amount of digital music content call for novel ways of music organization and retrieval to meet the ever-increasing demand for easy and effective information access. As almost every music piece is created to convey emotion, music organization and retrieval by emotion is a reasonable way of accessing music information. A good deal of effort has been made in the music information retrieval community to train a machine to automatically recognize the emotion of a music signal. A central issue of machine recognition of music emotion is the conceptualization of emotion and the associated emotion taxonomy. Different viewpoints on this issue have led to the proposal of different ways of emotion annotation, model training, and result visualization. This article provides a comprehensive review of the methods that have been proposed for music emotion recognition. Moreover, as music emotion recognition is still in its infancy, there are many open issues. We review the solutions that have been proposed to address these issues and conclude with suggestions for further research.",
"title": ""
},
{
"docid": "89e88b92adc44176f0112a66ec92515a",
"text": "Computer programming is being introduced in schools worldwide as part of a movement that promotes Computational Thinking (CT) skills among young learners. In general, learners use visual, block-based programming languages to acquire these skills, with Scratch being one of the most popular ones. Similar to professional developers, learners also copy and paste their code, resulting in duplication. In this paper we present the findings of correlating the assessment of the CT skills of learners with the presence of software clones in over 230,000 projects obtained from the Scratch platform. Specifically, we investigate i) if software cloning is an extended practice in Scratch projects, ii) if the presence of code cloning is independent of the programming mastery of learners, iii) if code cloning can be found more frequently in Scratch projects that require specific skills (as parallelism or logical thinking), and iv) if learners who have the skills to avoid software cloning really do so. The results show that i) software cloning can be commonly found in Scratch projects, that ii) it becomes more frequent as learners work on projects that require advanced skills, that iii) no CT dimension is to be found more related to the absence of software clones than others, and iv) that learners -even if they potentially know how to avoid cloning- still copy and paste frequently. The insights from this paper could be used by educators and learners to determine when it is pedagogically more effective to address software cloning, by educational programming platform developers to adapt their systems, and by learning assessment tools to provide better evaluations.",
"title": ""
},
{
"docid": "e8215231e8eb26241d5ac8ac5be4b782",
"text": "This research is on the use of a decision tree approach for predicting students‟ academic performance. Education is the platform on which a society improves the quality of its citizens. To improve on the quality of education, there is a need to be able to predict academic performance of the students. The IBM Statistical Package for Social Studies (SPSS) is used to apply the Chi-Square Automatic Interaction Detection (CHAID) in producing the decision tree structure. Factors such as the financial status of the students, motivation to learn, gender were discovered to affect the performance of the students. 66.8% of the students were predicted to have passed while 33.2% were predicted to fail. It is observed that much larger percentage of the students were likely to pass and there is also a higher likely of male students passing than female students.",
"title": ""
}
] | scidocsrr |
5d0c7a76bcf5ff7fb4c681a1bd5496d1 | GPS Spoofing Detection Based on Decision Fusion with a K-out-of-N Rule | [
{
"docid": "a9bc9d9098fe852d13c3355ab6f81edb",
"text": "The area under the ROC curve, or the equivalent Gini index, is a widely used measure of performance of supervised classification rules. It has the attractive property that it side-steps the need to specify the costs of the different kinds of misclassification. However, the simple form is only applicable to the case of two classes. We extend the definition to the case of more than two classes by averaging pairwise comparisons. This measure reduces to the standard form in the two class case. We compare its properties with the standard measure of proportion correct and an alternative definition of proportion correct based on pairwise comparison of classes for a simple artificial case and illustrate its application on eight data sets. On the data sets we examined, the measures produced similar, but not identical results, reflecting the different aspects of performance that they were measuring. Like the area under the ROC curve, the measure we propose is useful in those many situations where it is impossible to give costs for the different kinds of misclassification.",
"title": ""
},
{
"docid": "531d387a14eefa6a8c45ad64039f29be",
"text": "This paper presents an S-Transform based probabilistic neural network (PNN) classifier for recognition of power quality (PQ) disturbances. The proposed method requires less number of features as compared to wavelet based approach for the identification of PQ events. The features extracted through the S-Transform are trained by a PNN for automatic classification of the PQ events. Since the proposed methodology can reduce the features of the disturbance signal to a great extent without losing its original property, less memory space and learning PNN time are required for classification. Eleven types of disturbances are considered for the classification problem. The simulation results reveal that the combination of S-Transform and PNN can effectively detect and classify different PQ events. The classification performance of PNN is compared with a feedforward multilayer (FFML) neural network (NN) and learning vector quantization (LVQ) NN. It is found that the classification performance of PNN is better than both FFML and LVQ.",
"title": ""
}
] | [
{
"docid": "f905016b422d9c16ac11b85182f196c7",
"text": "The random forest (RF) classifier is an ensemble classifier derived from decision tree idea. However the parallel operations of several classifiers along with use of randomness in sample and feature selection has made the random forest a very strong classifier with accuracy rates comparable to most of currently used classifiers. Although, the use of random forest on handwritten digits has been considered before, in this paper RF is applied in recognizing Persian handwritten characters. Trying to improve the recognition rate, we suggest converting the structure of decision trees from a binary tree to a multi branch tree. The improvement gained this way proves the applicability of the idea.",
"title": ""
},
{
"docid": "fb5e9a15429c9361dbe577ca8db18e46",
"text": "Most experiments are done in laboratories. However, there is also a theory and practice of field experimentation. It has had its successes and failures over the past four decades but is now increasingly used for answering causal questions. This is true for both randomized and-perhaps more surprisingly-nonrandomized experiments. In this article, we review the history of the use of field experiments, discuss some of the reasons for their current renaissance, and focus the bulk of the article on the particular technical developments that have made this renaissance possible across four kinds of widely used experimental and quasi-experimental designs-randomized experiments, regression discontinuity designs in which those units above a cutoff get one treatment and those below get another, short interrupted time series, and nonrandomized experiments using a nonequivalent comparison group. We focus this review on some of the key technical developments addressing problems that previously stymied accurate effect estimation, the solution of which opens the way for accurate estimation of effects under the often difficult conditions of field implementation-the estimation of treatment effects under partial treatment implementation, the prevention and analysis of attrition, analysis of nested designs, new analytic developments for both regression discontinuity designs and short interrupted time series, and propensity score analysis. We also cover the key empirical evidence showing the conditions under which some nonrandomized experiments may be able to approximate results from randomized experiments.",
"title": ""
},
{
"docid": "9efa0ff0743edacc4e9421ed45441fde",
"text": "Perception of universal facial beauty has long been debated amongst psychologists and anthropologists. In this paper, we perform experiments to evaluate the extent of universal beauty by surveying a number of diverse human referees to grade a collection of female facial images. Results obtained show that there exists a strong central tendency in the human grades, thus exhibiting agreement on beauty assessment. We then trained an automated classifier using the average human grades as the ground truth and used it to classify an independent test set of facial images. The high accuracy achieved proves that this classifier can be used as a general, automated tool for objective classification of female facial beauty. Potential applications exist in the entertainment industry, cosmetic industry, virtual media, and plastic surgery.",
"title": ""
},
{
"docid": "361bc333d47d2e1d4b6a6e8654d2659d",
"text": "Both the industrial organization theory (IO) and the resource-based view of the firm (RBV) have advanced our understanding of the antecedents of competitive advantage but few have attempted to verify the outcome variables of competitive advantage and the persistence of such outcome variables. Here by integrating both IO and RBV perspectives in the analysis of competitive advantage at the firm level, our study clarifies a conceptual distinction between two types of competitive advantage: temporary competitive advantage and sustainable competitive advantage, and explores how firms transform temporary competitive advantage into sustainable competitive advantage. Testing of the developed hypotheses, based on a survey of 165 firms from Taiwan’s information and communication technology industry, suggests that firms with a stronger market position can only attain a better outcome of temporary competitive advantage whereas firms possessing a superior position in technological resources or capabilities can attain a better outcome of sustainable competitive advantage. More importantly, firms can leverage a temporary competitive advantage as an outcome of market position, to improving their technological resource and capability position, which in turn can enhance their sustainable competitive advantage.",
"title": ""
},
{
"docid": "0b0e935d88fb5eb6b964e7e0853a7f2f",
"text": "Skill prerequisite information is useful for tutoring systems that assess student knowledge or that provide remediation. These systems often encode prerequisites as graphs designed by subject matter experts in a costly and time-consuming process. In this paper, we introduce Combined student Modeling and prerequisite Discovery (COMMAND), a novel algorithm for jointly inferring a prerequisite graph and a student model from data. Learning a COMMAND model requires student performance data and a mapping of items to skills (Q-matrix). COMMAND learns the skill prerequisite relations as a Bayesian network (an encoding of the probabilistic dependence among the skills) via a two-stage learning process. In the first stage, it uses an algorithm called Structural Expectation Maximization to select a class of equivalent Bayesian networks; in the second stage, it uses curriculum information to select a single Bayesian network. Our experiments on simulations and real student data suggest that COMMAND is better than prior methods in the literature.",
"title": ""
},
{
"docid": "6ad344c7049abad62cd53dacc694c651",
"text": "Primary syphilis with oropharyngeal manifestations should be kept in mind, though. Lips and tongue ulcers are the most frequently reported lesions and tonsillar ulcers are much more rare. We report the case of a 24-year-old woman with a syphilitic ulcer localized in her left tonsil.",
"title": ""
},
{
"docid": "6325188ee21b6baf65dbce6855c19bc2",
"text": "A knowledgeable observer of a game of football (soccer) can make a subjective evaluation of the quality of passes made between players during the game, such as rating them as Good, OK, or Bad. In this article, we consider the problem of producing an automated system to make the same evaluation of passes and present a model to solve this problem.\n Recently, many professional football leagues have installed object tracking systems in their stadiums that generate high-resolution and high-frequency spatiotemporal trajectories of the players and the ball. Beginning with the thesis that much of the information required to make the pass ratings is available in the trajectory signal, we further postulated that using complex data structures derived from computational geometry would enable domain football knowledge to be included in the model by computing metric variables in a principled and efficient manner. We designed a model that computes a vector of predictor variables for each pass made and uses machine learning techniques to determine a classification function that can accurately rate passes based only on the predictor variable vector.\n Experimental results show that the learned classification functions can rate passes with 90.2% accuracy. The agreement between the classifier ratings and the ratings made by a human observer is comparable to the agreement between the ratings made by human observers, and suggests that significantly higher accuracy is unlikely to be achieved. Furthermore, we show that the predictor variables computed using methods from computational geometry are among the most important to the learned classifiers.",
"title": ""
},
{
"docid": "57f5b00d796489b7f5caee701ce3116b",
"text": "SR-IOV capable network devices offer the benefits of direct I/O throughput and reduced CPU utilization while greatly increasing the scalability and sharing capabilities of the device. SR-IOV allows the benefits of the paravirtualized driver’s throughput increase and additional CPU usage reductions in HVMs (Hardware Virtual Machines). SR-IOV uses direct I/O assignment of a network device to multiple VMs, maximizing the potential for using the full bandwidth capabilities of the network device, as well as enabling unmodified guest OS based device drivers which will work for different underlying VMMs. Drawing on our recent experience in developing an SR-IOV capable networking solution for the Xen hypervisor we discuss the system level requirements and techniques for SR-IOV enablement on the platform. We discuss PCI configuration considerations, direct MMIO, interrupt handling and DMA into an HVM using an IOMMU (I/O Memory Management Unit). We then explain the architectural, design and implementation considerations for SR-IOV networking in Xen in which the Physical Function has a driver running in the driver domain that serves as a “master” and each Virtual Function exposed to a guest VM has its own virtual driver.",
"title": ""
},
{
"docid": "ae151d8ed9b8f99cfe22e593f381dd3b",
"text": "A common assumption in studies of interruptions is that one is focused in an activity and then distracted by other stimuli. We take the reverse perspective and examine whether one might first be in an attentional state that makes one susceptible to communications typically associated with distraction. We explore the confluence of multitasking and workplace communications from three temporal perspectives -- prior to an interaction, when tasks and communications are interleaved, and at the end of the day. Using logging techniques and experience sampling, we observed 32 employees in situ for five days. We found that certain attentional states lead people to be more susceptible to particular types of interaction. Rote work is followed by more Facebook or face-to-face interaction. Focused and aroused states are followed by more email. The more time in email and face-fo-face interaction, and the more total screen switches, the less productive people feel at the day's end. We present the notion of emotional homeostasis along with new directions for multitasking research.",
"title": ""
},
{
"docid": "4621856b479672433f9f9dff86d4f4da",
"text": "Reproducibility of computational studies is a hallmark of scientific methodology. It enables researchers to build with confidence on the methods and findings of others, reuse and extend computational pipelines, and thereby drive scientific progress. Since many experimental studies rely on computational analyses, biologists need guidance on how to set up and document reproducible data analyses or simulations. In this paper, we address several questions about reproducibility. For example, what are the technical and non-technical barriers to reproducible computational studies? What opportunities and challenges do computational notebooks offer to overcome some of these barriers? What tools are available and how can they be used effectively? We have developed a set of rules to serve as a guide to scientists with a specific focus on computational notebook systems, such as Jupyter Notebooks, which have become a tool of choice for many applications. Notebooks combine detailed workflows with narrative text and visualization of results. Combined with software repositories and open source licensing, notebooks are powerful tools for transparent, collaborative, reproducible, and reusable data analyses.",
"title": ""
},
{
"docid": "6d262139067d030c3ebb1169e93c6422",
"text": "In this paper, we present a study on learning visual recognition models from large scale noisy web data. We build a new database called WebVision, which contains more than 2.4 million web images crawled from the Internet by using queries generated from the 1, 000 semantic concepts of the ILSVRC 2012 benchmark. Meta information along with those web images (e.g., title, description, tags, etc.) are also crawled. A validation set and test set containing human annotated images are also provided to facilitate algorithmic development. Based on our new database, we obtain a few interesting observations: 1) the noisy web images are sufficient for training a good deep CNN model for visual recognition; 2) the model learnt from our WebVision database exhibits comparable or even better generalization ability than the one trained from the ILSVRC 2012 dataset when being transferred to new datasets and tasks; 3) a domain adaptation issue (a.k.a., dataset bias) is observed, which means the dataset can be used as the largest benchmark dataset for visual domain adaptation. Our new WebVision database and relevant studies in this work would benefit the advance of learning state-of-the-art visual models with minimum supervision based on web data.",
"title": ""
},
{
"docid": "f825dbbc9ff17178a81be71c5b9312ae",
"text": "Skills like computational thinking, problem solving, handling complexity, team-work and project management are essential for future careers and needs to be taught to students at the elementary level itself. Computer programming knowledge and skills, experiencing technology and conducting science and engineering experiments are also important for students at elementary level. However, teaching such skills effectively through active learning can be challenging for educators. In this paper, we present our approach and experiences in teaching such skills to several elementary level children using Lego Mindstorms EV3 robotics education kit. We describe our learning environment consisting of lessons, worksheets, hands-on activities and assessment. We taught students how to design, construct and program robots using components such as motors, sensors, wheels, axles, beams, connectors and gears. Students also gained knowledge on basic programming constructs such as control flow, loops, branches and conditions using a visual programming environment. We carefully observed how students performed various tasks and solved problems. We present experimental results which demonstrates that our teaching methodology consisting of both the course content and pedagogy was effective in imparting the desired skills and knowledge to elementary level children. The students also participated in a competitive World Robot Olympiad India event and qualified during the regional round which is an evidence of the effectiveness of the approach.",
"title": ""
},
{
"docid": "1a834cb0c5d72c6bc58c4898d318cfc2",
"text": "This paper proposes a novel single-stage high-power-factor ac/dc converter with symmetrical topology. The circuit topology is derived from the integration of two buck-boost power-factor-correction (PFC) converters and a full-bridge series resonant dc/dc converter. The switch-utilization factor is improved by using two active switches to serve in the PFC circuits. A high power factor at the input line is assured by operating the buck-boost converters at discontinuous conduction mode. With symmetrical operation and elaborately designed circuit parameters, zero-voltage switching on all the active power switches of the converter can be retained to achieve high circuit efficiency. The operation modes, design equations, and design steps for the circuit parameters are proposed. A prototype circuit designed for a 200-W dc output was built and tested to verify the analytical predictions. Satisfactory performances are obtained from the experimental results.",
"title": ""
},
{
"docid": "9bf99d48bc201147a9a9ad5af547a002",
"text": "Consider a biped evolving in the sagittal plane. The unexpected rotation of the supporting foot can be avoided by controlling the zero moment point (ZMP). The objective of this study is to propose and analyze a control strategy for simultaneously regulating the position of the ZMP and the joints of the robot. If the tracking requirements were posed in the time domain, the problem would be underactuated in the sense that the number of inputs would be less than the number of outputs. To get around this issue, the proposed controller is based on a path-following control strategy, previously developed for dealing with the underactuation present in planar robots without actuated ankles. In particular, the control law is defined in such a way that only the kinematic evolution of the robot's state is regulated, but not its temporal evolution. The asymptotic temporal evolution of the robot is completely defined through a one degree-of-freedom subsystem of the closed-loop model. Since the ZMP is controlled, bipedal walking that includes a prescribed rotation of the foot about the toe can also be considered. Simple analytical conditions are deduced that guarantee the existence of a periodic motion and the convergence toward this motion.",
"title": ""
},
{
"docid": "fdb0c8d2a4c4bbe68b7cffe58adbd074",
"text": "Endowing a chatbot with personality is challenging but significant to deliver more realistic and natural conversations. In this paper, we address the issue of generating responses that are coherent to a pre-specified personality or profile. We present a method that uses generic conversation data from social media (without speaker identities) to generate profile-coherent responses. The central idea is to detect whether a profile should be used when responding to a user post (by a profile detector), and if necessary, select a key-value pair from the profile to generate a response forward and backward (by a bidirectional decoder) so that a personalitycoherent response can be generated. Furthermore, in order to train the bidirectional decoder with generic dialogue data, a position detector is designed to predict a word position from which decoding should start given a profile value. Manual and automatic evaluation shows that our model can deliver more coherent, natural, and diversified responses.",
"title": ""
},
{
"docid": "055c9fad6d2f246fc1b6cbb1bce26a92",
"text": "This work uses deep learning models for daily directional movements prediction of a stock price using financial news titles and technical indicators as input. A comparison is made between two different sets of technical indicators, set 1: Stochastic %K, Stochastic %D, Momentum, Rate of change, William’s %R, Accumulation/Distribution (A/D) oscillator and Disparity 5; set 2: Exponential Moving Average, Moving Average Convergence-Divergence, Relative Strength Index, On Balance Volume and Bollinger Bands. Deep learning methods can detect and analyze complex patterns and interactions in the data allowing a more precise trading process. Experiments has shown that Convolutional Neural Network (CNN) can be better than Recurrent Neural Networks (RNN) on catching semantic from texts and RNN is better on catching the context information and modeling complex temporal characteristics for stock market forecasting. So, there are two models compared in this paper: a hybrid model composed by a CNN for the financial news and a Long Short-Term Memory (LSTM) for technical indicators, named as SI-RCNN; and a LSTM network only for technical indicators, named as I-RNN. The output of each model is used as input for a trading agent that buys stocks on the current day and sells the next day when the model predicts that the price is going up, otherwise the agent sells stocks on the current day and buys the next day. The proposed method shows a major role of financial news in stabilizing the results and almost no improvement when comparing different sets of technical indicators.",
"title": ""
},
{
"docid": "43db7c431cac1afd33f48774ee0dbc61",
"text": "We present a diff algorithm for XML data. This work is motivated by the support for change control in the context of the Xyleme project that is investigating dynamic warehouses capable of storing massive volume of XML data. Because of the context, our algorithm has to be very efficient in terms of speed and memory space even at the cost of some loss of “quality”. Also, it considers, besides insertions, deletions and updates (standard in diffs), a move operation on subtrees that is essential in the context of XML. Intuitively, our diff algorithm uses signatures to match (large) subtrees that were left unchanged between the old and new versions. Such exact matchings are then possibly propagated to ancestors and descendants to obtain more matchings. It also uses XML specific information such as ID attributes. We provide a performance analysis of the algorithm. We show that it runs in average in linear time vs. quadratic time for previous algorithms. We present experiments on synthetic data that confirm the analysis. Since this problem is NPhard, the linear time is obtained by trading some quality. We present experiments (again on synthetic data) that show that the output of our algorithm is reasonably close to the “optimal” in terms of quality. Finally we present experiments on a small sample of XML pages found on the Web.",
"title": ""
},
{
"docid": "04ed876237214c1366f966b80ebb7fd4",
"text": "Load Balancing is essential for efficient operations indistributed environments. As Cloud Computing is growingrapidly and clients are demanding more services and betterresults, load balancing for the Cloud has become a veryinteresting and important research area. Many algorithms weresuggested to provide efficient mechanisms and algorithms forassigning the client's requests to available Cloud nodes. Theseapproaches aim to enhance the overall performance of the Cloudand provide the user more satisfying and efficient services. Inthis paper, we investigate the different algorithms proposed toresolve the issue of load balancing and task scheduling in CloudComputing. We discuss and compare these algorithms to providean overview of the latest approaches in the field.",
"title": ""
},
{
"docid": "96e9c66453ba91d1bc44bb0242f038ce",
"text": "Body temperature is one of the key parameters for health monitoring of premature infants at the neonatal intensive care unit (NICU). In this paper, we propose and demonstrate a design of non-invasive neonatal temperature monitoring with wearable sensors. A negative temperature coefficient (NTC) resistor is applied as the temperature sensor due to its accuracy and small size. Conductive textile wires are used to make the sensor integration compatible for a wearable non-invasive monitoring platform, such as a neonatal smart jacket. Location of the sensor, materials and appearance are designed to optimize the functionality, patient comfort and the possibilities for aesthetic features. A prototype belt is built of soft bamboo fabrics with NTC sensor integrated to demonstrate the temperature monitoring. Experimental results from the testing on neonates at NICU of Máxima Medical Center (MMC), Veldhoven, the Netherlands, show the accurate temperature monitoring by the prototype belt comparing with the standard patient monitor.",
"title": ""
},
{
"docid": "2eebc7477084b471f9e9872ba8751359",
"text": "Despite significant progress in the development of human action detection datasets and algorithms, no current dataset is representative of real-world aerial view scenarios. We present Okutama-Action, a new video dataset for aerial view concurrent human action detection. It consists of 43 minute-long fully-annotated sequences with 12 action classes. Okutama-Action features many challenges missing in current datasets, including dynamic transition of actions, significant changes in scale and aspect ratio, abrupt camera movement, as well as multi-labeled actors. As a result, our dataset is more challenging than existing ones, and will help push the field forward to enable real-world applications.",
"title": ""
}
] | scidocsrr |
ad1d0433a6ca7d8d26521c8a6206608c | Actions speak as loud as words: predicting relationships from social behavior data | [
{
"docid": "cae43bdbf48e694b7fb509ea3b3392f1",
"text": "As user-generated content and interactions have overtaken the web as the default mode of use, questions of whom and what to trust have become increasingly important. Fortunately, online social networks and social media have made it easy for users to indicate whom they trust and whom they do not. However, this does not solve the problem since each user is only likely to know a tiny fraction of other users, we must have methods for inferring trust - and distrust - between users who do not know one another. In this paper, we present a new method for computing both trust and distrust (i.e., positive and negative trust). We do this by combining an inference algorithm that relies on a probabilistic interpretation of trust based on random graphs with a modified spring-embedding algorithm. Our algorithm correctly classifies hidden trust edges as positive or negative with high accuracy. These results are useful in a wide range of social web applications where trust is important to user behavior and satisfaction.",
"title": ""
},
{
"docid": "b12d3dfe42e5b7ee06821be7dcd11ab9",
"text": "Social media is a place where users present themselves to the world, revealing personal details and insights into their lives. We are beginning to understand how some of this information can be utilized to improve the users' experiences with interfaces and with one another. In this paper, we are interested in the personality of users. Personality has been shown to be relevant to many types of interactions, it has been shown to be useful in predicting job satisfaction, professional and romantic relationship success, and even preference for different interfaces. Until now, to accurately gauge users' personalities, they needed to take a personality test. This made it impractical to use personality analysis in many social media domains. In this paper, we present a method by which a user's personality can be accurately predicted through the publicly available information on their Twitter profile. We will describe the type of data collected, our methods of analysis, and the machine learning techniques that allow us to successfully predict personality. We then discuss the implications this has for social media design, interface design, and broader domains.",
"title": ""
}
] | [
{
"docid": "5e0110f6ae9698e8dd92aad22f1d9fcf",
"text": "Social networking sites (SNS) are especially attractive for adolescents, but it has also been shown that these users can suffer from negative psychological consequences when using these sites excessively. We analyze the role of fear of missing out (FOMO) and intensity of SNS use for explaining the link between psychopathological symptoms and negative consequences of SNS use via mobile devices. In an online survey, 1468 Spanish-speaking Latin-American social media users between 16 and 18 years old completed the Hospital Anxiety and Depression Scale (HADS), the Social Networking Intensity scale (SNI), the FOMO scale (FOMOs), and a questionnaire on negative consequences of using SNS via mobile device (CERM). Using structural equation modeling, it was found that both FOMO and SNI mediate the link between psychopathology and CERM, but by different mechanisms. Additionally, for girls, feeling depressed seems to trigger higher SNS involvement. For boys, anxiety triggers higher SNS involvement.",
"title": ""
},
{
"docid": "0441fb016923cd0b7676d3219951c230",
"text": "Globally modeling and reasoning over relations between regions can be beneficial for many computer vision tasks on both images and videos. Convolutional Neural Networks (CNNs) excel at modeling local relations by convolution operations, but they are typically inefficient at capturing global relations between distant regions and require stacking multiple convolution layers. In this work, we propose a new approach for reasoning globally in which a set of features are globally aggregated over the coordinate space and then projected to an interaction space where relational reasoning can be efficiently computed. After reasoning, relation-aware features are distributed back to the original coordinate space for down-stream tasks. We further present a highly efficient instantiation of the proposed approach and introduce the Global Reasoning unit (GloRe unit) that implements the coordinate-interaction space mapping by weighted global pooling and weighted broadcasting, and the relation reasoning via graph convolution on a small graph in interaction space. The proposed GloRe unit is lightweight, end-to-end trainable and can be easily plugged into existing CNNs for a wide range of tasks. Extensive experiments show our GloRe unit can consistently boost the performance of state-of-the-art backbone architectures, including ResNet [15, 16], ResNeXt [33], SE-Net [18] and DPN [9], for both 2D and 3D CNNs, on image classification, semantic segmentation and video action recognition task.",
"title": ""
},
{
"docid": "48623054af5217d48b05aed57a67ae66",
"text": "This paper proposes an ontology-based approach to analyzing and assessing the security posture for software products. It provides measurements of trust for a software product based on its security requirements and evidence of assurance, which are retrieved from an ontology built for vulnerability management. Our approach differentiates with the previous work in the following aspects: (1) It is a holistic approach emphasizing that the system assurance cannot be determined or explained by its component assurance alone. Instead, the software system as a whole determines its assurance level. (2) Our approach is based on widely accepted standards such as CVSS, CVE, CWE, CPE, and CAPEC. Our ontology integrated these standards seamlessly thus provides a solid foundation for security assessment. (3) Automated tools have been built to support our approach, delivering the environmental scores for software products.",
"title": ""
},
{
"docid": "0c0388754f2964f1db05df3b62cd7389",
"text": "Considerable research has been devoted to utilizing multimodal features for better understanding multimedia data. However, two core research issues have not yet been adequately addressed. First, given a set of features extracted from multiple media sources (e.g., extracted from the visual, audio, and caption track of videos), how do we determine the best modalities? Second, once a set of modalities has been identified, how do we best fuse them to map to semantics? In this paper, we propose a two-step approach. The first step finds <i>statistically independent modalities</i> from raw features. In the second step, we use <i>super-kernel fusion</i> to determine the optimal combination of individual modalities. We carefully analyze the tradeoffs between three design factors that affect fusion performance: <i>modality independence</i>, <i>curse of dimensionality</i>, and <i>fusion-model complexity</i>. Through analytical and empirical studies, we demonstrate that our two-step approach, which achieves a careful balance of the three design factors, can improve class-prediction accuracy over traditional techniques.",
"title": ""
},
{
"docid": "8dd6a3cbe9ddb4c50beb83355db5aa5a",
"text": "Fuzzy logic controllers have gained popularity in the past few decades with highly successful implementation in many fields. Fuzzy logic enables designers to control complex systems more effectively than traditional methods. Teaching students fuzzy logic in a laboratory can be a time-consuming and an expensive task. This paper presents a low-cost educational microcontroller-based tool for fuzzy logic controlled line following mobile robot. The robot is used in the second year of undergraduate teaching in an elective course in the department of computer engineering of the Near East University. Hardware details of the robot and the software implementing the fuzzy logic control algorithm are given in the paper. 2009 Wiley Periodicals, Inc. Comput Appl Eng Educ; Published online in Wiley InterScience (www.interscience.wiley.com); DOI 10.1002/cae.20347",
"title": ""
},
{
"docid": "0da4b25ce3d4449147f7258d0189165f",
"text": "We present Listen, Attend and Spell (LAS), a neural speech recognizer that transcribes speech utterances directly to characters without pronunciation models, HMMs or other components of traditional speech recognizers. In LAS, the neural network architecture subsumes the acoustic, pronunciation and language models making it not only an end-to-end trained system but an end-to-end model. In contrast to DNN-HMM, CTC and most other models, LAS makes no independence assumptions about the probability distribution of the output character sequences given the acoustic sequence. Our system has two components: a listener and a speller. The listener is a pyramidal recurrent network encoder that accepts filter bank spectra as inputs. The speller is an attention-based recurrent network decoder that emits each character conditioned on all previous characters, and the entire acoustic sequence. On a Google voice search task, LAS achieves a WER of 14.1% without a dictionary or an external language model and 10.3% with language model rescoring over the top 32 beams. In comparison, the state-of-the-art CLDNN-HMM model achieves a WER of 8.0% on the same set.",
"title": ""
},
{
"docid": "62e445cabbb5c79375f35d7b93f9a30d",
"text": "The recent outbreak of indie games has popularized volumetric terrains to a new level, although video games have used them for decades. These terrains contain geological data, such as materials or cave systems. To improve the exploration experience and due to the large amount of data needed to construct volumetric terrains, industry uses procedural methods to generate them. However, they use their own methods, which are focused on their specific problem domains, lacking customization features. Besides, the evaluation of the procedural terrain generators remains an open issue in this field since no standard metrics have been established yet. In this paper, we propose a new approach to procedural volumetric terrains. It generates completely customizable volumetric terrains with layered materials and other features (e.g., mineral veins, underground caves, material mixtures and underground material flow). The method allows the designer to specify the characteristics of the terrain using intuitive parameters. Additionally, it uses a specific representation for the terrain based on stacked material structures, reducing memory requirements. To overcome the problem in the evaluation of the generators, we propose a new set of metrics for the generated content.",
"title": ""
},
{
"docid": "3afea784f4a9eb635d444a503266d7cd",
"text": "Gallium nitride high-electron mobility transistors (GaN HEMTs) have attractive properties, low on-resistances and fast switching speeds. This paper presents the characteristics of a normally-on GaN HEMT that we fabricated. Further, the circuit operation of a Class-E amplifier is analyzed. Experimental results demonstrate the excellent performance of the gate drive circuit for the normally-on GaN HEMT and the 13.56MHz radio frequency (RF) power amplifier.",
"title": ""
},
{
"docid": "61c3f890943c34736564680dca3aae4a",
"text": "Secondary nocturnal enuresis accounts for about one quarter of patients with bed-wetting. Although a psychological cause is responsible in some children, various other causes are possible and should be considered. This article reviews the epidemiology, psychological and social impact, causes, investigation, management, and prognosis of secondary nocturnal enuresis.",
"title": ""
},
{
"docid": "a2246533e2973193586e2a3c8e672c10",
"text": "Krill Herd (KH) optimization algorithm was recently proposed based on herding behavior of krill individuals in the nature for solving optimization problems. In this paper, we develop Standard Krill Herd (SKH) algorithm and propose Fuzzy Krill Herd (FKH) optimization algorithm which is able to dynamically adjust the participation amount of exploration and exploitation by looking the progress of solving the problem in each step. In order to evaluate the proposed FKH algorithm, we utilize some standard benchmark functions and also Inventory Control Problem. Experimental results indicate the superiority of our proposed FKH optimization algorithm in comparison with the standard KH optimization algorithm.",
"title": ""
},
{
"docid": "991a388d1159667a5b2494ded71c5abe",
"text": "Organizations around the world have called for the responsible development of nanotechnology. The goals of this approach are to emphasize the importance of considering and controlling the potential adverse impacts of nanotechnology in order to develop its capabilities and benefits. A primary area of concern is the potential adverse impact on workers, since they are the first people in society who are exposed to the potential hazards of nanotechnology. Occupational safety and health criteria for defining what constitutes responsible development of nanotechnology are needed. This article presents five criterion actions that should be practiced by decision-makers at the business and societal levels-if nanotechnology is to be developed responsibly. These include (1) anticipate, identify, and track potentially hazardous nanomaterials in the workplace; (2) assess workers' exposures to nanomaterials; (3) assess and communicate hazards and risks to workers; (4) manage occupational safety and health risks; and (5) foster the safe development of nanotechnology and realization of its societal and commercial benefits. All these criteria are necessary for responsible development to occur. Since it is early in the commercialization of nanotechnology, there are still many unknowns and concerns about nanomaterials. Therefore, it is prudent to treat them as potentially hazardous until sufficient toxicology, and exposure data are gathered for nanomaterial-specific hazard and risk assessments. In this emergent period, it is necessary to be clear about the extent of uncertainty and the need for prudent actions.",
"title": ""
},
{
"docid": "b83eb2f78c4b48cf9b1ca07872d6ea1a",
"text": "Network Function Virtualization (NFV) is emerging as one of the most innovative concepts in the networking landscape. By migrating network functions from dedicated mid-dleboxes to general purpose computing platforms, NFV can effectively reduce the cost to deploy and to operate large networks. However, in order to achieve its full potential, NFV needs to encompass also the radio access network allowing Mobile Virtual Network Operators to deploy custom resource allocation solutions within their virtual radio nodes. Such requirement raises several challenges in terms of performance isolation and resource provisioning. In this work we formalize the Virtual Network Function (VNF) placement problem for radio access networks as an integer linear programming problem and we propose a VNF placement heuristic. Moreover, we also present a proof-of-concept implementation of an NFV management and orchestration framework for Enterprise WLANs. The proposed architecture builds upon a programmable network fabric where pure forwarding nodes are mixed with radio and packet processing nodes leveraging on general computing platforms.",
"title": ""
},
{
"docid": "7697aa5665f4699f2000779db2b0d24f",
"text": "The majority of smart devices used nowadays (e.g., smartphones, laptops, tablets) is capable of both Wi-Fi and Bluetooth wireless communications. Both network interfaces are identified by a unique 48-bits MAC address, assigned during the manufacturing process and unique worldwide. Such addresses, fundamental for link-layer communications and contained in every frame transmitted by the device, can be easily collected through packet sniffing and later used to perform higher level analysis tasks (user tracking, crowd density estimation, etc.). In this work we propose a system to pair the Wi-Fi and Bluetooth MAC addresses belonging to a physical unique device, starting from packets captured through a network of wireless sniffers. We propose several algorithms to perform such a pairing and we evaluate their performance through experiments in a controlled scenario. We show that the proposed algorithms can pair the MAC addresses with good accuracy. The findings of this paper may be useful to improve the precision of indoor localization and crowd density estimation systems and open some questions on the privacy issues of Wi-Fi and Bluetooth enabled devices.",
"title": ""
},
{
"docid": "adf57fe7ec7ab1481561f7664110a1e8",
"text": "This paper presents a scalable 28-GHz phased-array architecture suitable for fifth-generation (5G) communication links based on four-channel ( $2\\times 2$ ) transmit/receive (TRX) quad-core chips in SiGe BiCMOS with flip-chip packaging. Each channel of the quad-core beamformer chip has 4.6-dB noise figure (NF) in the receive (RX) mode and 10.5-dBm output 1-dB compression point (OP1dB) in the transmit (TX) mode with 6-bit phase control and 14-dB gain control. The phase change with gain control is only ±3°, allowing orthogonality between the variable gain amplifier and the phase shifter. The chip has high RX linearity (IP1dB = −22 dBm/channel) and consumes 130 mW in the RX mode and 200 mW in the TX mode at P1dB per channel. Advantages of the scalable all-RF beamforming architecture and circuit design techniques are discussed in detail. 4- and 32-element phased-arrays are demonstrated with detailed data link measurements using a single or eight of the four-channel TRX core chips on a low-cost printed circuit board with microstrip antennas. The 32-element array achieves an effective isotropic radiated power (EIRP) of 43 dBm at P1dB, a 45-dBm saturated EIRP, and a record-level system NF of 5.2 dB when the beamformer loss and transceiver NF are taken into account and can scan to ±50° in azimuth and ±25° in elevation with < −12-dB sidelobes and without any phase or amplitude calibration. A wireless link is demonstrated using two 32-element phased-arrays with a state-of-the-art data rate of 1.0–1.6 Gb/s in a single beam using 16-QAM waveforms over all scan angles at a link distance of 300 m.",
"title": ""
},
{
"docid": "0496af98bbef3d4d6f5e7a67e9ef5508",
"text": "Cancer is second only to heart disease as a cause of death in the US, with a further negative economic impact on society. Over the past decade, details have emerged which suggest that different glycosylphosphatidylinositol (GPI)-anchored proteins are fundamentally involved in a range of cancers. This post-translational glycolipid modification is introduced into proteins via the action of the enzyme GPI transamidase (GPI-T). In 2004, PIG-U, one of the subunits of GPI-T, was identified as an oncogene in bladder cancer, offering a direct connection between GPI-T and cancer. GPI-T is a membrane-bound, multi-subunit enzyme that is poorly understood, due to its structural complexity and membrane solubility. This review is divided into three sections. First, we describe our current understanding of GPI-T, including what is known about each subunit and their roles in the GPI-T reaction. Next, we review the literature connecting GPI-T to different cancers with an emphasis on the variations in GPI-T subunit over-expression. Finally, we discuss some of the GPI-anchored proteins known to be involved in cancer onset and progression and that serve as potential biomarkers for disease-selective therapies. Given that functions for only one of GPI-T's subunits have been robustly assigned, the separation between healthy and malignant GPI-T activity is poorly defined.",
"title": ""
},
{
"docid": "e5f2e7b7dfdfaee33a2187a0a7183cfb",
"text": "BACKGROUND\nPossible associations between television viewing and video game playing and children's aggression have become public health concerns. We did a systematic review of studies that examined such associations, focussing on children and young people with behavioural and emotional difficulties, who are thought to be more susceptible.\n\n\nMETHODS\nWe did computer-assisted searches of health and social science databases, gateways, publications from relevant organizations and for grey literature; scanned bibliographies; hand-searched key journals; and corresponded with authors. We critically appraised all studies.\n\n\nRESULTS\nA total of 12 studies: three experiments with children with behavioural and emotional difficulties found increased aggression after watching aggressive as opposed to low-aggressive content television programmes, one found the opposite and two no clear effect, one found such children no more likely than controls to imitate aggressive television characters. One case-control study and one survey found that children and young people with behavioural and emotional difficulties watched more television than controls; another did not. Two studies found that children and young people with behavioural and emotional difficulties viewed more hours of aggressive television programmes than controls. One study on video game use found that young people with behavioural and emotional difficulties viewed more minutes of violence and played longer than controls. In a qualitative study children with behavioural and emotional difficulties, but not their parents, did not associate watching television with aggression. All studies had significant methodological flaws. None was based on power calculations.\n\n\nCONCLUSION\nThis systematic review found insufficient, contradictory and methodologically flawed evidence on the association between television viewing and video game playing and aggression in children and young people with behavioural and emotional difficulties. If public health advice is to be evidence-based, good quality research is needed.",
"title": ""
},
{
"docid": "d60f812bb8036a2220dab8740f6a74c4",
"text": "UNLABELLED\nThe limit of the Colletotrichum gloeosporioides species complex is defined genetically, based on a strongly supported clade within the Colletotrichum ITS gene tree. All taxa accepted within this clade are morphologically more or less typical of the broadly defined C. gloeosporioides, as it has been applied in the literature for the past 50 years. We accept 22 species plus one subspecies within the C. gloeosporioides complex. These include C. asianum, C. cordylinicola, C. fructicola, C. gloeosporioides, C. horii, C. kahawae subsp. kahawae, C. musae, C. nupharicola, C. psidii, C. siamense, C. theobromicola, C. tropicale, and C. xanthorrhoeae, along with the taxa described here as new, C. aenigma, C. aeschynomenes, C. alatae, C. alienum, C. aotearoa, C. clidemiae, C. kahawae subsp. ciggaro, C. salsolae, and C. ti, plus the nom. nov. C. queenslandicum (for C. gloeosporioides var. minus). All of the taxa are defined genetically on the basis of multi-gene phylogenies. Brief morphological descriptions are provided for species where no modern description is available. Many of the species are unable to be reliably distinguished using ITS, the official barcoding gene for fungi. Particularly problematic are a set of species genetically close to C. musae and another set of species genetically close to C. kahawae, referred to here as the Musae clade and the Kahawae clade, respectively. Each clade contains several species that are phylogenetically well supported in multi-gene analyses, but within the clades branch lengths are short because of the small number of phylogenetically informative characters, and in a few cases individual gene trees are incongruent. Some single genes or combinations of genes, such as glyceraldehyde-3-phosphate dehydrogenase and glutamine synthetase, can be used to reliably distinguish most taxa and will need to be developed as secondary barcodes for species level identification, which is important because many of these fungi are of biosecurity significance. In addition to the accepted species, notes are provided for names where a possible close relationship with C. gloeosporioides sensu lato has been suggested in the recent literature, along with all subspecific taxa and formae speciales within C. gloeosporioides and its putative teleomorph Glomerella cingulata.\n\n\nTAXONOMIC NOVELTIES\nName replacement - C. queenslandicum B. Weir & P.R. Johnst. New species - C. aenigma B. Weir & P.R. Johnst., C. aeschynomenes B. Weir & P.R. Johnst., C. alatae B. Weir & P.R. Johnst., C. alienum B. Weir & P.R. Johnst, C. aotearoa B. Weir & P.R. Johnst., C. clidemiae B. Weir & P.R. Johnst., C. salsolae B. Weir & P.R. Johnst., C. ti B. Weir & P.R. Johnst. New subspecies - C. kahawae subsp. ciggaro B. Weir & P.R. Johnst. Typification: Epitypification - C. queenslandicum B. Weir & P.R. Johnst.",
"title": ""
},
{
"docid": "32bb9f12da68d89a897c8fc7937c0a7d",
"text": "In recent years, the videogame industry has been characterized by a great boost in gesture recognition and motion tracking, following the increasing request of creating immersive game experiences. The Microsoft Kinect sensor allows acquiring RGB, IR and depth images with a high frame rate. Because of the complementary nature of the information provided, it has proved an attractive resource for researchers with very different backgrounds. In summer 2014, Microsoft launched a new generation of Kinect on the market, based on time-of-flight technology. This paper proposes a calibration of Kinect for Xbox One imaging sensors, focusing on the depth camera. The mathematical model that describes the error committed by the sensor as a function of the distance between the sensor itself and the object has been estimated. All the analyses presented here have been conducted for both generations of Kinect, in order to quantify the improvements that characterize every single imaging sensor. Experimental results show that the quality of the delivered model improved applying the proposed calibration procedure, which is applicable to both point clouds and the mesh model created with the Microsoft Fusion Libraries.",
"title": ""
},
{
"docid": "549d486d6ff362bc016c6ce449e29dc9",
"text": "Aging is very often associated with magnesium (Mg) deficit. Total plasma magnesium concentrations are remarkably constant in healthy subjects throughout life, while total body Mg and Mg in the intracellular compartment tend to decrease with age. Dietary Mg deficiencies are common in the elderly population. Other frequent causes of Mg deficits in the elderly include reduced Mg intestinal absorption, reduced Mg bone stores, and excess urinary loss. Secondary Mg deficit in aging may result from different conditions and diseases often observed in the elderly (i.e. insulin resistance and/or type 2 diabetes mellitus) and drugs (i.e. use of hypermagnesuric diuretics). Chronic Mg deficits have been linked to an increased risk of numerous preclinical and clinical outcomes, mostly observed in the elderly population, including hypertension, stroke, atherosclerosis, ischemic heart disease, cardiac arrhythmias, glucose intolerance, insulin resistance, type 2 diabetes mellitus, endothelial dysfunction, vascular remodeling, alterations in lipid metabolism, platelet aggregation/thrombosis, inflammation, oxidative stress, cardiovascular mortality, asthma, chronic fatigue, as well as depression and other neuropsychiatric disorders. Both aging and Mg deficiency have been associated to excessive production of oxygen-derived free radicals and low-grade inflammation. Chronic inflammation and oxidative stress are also present in several age-related diseases, such as many vascular and metabolic conditions, as well as frailty, muscle loss and sarcopenia, and altered immune responses, among others. Mg deficit associated to aging may be at least one of the pathophysiological links that may help to explain the interactions between inflammation and oxidative stress with the aging process and many age-related diseases.",
"title": ""
},
{
"docid": "939f05a2265c6ab21b273a8127806279",
"text": "Acne is a common inflammatory disease. Scarring is an unwanted end point of acne. Both atrophic and hypertrophic scar types occur. Soft-tissue augmentation aims to improve atrophic scars. In this review, we will focus on the use of dermal fillers for acne scar improvement. Therefore, various filler types are characterized, and available data on their use in acne scar improvement are analyzed.",
"title": ""
}
] | scidocsrr |
9964a76f995125776e2fc1a30d248fec | The dawn of the liquid biopsy in the fight against cancer | [
{
"docid": "aa234355d0b0493e1d8c7a04e7020781",
"text": "Cancer is associated with mutated genes, and analysis of tumour-linked genetic alterations is increasingly used for diagnostic, prognostic and treatment purposes. The genetic profile of solid tumours is currently obtained from surgical or biopsy specimens; however, the latter procedure cannot always be performed routinely owing to its invasive nature. Information acquired from a single biopsy provides a spatially and temporally limited snap-shot of a tumour and might fail to reflect its heterogeneity. Tumour cells release circulating free DNA (cfDNA) into the blood, but the majority of circulating DNA is often not of cancerous origin, and detection of cancer-associated alleles in the blood has long been impossible to achieve. Technological advances have overcome these restrictions, making it possible to identify both genetic and epigenetic aberrations. A liquid biopsy, or blood sample, can provide the genetic landscape of all cancerous lesions (primary and metastases) as well as offering the opportunity to systematically track genomic evolution. This Review will explore how tumour-associated mutations detectable in the blood can be used in the clinic after diagnosis, including the assessment of prognosis, early detection of disease recurrence, and as surrogates for traditional biopsies with the purpose of predicting response to treatments and the development of acquired resistance.",
"title": ""
}
] | [
{
"docid": "fc9eae18a5a44ee7df22d6c7bdb5a164",
"text": "In this paper, methods are shown how to adapt invertible two-dimensional chaotic maps on a torus or on a square to create new symmetric block encryption schemes. A chaotic map is first generalized by introducing parameters and then discretized to a finite square lattice of points which represent pixels or some other data items. Although the discretized map is a permutation and thus cannot be chaotic, it shares certain properties with its continuous counterpart as long as the number of iterations remains small. The discretized map is further extended to three dimensions and composed with a simple diffusion mechanism. As a result, a symmetric block product encryption scheme is obtained. To encrypt an N × N image, the ciphering map is iteratively applied to the image. The construction of the cipher and its security is explained with the two-dimensional Baker map. It is shown that the permutations induced by the Baker map behave as typical random permutations. Computer simulations indicate that the cipher has good diffusion properties with respect to the plain-text and the key. A nontraditional pseudo-random number generator based on the encryption scheme is described and studied. Examples of some other two-dimensional chaotic maps are given and their suitability for secure encryption is discussed. The paper closes with a brief discussion of a possible relationship between discretized chaos and cryptosystems.",
"title": ""
},
{
"docid": "1bfc1972a32222a1b5816bb040040374",
"text": "BACKGROUND\nSkeletal muscle is key to motor development and represents a major metabolic end organ that aids glycaemic regulation.\n\n\nOBJECTIVES\nTo create gender-specific reference curves for fat-free mass (FFM) and appendicular (limb) skeletal muscle mass (SMMa) in children and adolescents. To examine the muscle-to-fat ratio in relation to body mass index (BMI) for age and gender.\n\n\nMETHODS\nBody composition was measured by segmental bioelectrical impedance (BIA, Tanita BC418) in 1985 Caucasian children aged 5-18.8 years. Skeletal muscle mass data from the four limbs were used to derive smoothed centile curves and the muscle-to-fat ratio.\n\n\nRESULTS\nThe centile curves illustrate the developmental patterns of %FFM and SMMa. While the %FFM curves differ markedly between boys and girls, the SMMa (kg), %SMMa and %SMMa/FFM show some similarities in shape and variance, together with some gender-specific characteristics. Existing BMI curves do not reveal these gender differences. Muscle-to-fat ratio showed a very wide range with means differing between boys and girls and across fifths of BMI z-score.\n\n\nCONCLUSIONS\nBIA assessment of %FFM and SMMa represents a significant advance in nutritional assessment since these body composition components are associated with metabolic health. Muscle-to-fat ratio has the potential to provide a better index of future metabolic health.",
"title": ""
},
{
"docid": "32817233f5aa05036ca292e7b57143fb",
"text": "Asphalt pavement distresses have significant importance in roads and highways. This paper addresses the detection and localization of one of the key pavement distresses, the potholes using computer vision. Different kinds of pothole and non-pothole images from asphalt pavement are considered for experimentation. Considering the appearance-shape based nature of the potholes, Histograms of oriented gradients (HOG) features are computed for the input images. Features are trained and classified using Naïve Bayes classifier resulting in labeling of the input as pothole or non-pothole image. To locate the pothole in the detected pothole images, normalized graph cut segmentation scheme is employed. Proposed scheme is tested on a dataset having broad range of pavement images. Experimentation results showed 90 % accuracy for the detection of pothole images and high recall for the localization of pothole in the detected images.",
"title": ""
},
{
"docid": "6851e4355ab4825b0eb27ac76be2329f",
"text": "Segmentation of novel or dynamic objects in a scene, often referred to as “background subtraction” or “foreground segmentation”, is a critical early in step in most computer vision applications in domains such as surveillance and human-computer interaction. All previously described, real-time methods fail to handle properly one or more common phenomena, such as global illumination changes, shadows, inter-reflections, similarity of foreground color to background, and non-static backgrounds (e.g. active video displays or trees waving in the wind). The recent advent of hardware and software for real-time computation of depth imagery makes better approaches possible. We propose a method for modeling the background that uses per-pixel, time-adaptive, Gaussian mixtures in the combined input space of depth and luminance-invariant color. This combination in itself is novel, but we further improve it by introducing the ideas of 1) modulating the background model learning rate based on scene activity, and 2) making colorbased segmentation criteria dependent on depth observations. Our experiments show that the method possesses much greater robustness to problematic phenomena than the prior state-of-the-art, without sacrificing real-time performance, making it well-suited for a wide range of practical applications in video event detection and recognition.",
"title": ""
},
{
"docid": "b72bc9ee1c32ec3d268abd1d3e51db25",
"text": "As a newly developing academic domain, researches on Mobile learning are still in their initial stage. Meanwhile, M-blackboard comes from Mobile learning. This study attempts to discover the factors impacting the intention to adopt mobile blackboard. Eleven selected model on the Mobile learning adoption were comprehensively reviewed. From the reviewed articles, the most factors are identified. Also, from the frequency analysis, the most frequent factors in the Mobile blackboard or Mobile learning adoption studies are performance expectancy, effort expectancy, perceived playfulness, facilitating conditions, self-management, cost and past experiences. The descriptive statistic was performed to gather the respondents’ demographic information. It also shows that the respondents agreed on nearly every statement item. Pearson correlation and regression analysis were also conducted.",
"title": ""
},
{
"docid": "0dd4f05f9bd3d582b9fb9c64f00ed697",
"text": "Today, among other challenges, teaching students how to write computer programs for the first time can be an important criterion for whether students in computing will remain in their program of study, i.e. Computer Science or Information Technology. Not learning to program a computer as a computer scientist or information technologist can be compared to a mathematician not learning algebra. For a mathematician this would be an extremely limiting situation. For a computer scientist, not learning to program imposes a similar severe limitation on the budding computer scientist. Therefore it is not a question as to whether programming should be taught rather it is a question of how to maximize aspects of teaching programming so that students are less likely to be discouraged when learning to program. Different criteria have been used to select first programming languages. Computer scientists have attempted to establish criteria for selecting the first programming language to teach a student. This paper examines the criteria used to select first programming languages and the issues that novices face when learning to program in an effort to create a more comprehensive model for selecting first programming languages.",
"title": ""
},
{
"docid": "c26eabb377db5f1033ec6d354d890a6f",
"text": "Recurrent neural networks have recently shown significant potential in different language applications, ranging from natural language processing to language modelling. This paper introduces a research effort to use such networks to develop and evaluate natural language acquisition on a humanoid robot. Here, the problem is twofold. First, the focus will be put on using the gesture-word combination stage observed in infants to transition from single to multi-word utterances. Secondly, research will be carried out in the domain of connecting action learning with language learning. In the former, the long-short term memory architecture will be implemented, whilst in the latter multiple time-scale recurrent neural networks will be used. This will allow for comparison between the two architectures, whilst highlighting the strengths and shortcomings of both with respect to the language learning problem. Here, the main research efforts, challenges and expected outcomes are described.",
"title": ""
},
{
"docid": "665fb08aba7cc1a2d6680bccb259396f",
"text": "Sample entropy (SampEn) has been proposed as a method to overcome limitations associated with approximate entropy (ApEn). The initial paper describing the SampEn metric included a characterization study comparing both ApEn and SampEn against theoretical results and concluded that SampEn is both more consistent and agrees more closely with theory for known random processes than ApEn. SampEn has been used in several studies to analyze the regularity of clinical and experimental time series. However, questions regarding how to interpret SampEn in certain clinical situations and its relationship to classical signal parameters remain unanswered. In this paper we report the results of a characterization study intended to provide additional insights regarding the interpretability of SampEn in the context of biomedical signal analysis.",
"title": ""
},
{
"docid": "323d633995296611c903874aefa5cdb7",
"text": "This paper investigates the possibility of communicating through vibrations. By modulating the vibration motors available in all mobile phones, and decoding them through accelerometers, we aim to communicate small packets of information. Of course, this will not match the bit rates available through RF modalities, such as NFC or Bluetooth, which utilize a much larger bandwidth. However, where security is vital, vibratory communication may offer advantages. We develop Ripple, a system that achieves up to 200 bits/s of secure transmission using off-the-shelf vibration motor chips, and 80 bits/s on Android smartphones. This is an outcome of designing and integrating a range of techniques, including multicarrier modulation, orthogonal vibration division, vibration braking, side-channel jamming, etc. Not all these techniques are novel; some are borrowed and suitably modified for our purposes, while others are unique to this relatively new platform of vibratory communication.",
"title": ""
},
{
"docid": "ccd356a943f19024478c42b5db191293",
"text": "This paper discusses the relationship between concepts of narrative, patterns of interaction within computer games constituting gameplay gestalts, and the relationship between narrative and the gameplay gestalt. The repetitive patterning involved in gameplay gestalt formation is found to undermine deep narrative immersion. The creation of stronger forms of interactive narrative in games requires the resolution of this confl ict. The paper goes on to describe the Purgatory Engine, a game engine based upon more fundamentally dramatic forms of gameplay and interaction, supporting a new game genre referred to as the fi rst-person actor. The fi rst-person actor does not involve a repetitive gestalt mode of gameplay, but defi nes gameplay in terms of character development and dramatic interaction.",
"title": ""
},
{
"docid": "34b7073f947888694053cb421544cb37",
"text": "Many fundamental image-related problems involve deconvolution operators. Real blur degradation seldom complies with an ideal linear convolution model due to camera noise, saturation, image compression, to name a few. Instead of perfectly modeling outliers, which is rather challenging from a generative model perspective, we develop a deep convolutional neural network to capture the characteristics of degradation. We note directly applying existing deep neural networks does not produce reasonable results. Our solution is to establish the connection between traditional optimization-based schemes and a neural network architecture where a novel, separable structure is introduced as a reliable support for robust deconvolution against artifacts. Our network contains two submodules, both trained in a supervised manner with proper initialization. They yield decent performance on non-blind image deconvolution compared to previous generative-model based methods.",
"title": ""
},
{
"docid": "d7a85bedea94e2e70f9ad52c6247f8d3",
"text": "Little is known about the perception of artificial spatial hearing by hearing-impaired subjects. The purpose of this study was to investigate how listeners with hearing disorders perceived the effect of a spatialization feature designed for wireless microphone systems. Forty listeners took part in the experiments. They were arranged in four groups: normal-hearing, moderate, severe, and profound hearing loss. Their performance in terms of speech understanding and speaker localization was assessed with diotic and binaural stimuli. The results of the speech intelligibility experiment revealed that the subjects presenting a moderate or severe hearing impairment better understood speech with the spatialization feature. Thus, it was demonstrated that the conventional diotic binaural summation operated by current wireless systems can be transformed to reproduce the spatial cues required to localize the speaker, without any loss of intelligibility. The speaker localization experiment showed that a majority of the hearing-impaired listeners had similar performance with natural and artificial spatial hearing, contrary to the normal-hearing listeners. This suggests that certain subjects with hearing impairment preserve their localization abilities with approximated generic head-related transfer functions in the frontal horizontal plane.",
"title": ""
},
{
"docid": "8d071dbd68902f3bac18e61caa0828dd",
"text": "This paper demonstrates that it is possible to construct the Stochastic flash ADC using standard digital cells. In order to minimize the analog circuit requirements which cost high, it is appropriate to begin the architecture with highly digital. The proposed Stochastic flash ADC uses a random comparator offset to set the trip points. Since the comparator are no longer sized for small offset, they can be shrunk down into digital cells. Using comparators that are implemented as digital cells produces a large variation of comparator offset. Typically, this is considered a disadvantage, but in our case, this large standard deviation of offset is used to set the input signal range. By designing an ADC that is made up entirely of digital cells, it is natural candidate for a synthesizable ADC. The analog comparator which is used in this ADC is constructed from standard digital NAND gates connected with SR latch to minimize the memory effects. A Wallace tree adder is used to sum the total number of comparator output, since the order of comparator output is random. Thus, all the components including the comparator and Wallace tree adder can be implemented using standard digital cells. [1] INTRODUCTION As CMOS designs are scaled to smaller technology nodes, many benefits arise, as well as challenges. There are benefits in speed and power due to decreased capacitance and lower supply voltage, yet reduction in intrinsic device gain and lower supply voltage make it difficult to migrate previous analog designs to smaller scaled processes. Moreover, as scaling trends continue, the analog portion of a mixed-signal system tends to consume proportionally more power and area and have a higher design cost than the digital counterpart. This tends to increase the overall design cost of the mixed-signal design. Automatically synthesized digital circuits get all the benefits of scaling, but analog circuits get these benefits at a large cost. The most essential component of ADC is the comparator, which translates from the analog world to digital world. Since comparator defines the boundary between analog and digital realms, the flash ADC architecture will be considered, as it places the comparator as close to the analog input signal. Flash ADCs use a reference ladder to generate the comparator trip points that correspond to each digital code. Typically the references are either generated by a resistor ladder or some form of analog interpolation, but the effect is the same: a …",
"title": ""
},
{
"docid": "4100a10b2a03f3a1ba712901cee406d2",
"text": "Traditionally, many clinicians tend to forego esthetic considerations when full-coverage restorations are indicated for pediatric patients with primary dentitions. However, the availability of new zirconia pediatric crowns and reliable techniques for cementation makes esthetic outcomes practical and consistent when restoring primary dentition. Two cases are described: a 3-year-old boy who presented with severe early childhood caries affecting both anterior and posterior teeth, and a 6-year-old boy who presented with extensive caries of his primary posterior dentition, including a molar requiring full coverage. The parents of both boys were concerned about esthetics, and the extent of decay indicated the need for full-coverage restorations. This led to the boys receiving treatment using a restorative procedure in which the carious teeth were prepared for and restored with esthetic tooth-colored zirconia crowns. In both cases, comfortable function and pleasing esthetics were achieved.",
"title": ""
},
{
"docid": "b6b9e1eaf17f6cdbc9c060e467021811",
"text": "Tumour-associated viruses produce antigens that, on the face of it, are ideal targets for immunotherapy. Unfortunately, these viruses are experts at avoiding or subverting the host immune response. Cervical-cancer-associated human papillomavirus (HPV) has a battery of immune-evasion mechanisms at its disposal that could confound attempts at HPV-directed immunotherapy. Other virally associated human cancers might prove similarly refractive to immuno-intervention unless we learn how to circumvent their strategies for immune evasion.",
"title": ""
},
{
"docid": "95d624c86fcd86377e46738689bb18a8",
"text": "EEG desynchronization is a reliable correlate of excited neural structures of activated cortical areas. EEG synchronization within the alpha band may be an electrophysiological correlate of deactivated cortical areas. Such areas are not processing sensory information or motor output and can be considered to be in an idling state. One example of such an idling cortical area is the enhancement of mu rhythms in the primary hand area during visual processing or during foot movement. In both circumstances, the neurons in the hand area are not needed for visual processing or preparation for foot movement. As a result of this, an enhanced hand area mu rhythm can be observed.",
"title": ""
},
{
"docid": "827e9045f932b146a8af66224e114be6",
"text": "Using a common set of attributes to determine which methodology to use in a particular data warehousing project.",
"title": ""
},
{
"docid": "569fed958b7a471e06ce718102687a1e",
"text": "The introduction of convolutional layers greatly advanced the performance of neural networks on image tasks due to innately capturing a way of encoding and learning translation-invariant operations, matching one of the underlying symmetries of the image domain. In comparison, there are a number of problems in which there are a number of different inputs which are all ’of the same type’ — multiple particles, multiple agents, multiple stock prices, etc. The corresponding symmetry to this is permutation symmetry, in that the algorithm should not depend on the specific ordering of the input data. We discuss a permutation-invariant neural network layer in analogy to convolutional layers, and show the ability of this architecture to learn to predict the motion of a variable number of interacting hard discs in 2D. In the same way that convolutional layers can generalize to different image sizes, the permutation layer we describe generalizes to different numbers of objects.",
"title": ""
},
{
"docid": "81b5379abf3849e1ae4e233fd4955062",
"text": "Three-phase dc/dc converters have the superior characteristics including lower current rating of switches, the reduced output filter requirement, and effective utilization of transformers. To further reduce the voltage stress on switches, three-phase three-level (TPTL) dc/dc converters have been investigated recently; however, numerous active power switches result in a complicated configuration in the available topologies. Therefore, a novel TPTL dc/dc converter adopting a symmetrical duty cycle control is proposed in this paper. Compared with the available TPTL converters, the proposed converter has fewer switches and simpler configuration. The voltage stress on all switches can be reduced to the half of the input voltage. Meanwhile, the ripple frequency of output current can be increased significantly, resulting in a reduced filter requirement. Experimental results from a 540-660-V input and 48-V/20-A output are presented to verify the theoretical analysis and the performance of the proposed converter.",
"title": ""
},
{
"docid": "9c16f3ccaab4e668578e3eda7d452ebd",
"text": "Speech is a common and effective way of communication between humans, and modern consumer devices such as smartphones and home hubs are equipped with deep learning based accurate automatic speech recognition to enable natural interaction between humans and machines. Recently, researchers have demonstrated powerful attacks against machine learning models that can fool them to produce incorrect results. However, nearly all previous research in adversarial attacks has focused on image recognition and object detection models. In this short paper, we present a first of its kind demonstration of adversarial attacks against speech classification model. Our algorithm performs targeted attacks with 87% success by adding small background noise without having to know the underlying model parameter and architecture. Our attack only changes the least significant bits of a subset of audio clip samples, and the noise does not change 89% the human listener’s perception of the audio clip as evaluated in our human study.",
"title": ""
}
] | scidocsrr |
8b23a893d4cb1ebc5060bafc3c45d1bd | How to Make a Digital Currency on a Blockchain Stable | [
{
"docid": "11e19b59fa2df88f3468b4e71aab8cf4",
"text": "Blockchain is a distributed timestamp server technology introduced for realization of Bitcoin, a digital cash system. It has been attracting much attention especially in the areas of financial and legal applications. But such applications would fail if they are designed without knowledge of the fundamental differences in blockchain from existing technology. We show that blockchain is a probabilistic state machine in which participants can never commit on decisions, we also show that this probabilistic nature is necessarily deduced from the condition where the number of participants remains unknown. This work provides useful abstractions to think about blockchain, and raises discussion for promoting the better use of the technology.",
"title": ""
}
] | [
{
"docid": "9a4a519023175802578dad5864b3dd01",
"text": "The problem of efficiently finding the best match for a query in a given set with respect to the Euclidean distance or the cosine similarity has been extensively studied. However, the closely related problem of efficiently finding the best match with respect to the inner-product has never been explored in the general setting to the best of our knowledge. In this paper we consider this problem and contrast it with the previous problems considered. First, we propose a general branch-and-bound algorithm based on a (single) tree data structure. Subsequently, we present a dual-tree algorithm for the case where there are multiple queries. Our proposed branch-and-bound algorithms are based on novel inner-product bounds. Finally we present a new data structure, the cone tree, for increasing the efficiency of the dual-tree algorithm. We evaluate our proposed algorithms on a variety of data sets from various applications, and exhibit up to five orders of magnitude improvement in query time over the naive search technique in some cases.",
"title": ""
},
{
"docid": "6cf97825d649a4f7518be9b72ea8f19f",
"text": "This paper proposes a distributed discrete-time algorithm to solve an additive cost optimization problem over undirected deterministic or time-varying graphs. Different from most previous methods that require to exchange exact states between nodes, each node in our algorithm needs only the sign of the relative state between its neighbors, which is clearly one bit of information. Our analysis is based on optimization theory rather than Lyapunov theory or algebraic graph theory. The latter is commonly used in existing literature, especially in the continuous-time algorithm design, and is difficult to apply in our case. Besides, an optimization-theory-based analysis may make our results more extendible. In particular, our convergence proofs are based on the convergences of the subgradient method and the stochastic subgradient method. Moreover, the convergence rate of our algorithm can vary from $O(1/\\ln(k))$ to $O(1/\\sqrt{k})$, depending on the choice of the stepsize. A quantile regression problem is included to illustrate the performance of our algorithm using simulations.",
"title": ""
},
{
"docid": "4b494016220eb5442642e34c3ed2d720",
"text": "BACKGROUND\nTreatments for alopecia are in high demand, but not all are safe and reliable. Dalteparin and protamine microparticles (D/P MPs) can effectively carry growth factors (GFs) in platelet-rich plasma (PRP).\n\n\nOBJECTIVE\nTo identify the effects of PRP-containing D/P MPs (PRP&D/P MPs) on hair growth.\n\n\nMETHODS & MATERIALS\nParticipants were 26 volunteers with thin hair who received five local treatments of 3 mL of PRP&D/P MPs (13 participants) or PRP and saline (control, 13 participants) at 2- to 3-week intervals and were evaluated for 12 weeks. Injected areas comprised frontal or parietal sites with lanugo-like hair. Experimental and control areas were photographed. Consenting participants underwent biopsies for histologic examination.\n\n\nRESULTS\nD/P MPs bind to various GFs contained in PRP. Significant differences were seen in hair cross-section but not in hair numbers in PRP and PRP&D/P MP injections. The addition of D/P MPs to PRP resulted in significant stimulation in hair cross-section. Microscopic findings showed thickened epithelium, proliferation of collagen fibers and fibroblasts, and increased vessels around follicles.\n\n\nCONCLUSION\nPRP&D/P MPs and PRP facilitated hair growth but D/P MPs provided additional hair growth. The authors have indicated no significant interest with commercial supporters.",
"title": ""
},
{
"docid": "97dfc67c63e7e162dd06d5cb2959912a",
"text": "To examine the pattern of injuries in cases of fatal shark attack in South Australian waters, the authors examined the files of their institution for all cases of shark attack in which full autopsies had been performed over the past 25 years, from 1974 to 1998. Of the seven deaths attributed to shark attack during this period, full autopsies were performed in only two cases. In the remaining five cases, bodies either had not been found or were incomplete. Case 1 was a 27-year-old male surfer who had been attacked by a shark. At autopsy, the main areas of injury involved the right thigh, which displayed characteristic teeth marks, extensive soft tissue damage, and incision of the femoral artery. There were also incised wounds of the right wrist. Bony injury was minimal, and no shark teeth were recovered. Case 2 was a 26-year-old male diver who had been attacked by a shark. At autopsy, the main areas of injury involved the left thigh and lower leg, which displayed characteristic teeth marks, extensive soft tissue damage, and incised wounds of the femoral artery and vein. There was also soft tissue trauma to the left wrist, with transection of the radial artery and vein. Bony injury was minimal, and no shark teeth were recovered. In both cases, death resulted from exsanguination following a similar pattern of soft tissue and vascular damage to a leg and arm. This type of injury is in keeping with predator attack from underneath or behind, with the most severe injuries involving one leg. Less severe injuries to the arms may have occurred during the ensuing struggle. Reconstruction of the damaged limb in case 2 by sewing together skin, soft tissue, and muscle bundles not only revealed that no soft tissue was missing but also gave a clearer picture of the pattern of teeth marks, direction of the attack, and species of predator.",
"title": ""
},
{
"docid": "3cd383e547b01040261dc1290d87b02e",
"text": "Abnormal condition in a power system generally leads to a fall in system frequency, and it leads to system blackout in an extreme condition. This paper presents a technique to develop an auto load shedding and islanding scheme for a power system to prevent blackout and to stabilize the system under any abnormal condition. The technique proposes the sequence and conditions of the applications of different load shedding schemes and islanding strategies. It is developed based on the international current practices. It is applied to the Bangladesh Power System (BPS), and an auto load-shedding and islanding scheme is developed. The effectiveness of the developed scheme is investigated simulating different abnormal conditions in BPS.",
"title": ""
},
{
"docid": "62c6050db8e42b1de54f8d1d54fd861f",
"text": "In this paper we present our approach of solving the PAN 2016 Author Profiling Task. It involves classifying users’ gender and age using social media posts. We used SVM classifiers and neural networks on TF-IDF and verbosity features. Results showed that SVM classifiers are better for English datasets and neural networks perform better for Dutch and Spanish datasets.",
"title": ""
},
{
"docid": "d477e2a2678de720c57895bf1d047c4b",
"text": "Interpreting predictions from tree ensemble methods such as gradient boosting machines and random forests is important, yet feature attribution for trees is often heuristic and not individualized for each prediction. Here we show that popular feature attribution methods are inconsistent, meaning they can lower a feature’s assigned importance when the true impact of that feature actually increases. This is a fundamental problem that casts doubt on any comparison between features. To address it we turn to recent applications of game theory and develop fast exact tree solutions for SHAP (SHapley Additive exPlanation) values, which are the unique consistent and locally accurate attribution values. We then extend SHAP values to interaction effects and define SHAP interaction values. We propose a rich visualization of individualized feature attributions that improves over classic attribution summaries and partial dependence plots, and a unique “supervised” clustering (clustering based on feature attributions). We demonstrate better agreement with human intuition through a user study, exponential improvements in run time, improved clustering performance, and better identification of influential features. An implementation of our algorithm has also been merged into XGBoost and LightGBM, see http://github.com/slundberg/shap for details. ACM Reference Format: Scott M. Lundberg, Gabriel G. Erion, and Su-In Lee. 2018. Consistent Individualized Feature Attribution for Tree Ensembles. In Proceedings of ACM (KDD’18). ACM, New York, NY, USA, 9 pages. https://doi.org/none",
"title": ""
},
{
"docid": "d29eba4f796cb642d64e73b76767e59d",
"text": "In this paper, a novel segmentation and recognition approach to automatically extract street lighting poles from mobile LiDAR data is proposed. First, points on or around the ground are extracted and removed through a piecewise elevation histogram segmentation method. Then, a new graph-cut-based segmentation method is introduced to extract the street lighting poles from each cluster obtained through a Euclidean distance clustering algorithm. In addition to the spatial information, the street lighting pole's shape and the point's intensity information are also considered to formulate the energy function. Finally, a Gaussian-mixture-model-based method is introduced to recognize the street lighting poles from the candidate clusters. The proposed approach is tested on several point clouds collected by different mobile LiDAR systems. Experimental results show that the proposed method is robust to noises and achieves an overall performance of 90% in terms of true positive rate.",
"title": ""
},
{
"docid": "3f5c761e5c5dbfd5aa1d1d9af736e5fd",
"text": "In this paper, a double L-slot microstrip patch antenna array using Coplanar waveguide feed for Wireless Local Area Network (WLAN) and Worldwide Interoperability for Microwave Access (WiMAX) frequency bands are presented. The proposed antenna is fabricated on Aluminum Nitride Ceramic substrate with dielectric constant 8.8 and thickness of 1.5mm. The key feature of this substrate is that it can withstand in high temperature. The return loss is about -31dB at the operating frequency of 3.6GHz with 50Ω input impedance. The basic parameters of the proposed antenna such as return loss, VSWR, and radiation pattern are simulated using Ansoft HFSS. Simulation results of antenna parameters of single patch and double patch antenna array are analyzed and presented.",
"title": ""
},
{
"docid": "0bd720d912575c0810c65d04f6b1712b",
"text": "Digital painters commonly use a tablet and stylus to drive software like Adobe Photoshop. A high quality stylus with 6 degrees of freedom (DOFs: 2D position, pressure, 2D tilt, and 1D rotation) coupled to a virtual brush simulation engine allows skilled users to produce expressive strokes in their own style. However, such devices are difficult for novices to control, and many people draw with less expensive (lower DOF) input devices. This paper presents a data-driven approach for synthesizing the 6D hand gesture data for users of low-quality input devices. Offline, we collect a library of strokes with 6D data created by trained artists. Online, given a query stroke as a series of 2D positions, we synthesize the 4D hand pose data at each sample based on samples from the library that locally match the query. This framework optionally can also modify the stroke trajectory to match characteristic shapes in the style of the library. Our algorithm outputs a 6D trajectory that can be fed into any virtual brush stroke engine to make expressive strokes for novices or users of limited hardware.",
"title": ""
},
{
"docid": "b2032f8912fac19b18bc5a836c3536e9",
"text": "Electroencephalographic measurements are commonly used in medical and research areas. This review article presents an introduction into EEG measurement. Its purpose is to help with orientation in EEG field and with building basic knowledge for performing EEG recordings. The article is divided into two parts. In the first part, background of the subject, a brief historical overview, and some EEG related research areas are given. The second part explains EEG recording.",
"title": ""
},
{
"docid": "5e64e36e76f4c0577ae3608b6e715a1f",
"text": "Deep learning has recently become very popular on account of its incredible success in many complex datadriven applications, including image classification and speech recognition. The database community has worked on data-driven applications for many years, and therefore should be playing a lead role in supporting this new wave. However, databases and deep learning are different in terms of both techniques and applications. In this paper, we discuss research problems at the intersection of the two fields. In particular, we discuss possible improvements for deep learning systems from a database perspective, and analyze database applications that may benefit from deep learning techniques.",
"title": ""
},
{
"docid": "8a50b086b61e19481cc3dee78a785f09",
"text": "A new approach to the online classification of streaming data is introduced in this paper. It is based on a self-developing (evolving) fuzzy-rule-based (FRB) classifier system of Takagi-Sugeno ( eTS) type. The proposed approach, called eClass (evolving class ifier), includes different architectures and online learning methods. The family of alternative architectures includes: 1) eClass0, with the classifier consequents representing class label and 2) the newly proposed method for regression over the features using a first-order eTS fuzzy classifier, eClass1. An important property of eClass is that it can start learning ldquofrom scratch.rdquo Not only do the fuzzy rules not need to be prespecified, but neither do the number of classes for eClass (the number may grow, with new class labels being added by the online learning process). In the event that an initial FRB exists, eClass can evolve/develop it further based on the newly arrived data. The proposed approach addresses the practical problems of the classification of streaming data (video, speech, sensory data generated from robotic, advanced industrial applications, financial and retail chain transactions, intruder detection, etc.). It has been successfully tested on a number of benchmark problems as well as on data from an intrusion detection data stream to produce a comparison with the established approaches. The results demonstrate that a flexible (with evolving structure) FRB classifier can be generated online from streaming data achieving high classification rates and using limited computational resources.",
"title": ""
},
{
"docid": "7ba0a2631c104e80c43aba739567b248",
"text": "We consider a stochastic bandit problem with infinitely many arms. In this setting, the learner has no chance of trying all the arms even once and has to dedicate its limited number of samples only to a certain number of arms. All previous algorithms for this setting were designed for minimizing the cumulative regret of the learner. In this paper, we propose an algorithm aiming at minimizing the simple regret. As in the cumulative regret setting of infinitely many armed bandits, the rate of the simple regret will depend on a parameter β characterizing the distribution of the near-optimal arms. We prove that depending on β, our algorithm is minimax optimal either up to a multiplicative constant or up to a log(n) factor. We also provide extensions to several important cases: when β is unknown, in a natural setting where the near-optimal arms have a small variance, and in the case of unknown time horizon.",
"title": ""
},
{
"docid": "8f876345827e55e8ff241afa99c6bb70",
"text": "Reef-building corals occur as a range of colour morphs because of varying types and concentrations of pigments within the host tissues, but little is known about their physiological or ecological significance. Here, we examined whether specific host pigments act as an alternative mechanism for photoacclimation in the coral holobiont. We used the coral Montipora monasteriata (Forskål 1775) as a case study because it occurs in multiple colour morphs (tan, blue, brown, green and red) within varying light-habitat distributions. We demonstrated that two of the non-fluorescent host pigments are responsive to changes in external irradiance, with some host pigments up-regulating in response to elevated irradiance. This appeared to facilitate the retention of antennal chlorophyll by endosymbionts and hence, photosynthetic capacity. Specifically, net P(max) Chl a(-1) correlated strongly with the concentration of an orange-absorbing non-fluorescent pigment (CP-580). This had major implications for the energetics of bleached blue-pigmented (CP-580) colonies that maintained net P(max) cm(-2) by increasing P(max) Chl a(-1). The data suggested that blue morphs can bleach, decreasing their symbiont populations by an order of magnitude without compromising symbiont or coral health.",
"title": ""
},
{
"docid": "d01198e88f91a47a1777337d0db41939",
"text": "Ultra low quiescent, wide output current range low-dropout regulators (LDO) are in high demand in portable applications to extend battery lives. This paper presents a 500 nA quiescent, 0 to 100 mA load, 3.5–7 V input to 3 V output LDO in a digital 0.35 μm 2P3M CMOS technology. The challenges in designing with nano-ampere of quiescent current are discussed, namely the leakage, the parasitics, and the excessive DC gain. CMOS super source follower voltage buffer and input excessive gain reduction are then proposed. The LDO is internally compensated using Ahuja method with a minimum phase margin of 55° across all load conditions. The maximum transient voltage variation is less than 150 and 75 mV when used with 1 and 10 μF external capacitor. Compared with existing work, this LDO achieves the best transient flgure-of-merit with close to best dynamic current efficiency (maximum-to-quiescent current ratio).",
"title": ""
},
{
"docid": "6fd8226482617b0997640b8783ad2445",
"text": "OBJECTIVES\nThis article presents a new tool that helps systematic reviewers to extract and compare implementation data across primary trials. Currently, systematic review guidance does not provide guidelines for the identification and extraction of data related to the implementation of the underlying interventions.\n\n\nSTUDY DESIGN AND SETTING\nA team of systematic reviewers used a multistaged consensus development approach to develop this tool. First, a systematic literature search on the implementation and synthesis of clinical trial evidence was performed. The team then met in a series of subcommittees to develop an initial draft index. Drafts were presented at several research conferences and circulated to methodological experts in various health-related disciplines for feedback. The team systematically recorded, discussed, and incorporated all feedback into further revisions. A penultimate draft was discussed at the 2010 Cochrane-Campbell Collaboration Colloquium to finalize its content.\n\n\nRESULTS\nThe Oxford Implementation Index provides a checklist of implementation data to extract from primary trials. Checklist items are organized into four domains: intervention design, actual delivery by trial practitioners, uptake of the intervention by participants, and contextual factors. Systematic reviewers piloting the index at the Cochrane-Campbell Colloquium reported that the index was helpful for the identification of implementation data.\n\n\nCONCLUSION\nThe Oxford Implementation Index provides a framework to help reviewers assess implementation data across trials. Reviewers can use this tool to identify implementation data, extract relevant information, and compare features of implementation across primary trials in a systematic review. The index is a work-in-progress, and future efforts will focus on refining the index, improving usability, and integrating the index with other guidance on systematic reviewing.",
"title": ""
},
{
"docid": "318938c2dd173a511d03380826d31bd9",
"text": "The theory and construction of the HP-1430A feed-through sampling head are reviewed, and a model for the sampling head is developed from dimensional and electrical measurements in conjunction with electromagnetic, electronic, and network theory. The model was used to predict the sampling-head step response needed for the deconvolution of true input waveforms. The dependence of the sampling-head step response on the sampling diode bias is investigated. Calculations based on the model predict step response transition durations of 27.5 to 30.5 ps for diode reverse bias values of -1.76 to -1.63 V.",
"title": ""
},
{
"docid": "2276f5bd8866d54128bd1782a748eb43",
"text": "8.5 Printing 304 8.5.1 Overview 304 8.5.2 Inks and subtractive color calculations 304 8.5.2.1 Density 305 8.5.3 Continuous tone printing 306 8.5.4 Halftoning 307 8.5.4.1 Traditional halftoning 307 8.5.5 Digital halftoning 308 8.5.5.1 Cluster dot dither 310 8.5.5.2 Bayer dither and void and cluster dither 310 8.5.5.3 Error diffusion 311 8.5.5.4 Color digital halftoning 312 8.5.6 Print characterization 313 8.5.6.1 Transduction: the tone reproduction curve 313 8.6",
"title": ""
},
{
"docid": "93151277f8325a15c569d77dc973c1a8",
"text": "A class of binary quasi-cyclic burst error-correcting codes based upon product codes is studied. An expression for the maximum burst error-correcting capability for each code in the class is given. In certain cases the codes reduce to Gilbert codes, which are cyclic. Often codes exist in the class which have the same block length and number of check bits as the Gilbert codes but correct longer bursts of errors than Gilbert codes. By shortening the codes, it is possible to design codes which achieve the Reiger bound.",
"title": ""
}
] | scidocsrr |
10debb17e51145a4ff0adf56e6609281 | A new sentence similarity measure and sentence based extractive technique for automatic text summarization | [
{
"docid": "639bbe7b640c514ab405601c7c3cfa01",
"text": "Measuring the semantic similarity between words is an important component in various tasks on the web such as relation extraction, community mining, document clustering, and automatic metadata extraction. Despite the usefulness of semantic similarity measures in these applications, accurately measuring semantic similarity between two words (or entities) remains a challenging task. We propose an empirical method to estimate semantic similarity using page counts and text snippets retrieved from a web search engine for two words. Specifically, we define various word co-occurrence measures using page counts and integrate those with lexical patterns extracted from text snippets. To identify the numerous semantic relations that exist between two given words, we propose a novel pattern extraction algorithm and a pattern clustering algorithm. The optimal combination of page counts-based co-occurrence measures and lexical pattern clusters is learned using support vector machines. The proposed method outperforms various baselines and previously proposed web-based semantic similarity measures on three benchmark data sets showing a high correlation with human ratings. Moreover, the proposed method significantly improves the accuracy in a community mining task.",
"title": ""
},
{
"docid": "91c024a832bfc07bc00b7086bcf77add",
"text": "Topic-focused multi-document summarization aims to produce a summary biased to a given topic or user profile. This paper presents a novel extractive approach based on manifold-ranking of sentences to this summarization task. The manifold-ranking process can naturally make full use of both the relationships among all the sentences in the documents and the relationships between the given topic and the sentences. The ranking score is obtained for each sentence in the manifold-ranking process to denote the biased information richness of the sentence. Then the greedy algorithm is employed to impose diversity penalty on each sentence. The summary is produced by choosing the sentences with both high biased information richness and high information novelty. Experiments on DUC2003 and DUC2005 are performed and the ROUGE evaluation results show that the proposed approach can significantly outperform existing approaches of the top performing systems in DUC tasks and baseline approaches.",
"title": ""
}
] | [
{
"docid": "c4183c8b08da8d502d84a650d804cac8",
"text": "A three-phase current source gate turn-off (GTO) thyristor rectifier is described with a high power factor, low line current distortion, and a simple main circuit. It adopts pulse-width modulation (PWM) control techniques obtained by analyzing the PWM patterns of three-phase current source rectifiers/inverters, and it uses a method of generating such patterns. In addition, by using an optimum set-up of the circuit constants, the GTO switching frequency is reduced to 500 Hz. This rectifier is suitable for large power conversion, because it can reduce GTO switching loss and its snubber loss.<<ETX>>",
"title": ""
},
{
"docid": "b9aaab241bab9c11ac38d6e9188b7680",
"text": "Find loads of the research methods in the social sciences book catalogues in this site as the choice of you visiting this page. You can also join to the website book library that will show you numerous books from any types. Literature, science, politics, and many more catalogues are presented to offer you the best book to find. The book that really makes you feels satisfied. Or that's the book that will save you from your job deadline.",
"title": ""
},
{
"docid": "ea4a1405e1c6444726d1854c7c56a30d",
"text": "This paper presents a novel integrated approach for efficient optimization based online trajectory planning of topologically distinctive mobile robot trajectories. Online trajectory optimization deforms an initial coarse path generated by a global planner by minimizing objectives such as path length, transition time or control effort. Kinodynamic motion properties of mobile robots and clearance from obstacles impose additional equality and inequality constraints on the trajectory optimization. Local planners account for efficiency by restricting the search space to locally optimal solutions only. However, the objective function is usually non-convex as the presence of obstacles generates multiple distinctive local optima. The proposed method maintains and simultaneously optimizes a subset of admissible candidate trajectories of distinctive topologies and thus seeking the overall best candidate among the set of alternative local solutions. Time-optimal trajectories for differential-drive and carlike robots are obtained efficiently by adopting the Timed-Elastic-Band approach for the underlying trajectory optimization problem. The investigation of various example scenarios and a comparative analysis with conventional local planners confirm the advantages of integrated exploration, maintenance and optimization of topologically distinctive trajectories. ∗Corresponding author Email address: [email protected] (Christoph Rösmann) Preprint submitted to Robotics and Autonomous Systems November 12, 2016",
"title": ""
},
{
"docid": "e87a799822f1012f032cb66cd2925604",
"text": "Curcumin, the yellow color pigment of turmeric, is produced industrially from turmeric oleoresin. The mother liquor after isolation of curcumin from oleoresin contains approximately 40% oil. The oil was extracted from the mother liquor using hexane at 60 degrees C, and the hexane extract was separated into three fractions using silica gel column chromatography. These fractions were tested for antibacterial activity by pour plate method against Bacillus cereus, Bacillus coagulans, Bacillus subtilis, Staphylococcus aureus, Escherichia coli, and Pseudomonas aeruginosa. Fraction II eluted with 5% ethyl acetate in hexane was found to be most active fraction. The turmeric oil, fraction I, and fraction II were analyzed by GC and GC-MS. ar-Turmerone, turmerone, and curlone were found to be the major compounds present in these fractions along with other oxygenated compounds.",
"title": ""
},
{
"docid": "4654a1926d0caa787ade6aaf58e00474",
"text": "GitHub is the most widely used social, distributed version control system. It has around 10 million registered users and hosts over 16 million public repositories. Its user base is also very active as GitHub ranks in the top 100 Alexa most popular websites. In this study, we collect GitHub’s state in its entirety. Doing so, allows us to study new aspects of the ecosystem. Although GitHub is the home to millions of users and repositories, the analysis of users’ activity time-series reveals that only around 10% of them can be considered active. The collected dataset allows us to investigate the popularity of programming languages and existence of pattens in the relations between users, repositories, and programming languages. By, applying a k-means clustering method to the usersrepositories commits matrix, we find that two clear clusters of programming languages separate from the remaining. One cluster forms for “web programming” languages (Java Script, Ruby, PHP, CSS), and a second for “system oriented programming” languages (C, C++, Python). Further classification, allow us to build a phylogenetic tree of the use of programming languages in GitHub. Additionally, we study the main and the auxiliary programming languages of the top 1000 repositories in more detail. We provide a ranking of these auxiliary programming languages using various metrics, such as percentage of lines of code, and PageRank.",
"title": ""
},
{
"docid": "41481b2f081831d28ead1b685465d535",
"text": "Triticum aestivum (Wheat grass juice) has high concentrations of chlorophyll, amino acids, minerals, vitamins, and enzymes. Fresh juice has been shown to possess anti-cancer activity, anti-ulcer activity, anti-inflammatory, antioxidant activity, anti-arthritic activity, and blood building activity in Thalassemia. It has been argued that wheat grass helps blood flow, digestion, and general detoxification of the body due to the presence of biologically active compounds and minerals in it and due to its antioxidant potential which is derived from its high content of bioflavonoids such as apigenin, quercitin, luteoline. Furthermore, indole compounds, amely choline, which known for antioxidants and also possess chelating property for iron overload disorders. The presence of 70% chlorophyll, which is almost chemically identical to haemoglobin. The only difference is that the central element in chlorophyll is magnesium and in hemoglobin it is iron. In wheat grass makes it more useful in various clinical conditions involving hemoglobin deficiency and other chronic disorders ultimately considered as green blood.",
"title": ""
},
{
"docid": "071ba3d1cec138011f398cae8589b77b",
"text": "The term ‘vulnerability’ is used in many different ways by various scholarly communities. The resulting disagreement about the appropriate definition of vulnerability is a frequent cause for misunderstanding in interdisciplinary research on climate change and a challenge for attempts to develop formal models of vulnerability. Earlier attempts at reconciling the various conceptualizations of vulnerability were, at best, partly successful. This paper presents a generally applicable conceptual framework of vulnerability that combines a nomenclature of vulnerable situations and a terminology of vulnerability concepts based on the distinction of four fundamental groups of vulnerability factors. This conceptual framework is applied to characterize the vulnerability concepts employed by the main schools of vulnerability research and to review earlier attempts at classifying vulnerability concepts. None of these onedimensional classification schemes reflects the diversity of vulnerability concepts identified in this review. The wide range of policy responses available to address the risks from global climate change suggests that climate impact, vulnerability, and adaptation assessments will continue to apply a variety of vulnerability concepts. The framework presented here provides the much-needed conceptual clarity and facilitates bridging the various approaches to researching vulnerability to climate change. r 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "48889a388562e195eff17488f57ca1e0",
"text": "To clarify the effects of changing shift schedules from a full-day to a half-day before a night shift, 12 single nurses and 18 married nurses with children that engaged in night shift work in a Japanese hospital were investigated. Subjects worked 2 different shift patterns consisting of a night shift after a half-day shift (HF-N) and a night shift after a day shift (D-N). Physical activity levels were recorded with a physical activity volume meter to measure sleep/wake time more precisely without restricting subjects' activities. The duration of sleep before a night shift of married nurses was significantly shorter than that of single nurses for both shift schedules. Changing shift from the D-N to the HF-N increased the duration of sleep before a night shift for both groups, and made wake-up time earlier for single nurses only. Repeated ANCOVA of the series of physical activities showed significant differences with shift (p < 0.01) and marriage (p < 0.01) for variances, and age (p < 0.05) for a covariance. The paired t-test to compare the effects of changing shift patterns in each subject group and ANCOVA for examining the hourly activity differences between single and married nurses showed that the effects of a change in shift schedules seemed to have less effect on married nurses than single nurses. These differences might due to the differences of their family/home responsibilities.",
"title": ""
},
{
"docid": "da8be182ac315342bead9df4b87c6bab",
"text": "A new computational imaging technique, termed Fourier ptychographic microscopy (FPM), uses a sequence of low-resolution images captured under varied illumination to iteratively converge upon a high-resolution complex sample estimate. Here, we propose a mathematical model of FPM that explicitly connects its operation to conventional ptychography, a common procedure applied to electron and X-ray diffractive imaging. Our mathematical framework demonstrates that under ideal illumination conditions, conventional ptychography and FPM both produce datasets that are mathematically linked by a linear transformation. We hope this finding encourages the future cross-pollination of ideas between two otherwise unconnected experimental imaging procedures. In addition, the coherence state of the illumination source used by each imaging platform is critical to successful operation, yet currently not well understood. We apply our mathematical framework to demonstrate that partial coherence uniquely alters both conventional ptychography's and FPM's captured data, but up to a certain threshold can still lead to accurate resolution-enhanced imaging through appropriate computational post-processing. We verify this theoretical finding through simulation and experiment.",
"title": ""
},
{
"docid": "073486fe6bcd756af5f5325b27c57912",
"text": "This paper describes the case of a unilateral agraphic patient (GG) who makes letter substitutions only when writing letters and words with his dominant left hand. Accuracy is significantly greater when he is writing with his right hand and when he is asked to spell words orally. GG also makes case errors when writing letters, and will sometimes write words in mixed case. However, these allograph errors occur regardless of which hand he is using to write. In terms of cognitive models of peripheral dysgraphia (e.g., Ellis, 1988), it appears that he has an allograph level impairment that affects writing with both hands, and a separate problem in accessing graphic motor patterns that disrupts writing with the left hand only. In previous studies of left-handed patients with unilateral agraphia (Zesiger & Mayer, 1992; Zesiger, Pegna, & Rilliet, 1994), it has been suggested that allographic knowledge used for writing with both hands is stored exclusively in the left hemisphere, but that graphic motor patterns are represented separately in each hemisphere. The pattern of performance demonstrated by GG strongly supports such a conclusion.",
"title": ""
},
{
"docid": "a318e8755d2f2ba3c84543ba853c34fc",
"text": "Multi-view learning can provide self-supervision when different views are avail1 able of the same data. Distributional hypothesis provides another form of useful 2 self-supervision from adjacent sentences which are plentiful in large unlabelled 3 corpora. Motivated by the asymmetry in the two hemispheres of the human brain 4 as well as the observation that different learning architectures tend to emphasise 5 different aspects of sentence meaning, we present two multi-view frameworks for 6 learning sentence representations in an unsupervised fashion. One framework uses 7 a generative objective and the other a discriminative one. In both frameworks, 8 the final representation is an ensemble of two views, in which, one view encodes 9 the input sentence with a Recurrent Neural Network (RNN), and the other view 10 encodes it with a simple linear model. We show that, after learning, the vectors 11 produced by our multi-view frameworks provide improved representations over 12 their single-view learned counterparts, and the combination of different views gives 13 representational improvement over each view and demonstrates solid transferability 14 on standard downstream tasks. 15",
"title": ""
},
{
"docid": "eb5c7c9fbe64cbfd4b6c7dd5490c17c1",
"text": "Android packing services provide significant benefits in code protection by hiding original executable code, which help app developers to protect their code against reverse engineering. However, adversaries take the advantage of packers to hide their malicious code. A number of unpacking approaches have been proposed to defend against malicious packed apps. Unfortunately, most of the unpacking approaches work only for a limited time or for a particular type of packers. The analysis for different packers often requires specific domain knowledge and a significant amount of manual effort. In this paper, we conducted analyses of known Android packers appeared in recent years and propose to design an automatic detection and classification framework. The framework is capable of identifying packed apps, extracting the execution behavioral pattern of packers, and categorizing packed apps into groups. The variants of packer families share typical behavioral patterns reflecting their activities and packing techniques. The behavioral patterns obtained dynamically can be exploited to detect and classify unknown packers, which shed light on new directions for security researchers.",
"title": ""
},
{
"docid": "71573bc8f5be1025837d5c72393b4fa6",
"text": "This paper describes our initial work in developing a real-time audio-visual Chinese speech synthesizer with a 3D expressive avatar. The avatar model is parameterized according to the MPEG-4 facial animation standard [1]. This standard offers a compact set of facial animation parameters (FAPs) and feature points (FPs) to enable realization of 20 Chinese visemes and 7 facial expressions (i.e. 27 target facial configurations). The Xface [2] open source toolkit enables us to define the influence zone for each FP and the deformation function that relates them. Hence we can easily animate a large number of coordinates in the 3D model by specifying values for a small set of FAPs and their FPs. FAP values for 27 target facial configurations were estimated from available corpora. We extended the dominance blending approach to effect animations for coarticulated visemes superposed with expression changes. We selected six sentiment-carrying text messages and synthesized expressive visual speech (for all expressions, in randomized order) with neutral audio speech. A perceptual experiment involving 11 subjects shows that they can identify the facial expression that matches the text message’s sentiment 85% of the time.",
"title": ""
},
{
"docid": "e92fd3ce5f90600f2fca84682c35c4e3",
"text": "A software-defined radar is a versatile radar system, where most of the processing, like signal generation, filtering, up-and down conversion etc. is performed by a software. This paper presents a state of the art of software-defined radar technology. It describes the design concept of software-defined radars and the two possible implementations. A global assessment is presented, and the link with the Cognitive Radar is explained.",
"title": ""
},
{
"docid": "ca1c193e5e5af821772a5d123e84b72a",
"text": "Over the last few years, the phenomenon of adversarial examples — maliciously constructed inputs that fool trained machine learning models — has captured the attention of the research community, especially when the adversary is restricted to small modifications of a correctly handled input. Less surprisingly, image classifiers also lack human-level performance on randomly corrupted images, such as images with additive Gaussian noise. In this paper we provide both empirical and theoretical evidence that these are two manifestations of the same underlying phenomenon, establishing close connections between the adversarial robustness and corruption robustness research programs. This suggests that improving adversarial robustness should go hand in hand with improving performance in the presence of more general and realistic image corruptions. Based on our results we recommend that future adversarial defenses consider evaluating the robustness of their methods to distributional shift with benchmarks such as Imagenet-C.",
"title": ""
},
{
"docid": "c6485365e8ce550ea8c507aa963a00c2",
"text": "Consensus molecular subtypes and the evolution of precision medicine in colorectal cancer Rodrigo Dienstmann, Louis Vermeulen, Justin Guinney, Scott Kopetz, Sabine Tejpar and Josep Tabernero Nature Reviews Cancer 17, 79–92 (2017) In this article a source of grant funding for one of the authors was omitted from the Acknowledgements section. The online version of the article has been corrected to include: “The work of R.D. was supported by the Grant for Oncology Innovation under the project ‘Next generation of clinical trials with matched targeted therapies in colorectal cancer’”. C O R R E C T I O N",
"title": ""
},
{
"docid": "1b923168160fcd643692d5473b828ce3",
"text": "Interactive Evolutionary Computation (IEC) creates the intriguing possibility that a large variety of useful content can be produced quickly and easily for practical computer graphics and gaming applications. To show that IEC can produce such content, this paper applies IEC to particle system effects, which are the de facto method in computer graphics for generating fire, smoke, explosions, electricity, water, and many other special effects. While particle systems are capable of producing a broad array of effects, they require substantial mathematical and programming knowledge to produce. Therefore, efficient particle system generation tools are required for content developers to produce special effects in a timely manner. This paper details the design, representation, and animation of particle systems via two IEC tools called NEAT Particles and NEAT Projectiles. Both tools evolve artificial neural networks (ANN) with the NeuroEvolution of Augmenting Topologies (NEAT) method to control the behavior of particles. NEAT Particles evolves general-purpose particle effects, whereas NEAT Projectiles specializes in evolving particle weapon effects for video games. The primary advantage of this NEAT-based IEC approach is to decouple the creation of new effects from mathematics and programming, enabling content developers without programming knowledge to produce complex effects. Furthermore, it allows content designers to produce a broader range of effects than typical development tools. Finally, it acts as a concept generator, allowing content creators to interactively and efficiently explore the space of possible effects. Both NEAT Particles and NEAT Projectiles demonstrate how IEC can evolve useful content for graphical media and games, and are together a step toward the larger goal of automated content generation.",
"title": ""
},
{
"docid": "d2ec8831779e7af4e82a10c617a2e9a1",
"text": "In the new designs of military aircraft and unmanned aircraft there is a clear trend towards increasing demand of electrical power. This fact is mainly due to the replacement of mechanical, pneumatic and hydraulic equipments by partially or completely electrical systems. Generally, use of electrical power onboard is continuously increasing within the areas of communications, surveillance and general systems, such as: radar, cooling, landing gear or actuators systems. To cope with this growing demand for electric power, new levels of voltage (270 VDC), architectures and power electronics devices are being applied to the onboard electrical power distribution systems. The purpose of this paper is to present and describe the technological project HV270DC. In this project, one Electrical Power Distribution System (EPDS), applicable to the more electric aircrafts, has been developed. This system has been integrated by EADS in order to study the benefits and possible problems or risks that affect this kind of power distribution systems, in comparison with conventional distribution systems.",
"title": ""
},
{
"docid": "a05b4878404f9127d576d90d6b241588",
"text": "This paper presents an air-filled substrate integrated waveguide (AFSIW) filter post-process tuning technique. The emerging high-performance AFSIW technology is of high interest for the design of microwave and millimeter-wave substrate integrated systems based on low-cost multilayer printed circuit board (PCB) process. However, to comply with stringent specifications, especially for space, aeronautical and safety applications, a filter post-process tuning technique is desired. AFSIW single pole filter post-process tuning using a capacitive post is theoretically analyzed. It is demonstrated that a tuning of more than 3% of the resonant frequency is achieved at 21 GHz using a 0.3 mm radius post with a 40% insertion ratio. For experimental demonstration, a fourth-order AFSIW band pass filter operating in the 20.88 to 21.11 GHz band is designed and fabricated. Due to fabrication tolerances, it is shown that its performances are not in line with expected results. Using capacitive post tuning, characteristics are improved and agree with optimized results. This post-process tuning can be used for other types of substrate integrated devices.",
"title": ""
},
{
"docid": "bf294a4c3af59162b2f401e2cdcb060b",
"text": "We present MCTest, a freely available set of stories and associated questions intended for research on the machine comprehension of text. Previous work on machine comprehension (e.g., semantic modeling) has made great strides, but primarily focuses either on limited-domain datasets, or on solving a more restricted goal (e.g., open-domain relation extraction). In contrast, MCTest requires machines to answer multiple-choice reading comprehension questions about fictional stories, directly tackling the high-level goal of open-domain machine comprehension. Reading comprehension can test advanced abilities such as causal reasoning and understanding the world, yet, by being multiple-choice, still provide a clear metric. By being fictional, the answer typically can be found only in the story itself. The stories and questions are also carefully limited to those a young child would understand, reducing the world knowledge that is required for the task. We present the scalable crowd-sourcing methods that allow us to cheaply construct a dataset of 500 stories and 2000 questions. By screening workers (with grammar tests) and stories (with grading), we have ensured that the data is the same quality as another set that we manually edited, but at one tenth the editing cost. By being open-domain, yet carefully restricted, we hope MCTest will serve to encourage research and provide a clear metric for advancement on the machine comprehension of text. 1 Reading Comprehension A major goal for NLP is for machines to be able to understand text as well as people. Several research disciplines are focused on this problem: for example, information extraction, relation extraction, semantic role labeling, and recognizing textual entailment. Yet these techniques are necessarily evaluated individually, rather than by how much they advance us towards the end goal. On the other hand, the goal of semantic parsing is the machine comprehension of text (MCT), yet its evaluation requires adherence to a specific knowledge representation, and it is currently unclear what the best representation is, for open-domain text. We believe that it is useful to directly tackle the top-level task of MCT. For this, we need a way to measure progress. One common method for evaluating someone’s understanding of text is by giving them a multiple-choice reading comprehension test. This has the advantage that it is objectively gradable (vs. essays) yet may test a range of abilities such as causal or counterfactual reasoning, inference among relations, or just basic understanding of the world in which the passage is set. Therefore, we propose a multiple-choice reading comprehension task as a way to evaluate progress on MCT. We have built a reading comprehension dataset containing 500 fictional stories, with 4 multiple choice questions per story. It was built using methods which can easily scale to at least 5000 stories, since the stories were created, and the curation was done, using crowd sourcing almost entirely, at a total of $4.00 per story. We plan to periodically update the dataset to ensure that methods are not overfitting to the existing data. The dataset is open-domain, yet restricted to concepts and words that a 7 year old is expected to understand. This task is still beyond the capability of today’s computers and algorithms.",
"title": ""
}
] | scidocsrr |
5e7c2be0d66e726a1d4bd7d249df0187 | Psychopathic Personality: Bridging the Gap Between Scientific Evidence and Public Policy. | [
{
"docid": "32b5458ced294a01654f3747273db08d",
"text": "Prior studies of childhood aggression have demonstrated that, as a group, boys are more aggressive than girls. We hypothesized that this finding reflects a lack of research on forms of aggression that are relevant to young females rather than an actual gender difference in levels of overall aggressiveness. In the present study, a form of aggression hypothesized to be typical of girls, relational aggression, was assessed with a peer nomination instrument for a sample of 491 third-through sixth-grade children. Overt aggression (i.e., physical and verbal aggression as assessed in past research) and social-psychological adjustment were also assessed. Results provide evidence for the validity and distinctiveness of relational aggression. Further, they indicated that, as predicted, girls were significantly more relationally aggressive than were boys. Results also indicated that relationally aggressive children may be at risk for serious adjustment difficulties (e.g., they were significantly more rejected and reported significantly higher levels of loneliness, depression, and isolation relative to their nonrelationally aggressive peers).",
"title": ""
}
] | [
{
"docid": "d364aaa161cc92e28697988012c35c2a",
"text": "Many people believe that information that is stored in long-term memory is permanent, citing examples of \"retrieval techniques\" that are alleged to uncover previously forgotten information. Such techniques include hypnosis, psychoanalytic procedures, methods for eliciting spontaneous and other conscious recoveries, and—perhaps most important—the electrical stimulation of the brain reported by Wilder Penfield and his associates. In this article we first evaluate • the evidence and conclude that, contrary to apparent popular belief, the evidence in no way confirms the view that all memories are permanent and thus potentially recoverable. We then describe some failures that resulted from attempts to elicit retrieval of previously stored information and conjecture what circumstances might cause information stored in memory to be irrevocably destroyed. Few would deny the existence of a phenomenon called \"forgetting,\" which is evident in the common observation that information becomes less available as the interval increases between the time of the information's initial acquisition and the time of its attempted retrieval. Despite the prevalence of the phenomenon, the factors that underlie forgetting have proved to be rather elusive, and the literature abounds with hypothesized mechanisms to account for the observed data. In this article we shall focus our attention on what is perhaps the fundamental issue concerning forgetting; Does forgetting consist of an actual loss of stored information, or does it result from a loss of access to information, which, once stored, remains forever? It should be noted at the outset that this question may be impossible to resolve in an absolute sense. Consider the following thought experiment. A person (call him Geoffrey) observes some event, say a traffic accident. During the period of observation, a movie camera strapped to Geoffrey's head records the event as Geoffrey experiences it. Some time later, Geoffrey attempts to recall and Vol. 35, No. S, 409-420 describe the event with the aid of some retrieval technique (e.g., hypnosis or brain stimulation), which is alleged to allow recovery of any information stored in his brain. While Geoffrey describes the event, a second person (Elizabeth) watches the movie that has been made of the event. Suppose, now, that Elizabeth is unable to decide whether Geoffrey is describing his memory or the movie—in other words, memory and movie are indistinguishable. Such a finding would constitute rather impressive support for the position held by many people that the mind registers an accurate representation of reality and that this information is stored permanently. But suppose, on the other hand, that Geoffrey's report—even with the aid of the miraculous retrieval technique—is incomplete, sketchy, and inaccurate, and furthermore, suppose that the accuracy of his report deteriorates over time. Such a finding, though consistent with the view that forgetting consists of information loss, would still be inconclusive, because it could be argued that the retrieval technique—no matter what it was— was simply not good enough to disgorge the information, which remained buried somewhere in the recesses of Geoffrey's brain. Thus, the question of information loss versus This article was written while E. Loftus was a fellow at the Center for Advanced Study in the Behavioral Sciences, Stanford, California, and G. Loftus was a visiting scholar in the Department of Psychology at Stanford University. 
James Fries generously picked apart an earlier version of this article. Paul Baltes translated the writings of Johann Nicolas Tetens (177?). The following financial sources are gratefully acknowledged: (a) National Science Foundation (NSF) Grant BNS 76-2337 to G. Loftus; (b) 'NSF Grant ENS 7726856 to E. Loftus; and (c) NSF Grant BNS 76-22943 and an Andrew Mellon Foundation grant to the Center for Advanced Study in the Behavioral Sciences. Requests for reprints should be sent to Elizabeth Loftus, Department of Psychology, University of Washington, Seattle, Washington 98195. AMERICAN PSYCHOLOGIST • MAY 1980 * 409 Copyright 1980 by the American Psychological Association, Inc. 0003-066X/80/3505-0409$00.75 retrieval failure may be unanswerable in principle. Nonetheless it often becomes necessary to choose sides. In the scientific arena, for example, a theorist constructing a model of memory may— depending on the details of the model'—be forced to adopt one position or the other. In fact, several leading theorists have suggested that although loss from short-term memory does occur, once material is registered in long-term memory, the information is never lost from the system, although it may normally be inaccessible (Shiffrin & Atkinson, 1969; Tulving, 1974). The idea is not new, however. Two hundred years earlier, the German philosopher Johann Nicolas Tetens (1777) wrote: \"Each idea does not only leave a trace or a consequent of that trace somewhere in the body, but each of them can be stimulated—-even if it is not possible to demonstrate this in a given situation\" (p, 7S1). He was explicit about his belief that certain ideas may seem to be forgotten, but that actually they are only enveloped by other ideas and, in truth, are \"always with us\" (p, 733). Apart from theoretical interest, the position one takes on the permanence of memory traces has important practical consequences. It therefore makes sense to air the issue from time to time, which is what we shall do here, The purpose of this paper is threefold. We shall first report some data bearing on people's beliefs about the question of information loss versus retrieval failure. To anticipate our findings, our survey revealed that a substantial number of the individuals queried take the position that stored information is permanent'—-or in other words, that all forgetting results from retrieval failure. In support of their answers, people typically cited data from some variant of the thought experiment described above, that is, they described currently available retrieval techniques that are alleged to uncover previously forgotten information. Such techniques include hypnosis, psychoanalytic procedures (e.g., free association), and— most important—the electrical stimulation of the brain reported by Wilder Penfield and his associates (Penfield, 1969; Penfield & Perot, 1963; Penfield & Roberts, 1959). The results of our survey lead to the second purpose of this paper, which is to evaluate this evidence. Finally, we shall describe some interesting failures that have resulted from attempts to elicit retrieval of previously stored information. These failures lend support to the contrary view that some memories are apparently modifiable, and that consequently they are probably unrecoverable. Beliefs About Memory In an informal survey, 169 individuals from various parts of the U.S. were asked to give their views about how memory works. Of these, 75 had formal graduate training in psychology, while the remaining 94 did not. 
The nonpsychologists had varied occupations. For example, lawyers, secretaries, taxicab drivers, physicians, philosophers, fire investigators, and even an 11-year-old child participated. They were given this question: Which of these statements best reflects your view on how human memory works? 1. Everything we learn is permanently stored in the mind, although sometimes particular details are not accessible. With hypnosis, or other special techniques, these inaccessible details could eventually be recovered. 2. Some details that we learn may be permanently lost from memory. Such details would never be» able to be recovered by hypnosis, or any other special technique, because these details are simply no longer there. Please elaborate briefly or give any reasons you may have for your view. We found that 84% of the psychologists chose Position 1, that is, they indicated a belief that all information in long-term memory is there, even though much of it cannot be retrieved; 14% chose Position 2, and 2% gave some other answer. A somewhat smaller percentage, 69%, of the nonpsychologists indicated a belief in Position 1; 23% chose Position 2, while 8% did not make a clear choice. What reasons did people give for their belief? The most common reason for choosing Position 1 was based on personal experience and involved the occasional recovery of an idea that the person had not thought about for quite some time. For example, one person wrote: \"I've experienced and heard too many descriptions of spontaneous recoveries of ostensibly quite trivial memories, which seem to have been triggered by just the right set of a person's experiences.\" A second reason for a belief in Position 1, commonly given by persons trained in psychology, was knowledge of the work of Wilder Penfield. One psychologist wrote: \"Even though Statement 1 is untestable, I think that evidence, weak though it is, such as Penfield's work, strongly suggests it may be correct.\" Occasionally respondents offered a comment about 410 • MAY 1980 • AMERICAN PSYCHOLOGIST hypnosis, and more rarely about psychoanalysis and repression, sodium pentothal, or even reincarnation, to support their belief in the permanence of memory. Admittedly, the survey was informally conducted, the respondents were not selected randomly, and the question itself may have pressured people to take sides when their true belief may have been a position in between. Nevertheless, the results suggest a widespread belief in the permanence of memories and give us some idea of the reasons people offer in support of this belief.",
"title": ""
},
{
"docid": "702df543119d648be859233bfa2b5d03",
"text": "We review more than 200 applications of neural networks in image processing and discuss the present and possible future role of neural networks, especially feed-forward neural networks, Kohonen feature maps and Hop1eld neural networks. The various applications are categorised into a novel two-dimensional taxonomy for image processing algorithms. One dimension speci1es the type of task performed by the algorithm: preprocessing, data reduction=feature extraction, segmentation, object recognition, image understanding and optimisation. The other dimension captures the abstraction level of the input data processed by the algorithm: pixel-level, local feature-level, structure-level, object-level,ion level of the input data processed by the algorithm: pixel-level, local feature-level, structure-level, object-level, object-set-level and scene characterisation. Each of the six types of tasks poses speci1c constraints to a neural-based approach. These speci1c conditions are discussed in detail. A synthesis is made of unresolved problems related to the application of pattern recognition techniques in image processing and speci1cally to the application of neural networks. Finally, we present an outlook into the future application of neural networks and relate them to novel developments. ? 2002 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ca807d3bed994a8e7492898e6bfe6dd2",
"text": "This paper proposes state-of-charge (SOC) and remaining charge estimation algorithm of each cell in series-connected lithium-ion batteries. SOC and remaining charge information are indicators for diagnosing cell-to-cell variation; thus, the proposed algorithm can be applied to SOC- or charge-based balancing in cell balancing controller. Compared to voltage-based balancing, SOC and remaining charge information improve the performance of balancing circuit but increase computational complexity which is a stumbling block in implementation. In this work, a simple current sensor-less SOC estimation algorithm with estimated current equalizer is used to achieve aforementioned object. To check the characteristics and validate the feasibility of the proposed method, a constant current discharging/charging profile is applied to a series-connected battery pack (twelve 2.6Ah Li-ion batteries). The experimental results show its applicability to SOC- and remaining charge-based balancing controller with high estimation accuracy.",
"title": ""
},
{
"docid": "1bf43801d05551f376464d08893b211c",
"text": "A Large number of digital text information is generated every day. Effectively searching, managing and exploring the text data has become a main task. In this paper, we first represent an introduction to text mining and a probabilistic topic model Latent Dirichlet allocation. Then two experiments are proposed Wikipedia articles and users’ tweets topic modelling. The former one builds up a document topic model, aiming to a topic perspective solution on searching, exploring and recommending articles. The latter one sets up a user topic model, providing a full research and analysis over Twitter users’ interest. The experiment process including data collecting, data pre-processing and model training is fully documented and commented. Further more, the conclusion and application of this paper could be a useful computation tool for social and business research.",
"title": ""
},
{
"docid": "e85e8b54351247d5f20bf1756a133a08",
"text": "In high speed ADC, comparator influences the overall performance of ADC directly. This paper describes a very high speed and high resolution preamplifier comparator. The comparator use a self biased differential amp to increase the output current sinking and sourcing capability. The threshold and width of the new comparator can be reduced to the millivolt (mV) range, the resolution and the dynamic characteristics are good. Based on UMC 0. 18um CMOS process model, simulated results show the comparator can work under a 25dB gain, 55MHz speed and 210. 10μW power .",
"title": ""
},
{
"docid": "7e38ba11e394acd7d5f62d6a11253075",
"text": "The body-schema concept is revisited in the context of embodied cognition, further developing the theory formulated by Marc Jeannerod that the motor system is part of a simulation network related to action, whose function is not only to shape the motor system for preparing an action (either overt or covert) but also to provide the self with information on the feasibility and the meaning of potential actions. The proposed computational formulation is based on a dynamical system approach, which is linked to an extension of the equilibrium-point hypothesis, called Passive Motor Paradigm: this dynamical system generates goal-oriented, spatio-temporal, sensorimotor patterns, integrating a direct and inverse internal model in a multi-referential framework. The purpose of such computational model is to operate at the same time as a general synergy formation machinery for planning whole-body actions in humanoid robots and/or for predicting coordinated sensory-motor patterns in human movements. In order to illustrate the computational approach, the integration of simultaneous, even partially conflicting tasks will be analyzed in some detail with regard to postural-focal dynamics, which can be defined as the fusion of a focal task, namely reaching a target with the whole-body, and a postural task, namely maintaining overall stability.",
"title": ""
},
{
"docid": "b5cc41f689a1792b544ac66a82152993",
"text": "0020-7225/$ see front matter 2009 Elsevier Ltd doi:10.1016/j.ijengsci.2009.08.001 * Corresponding author. Tel.: +66 2 9869009x220 E-mail address: [email protected] (T. Leephakp Nowadays, Pneumatic Artificial Muscle (PAM) has become one of the most widely-used fluid-power actuators which yields remarkable muscle-like properties such as high force to weight ratio, soft and flexible structure, minimal compressed-air consumption and low cost. To obtain optimum design and usage, it is necessary to understand mechanical behaviors of the PAM. In this study, the proposed models are experimentally derived to describe mechanical behaviors of the PAMs. The experimental results show a non-linear relationship between contraction as well as air pressure within the PAMs and a pulling force of the PAMs. Three different sizes of PAMs available in industry are studied for empirical modeling and simulation. The case studies are presented to verify close agreement on the simulated results to the experimental results when the PAMs perform under various loads. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "174fb8b7cb0f45bed49a50ce5ad19c88",
"text": "De-noising and extraction of the weak signature are crucial to fault prognostics in which case features are often very weak and masked by noise. The wavelet transform has been widely used in signal de-noising due to its extraordinary time-frequency representation capability. In this paper, the performance of wavelet decomposition-based de-noising and wavelet filter-based de-noising methods are compared based on signals from mechanical defects. The comparison result reveals that wavelet filter is more suitable and reliable to detect a weak signature of mechanical impulse-like defect signals, whereas the wavelet decomposition de-noising method can achieve satisfactory results on smooth signal detection. In order to select optimal parameters for the wavelet filter, a two-step optimization process is proposed. Minimal Shannon entropy is used to optimize the Morlet wavelet shape factor. A periodicity detection method based on singular value decomposition (SVD) is used to choose the appropriate scale for the wavelet transform. The signal de-noising results from both simulated signals and experimental data are presented and both support the proposed method. r 2005 Elsevier Ltd. All rights reserved. see front matter r 2005 Elsevier Ltd. All rights reserved. jsv.2005.03.007 ding author. Tel.: +1 414 229 3106; fax: +1 414 229 3107. resses: [email protected] (H. Qiu), [email protected] (J. Lee), [email protected] (J. Lin).",
"title": ""
},
{
"docid": "63f20dd528d54066ed0f189e4c435fe7",
"text": "In many specific laboratories the students use only a PLC simulator software, because the hardware equipment is expensive. This paper presents a solution that allows students to study both the hardware and software parts, in the laboratory works. The hardware part of solution consists in an old plotter, an adapter board, a PLC and a HMI. The software part of this solution is represented by the projects of the students, in which they developed applications for programming the PLC and the HMI. This equipment can be made very easy and can be used in university labs by students, so that they design and test their applications, from low to high complexity [1], [2].",
"title": ""
},
{
"docid": "363a465d626fec38555563722ae92bb1",
"text": "A novel reverse-conducting insulated-gate bipolar transistor (RC-IGBT) featuring an oxide trench placed between the n-collector and the p-collector and a floating p-region (p-float) sandwiched between the n-drift and n-collector is proposed. First, the new structure introduces a high-resistance collector short resistor at low current density, which leads to the suppression of the snapback effect. Second, the collector short resistance can be adjusted by varying the p-float length without increasing the collector cell length. Third, the p-float layer also acts as the base of the n-collector/p-float/n-drift transistor which can be activated and offers a low-resistance current path at high current densities, which contributes to the low on-state voltage of the integrated freewheeling diode and the fast turnoff. As simulations show, the proposed RC-IGBT shows snapback-free output characteristics and faster turnoff compared with the conventional RC-IGBT.",
"title": ""
},
{
"docid": "3dfb419706ae85d232753a085dc145f7",
"text": "This chapter describes the different steps of designing, building, simulating, and testing an intelligent flight control module for an increasingly popular unmanned aerial vehicle (UAV), known as a quadrotor. It presents an in-depth view of the modeling of the kinematics, dynamics, and control of such an interesting UAV. A quadrotor offers a challenging control problem due to its highly unstable nature. An effective control methodology is therefore needed for such a unique airborne vehicle. The chapter starts with a brief overview on the quadrotor's background and its applications, in light of its advantages. Comparisons with other UAVs are made to emphasize the versatile capabilities of this special design. For a better understanding of the vehicle's behavior, the quadrotor's kinematics and dynamics are then detailed. This yields the equations of motion, which are used later as a guideline for developing the proposed intelligent flight control scheme. In this chapter, fuzzy logic is adopted for building the flight controller of the quadrotor. It has been witnessed that fuzzy logic control offers several advantages over certain types of conventional control methods, specifically in dealing with highly nonlinear systems and modeling uncertainties. Two types of fuzzy inference engines are employed in the design of the flight controller, each of which is explained and evaluated. For testing the designed intelligent flight controller, a simulation environment was first developed. The simulations were made as realistic as possible by incorporating environmental disturbances such as wind gust and the ever-present sensor noise. The proposed controller was then tested on a real test-bed built specifically for this project. Both the simulator and the real quadrotor were later used for conducting different attitude stabilization experiments to evaluate the performance of the proposed control strategy. The controller's performance was also benchmarked against conventional control techniques such as input-output linearization, backstepping and sliding mode control strategies. Conclusions were then drawn based on the conducted experiments and their results.",
"title": ""
},
{
"docid": "50906e5d648b7598c307b09975daf2d8",
"text": "Digitization forces industries to adapt to changing market conditions and consumer behavior. Exponential advances in technology, increased consumer power and sharpened competition imply that companies are facing the menace of commoditization. To sustainably succeed in the market, obsolete business models have to be adapted and new business models can be developed. Differentiation and unique selling propositions through innovation as well as holistic stakeholder engagement help companies to master the transformation. To enable companies and start-ups facing the implications of digital change, a tool was created and designed specifically for this demand: the Business Model Builder. This paper investigates the process of transforming the Business Model Builder into a software-supported digitized version. The digital twin allows companies to simulate the iterative adjustment of business models to constantly changing market conditions as well as customer needs on an ongoing basis. The user can modify individual variables, understand interdependencies and see the impact on the result of the business case, i.e. earnings before interest and taxes (EBIT) or economic value added (EVA). The simulation of a business models accordingly provides the opportunity to generate a dynamic view of the business model where any changes of input variables are considered in the result, the business case. Thus, functionality, feasibility and profitability of a business model can be reviewed, tested and validated in the digital simulation tool.",
"title": ""
},
{
"docid": "48eacd86c14439454525e5a570db083d",
"text": "RATIONALE, AIMS AND OBJECTIVES\nTotal quality in coagulation testing is a necessary requisite to achieve clinically reliable results. Evidence was provided that poor standardization in the extra-analytical phases of the testing process has the greatest influence on test results, though little information is available so far on prevalence and type of pre-analytical variability in coagulation testing.\n\n\nMETHODS\nThe present study was designed to describe all pre-analytical problems on inpatients routine and stat samples recorded in our coagulation laboratory over a 2-year period and clustered according to their source (hospital departments).\n\n\nRESULTS\nOverall, pre-analytic problems were identified in 5.5% of the specimens. Although the highest frequency was observed for paediatric departments, in no case was the comparison of the prevalence among the different hospital departments statistically significant. The more frequent problems could be referred to samples not received in the laboratory following a doctor's order (49.3%), haemolysis (19.5%), clotting (14.2%) and inappropriate volume (13.7%). Specimens not received prevailed in the intensive care unit, surgical and clinical departments, whereas clotted and haemolysed specimens were those most frequently recorded from paediatric and emergency departments, respectively. The present investigation demonstrates a high prevalence of pre-analytical problems affecting samples for coagulation testing.\n\n\nCONCLUSIONS\nFull implementation of a total quality system, encompassing a systematic error tracking system, is a valuable tool to achieve meaningful information on the local pre-analytic processes most susceptible to errors, enabling considerations on specific responsibilities and providing the ideal basis for an efficient feedback within the hospital departments.",
"title": ""
},
{
"docid": "3f6cbad208a819fc8fc6a46208197d59",
"text": "The use of visemes as atomic speech units in visual speech analysis and synthesis systems is well-established. Viseme labels are determined using a many-to-one phoneme-to-viseme mapping. However, due to the visual coarticulation effects, an accurate mapping from phonemes to visemes should define a many-to-many mapping scheme. In this research it was found that neither the use of standardized nor speaker-dependent many-to-one viseme labels could satisfy the quality requirements of concatenative visual speech synthesis. Therefore, a novel technique to define a many-to-many phoneme-to-viseme mapping scheme is introduced, which makes use of both treebased and k-means clustering approaches. We show that these many-to-many viseme labels more accurately describe the visual speech information as compared to both phoneme-based and many-toone viseme-based speech labels. In addition, we found that the use of these many-to-many visemes improves the precision of the segment selection phase in concatenative visual speech synthesis using limited speech databases. Furthermore, the resulting synthetic visual speech was both objectively and subjectively found to be of higher quality when the many-to-many visemes are used to describe the speech database as well as the synthesis targets.",
"title": ""
},
{
"docid": "1afdefb31d7b780bb78b59ca8b0d3d8a",
"text": "Convolutional Neural Network (CNN) is a very powerful approach to extract discriminative local descriptors for effective image search. Recent work adopts fine-tuned strategies to further improve the discriminative power of the descriptors. Taking a different approach, in this paper, we propose a novel framework to achieve competitive retrieval performance. Firstly, we propose various masking schemes, namely SIFT-mask, SUM-mask, and MAX-mask, to select a representative subset of local convolutional features and remove a large number of redundant features. We demonstrate that this can effectively address the burstiness issue and improve retrieval accuracy. Secondly, we propose to employ recent embedding and aggregating methods to further enhance feature discriminability. Extensive experiments demonstrate that our proposed framework achieves state-of-the-art retrieval accuracy.",
"title": ""
},
{
"docid": "07348109c7838032850c039f9a463943",
"text": "Ceramics are widely used biomaterials in prosthetic dentistry due to their attractive clinical properties. They are aesthetically pleasing with their color, shade and luster, and they are chemically stable. The main constituents of dental ceramic are Si-based inorganic materials, such as feldspar, quartz, and silica. Traditional feldspar-based ceramics are also referred to as “Porcelain”. The crucial difference between a regular ceramic and a dental ceramic is the proportion of feldspar, quartz, and silica contained in the ceramic. A dental ceramic is a multiphase system, i.e. it contains a dispersed crystalline phase surrounded by a continuous amorphous phase (a glassy phase). Modern dental ceramics contain a higher proportion of the crystalline phase that significantly improves the biomechanical properties of ceramics. Examples of these high crystalline ceramics include lithium disilicate and zirconia.",
"title": ""
},
{
"docid": "affa48f455d5949564302b4c23324458",
"text": "MicroRNAs (miRNAs) have within the past decade emerged as key regulators of metabolic homoeostasis. Major tissues in intermediary metabolism important during development of the metabolic syndrome, such as β-cells, liver, skeletal and heart muscle as well as adipose tissue, have all been shown to be affected by miRNAs. In the pancreatic β-cell, a number of miRNAs are important in maintaining the balance between differentiation and proliferation (miR-200 and miR-29 families) and insulin exocytosis in the differentiated state is controlled by miR-7, miR-375 and miR-335. MiR-33a and MiR-33b play crucial roles in cholesterol and lipid metabolism, whereas miR-103 and miR-107 regulates hepatic insulin sensitivity. In muscle tissue, a defined number of miRNAs (miR-1, miR-133, miR-206) control myofibre type switch and induce myogenic differentiation programmes. Similarly, in adipose tissue, a defined number of miRNAs control white to brown adipocyte conversion or differentiation (miR-365, miR-133, miR-455). The discovery of circulating miRNAs in exosomes emphasizes their importance as both endocrine signalling molecules and potentially disease markers. Their dysregulation in metabolic diseases, such as obesity, type 2 diabetes and atherosclerosis stresses their potential as therapeutic targets. This review emphasizes current ideas and controversies within miRNA research in metabolism.",
"title": ""
},
{
"docid": "2795c78d2e81a064173f49887c9b1bb1",
"text": "This paper reports a continuously tunable lumped bandpass filter implemented in a third-order coupled resonator configuration. The filter is fabricated on a Borosilicate glass substrate using a surface micromachining technology that offers hightunable passive components. Continuous electrostatic tuning is achieved using three tunable capacitor banks, each consisting of one continuously tunable capacitor and three switched capacitors with pull-in voltage of less than 40 V. The center frequency of the filter is tuned from 1 GHz down to 600 MHz while maintaining a 3-dB bandwidth of 13%-14% and insertion loss of less than 4 dB. The maximum group delay is less than 10 ns across the entire tuning range. The temperature stability of the center frequency from -50°C to 50°C is better than 2%. The measured tuning speed of the filter is better than 80 s, and the is better than 20 dBm, which are in good agreement with simulations. The filter occupies a small size of less than 1.5 cm × 1.1 cm. The implemented filter shows the highest performance amongst the fully integrated microelectromechanical systems filters operating at sub-gigahertz range.",
"title": ""
},
{
"docid": "fd7c514e8681a5292bcbf2bbf6e75664",
"text": "In modern days, a large no of automobile accidents are caused due to driver fatigue. To address the problem we propose a vision-based real-time driver fatigue detection system based on eye-tracking, which is an active safety system. Eye tracking is one of the key technologies, for, future driver assistance systems since human eyes contain much information about the driver's condition such as gaze, attention level, and fatigue level. Face and eyes of the driver are first localized and then marked in every frame obtained from the video source. The eyes are tracked in real time using correlation function with an automatically generated online template. Additionally, driver’s distraction and conversations with passengers during driving can lead to serious results. A real-time vision-based model for monitoring driver’s unsafe states, including fatigue state is proposed. A time-based eye glance to mitigate driver distraction is proposed. Keywords— Driver fatigue, Eye-Tracking, Template matching,",
"title": ""
}
] | scidocsrr |
5dfc521aa0b4e8ca3fe63d828d91068d | Parallel Concatenated Trellis Coded Modulation1 | [
{
"docid": "5ef37c0620e087d3552499e2b9b4fc84",
"text": "A parallel concatenated coding scheme consists of two simple constituent systematic encoders linked by an interleaver. The input bits to the first encoder are scrambled by the interleaver before entering the second encoder. The codeword of the parallel concatenated code consists of the input bits to the first encoder followed by the parity check bits of both encoders. This construction can be generalized to any number of constituent codes. Parallel concatenated schemes employing two convolutional codes as constituent codes, in connection with an iterative decoding algorithm of complexity comparable to that of the constituent codes, have been recently shown to yield remarkable coding gains close to theoretical limits. They have been named, and are known as, “turbo codes.” We propose a method to evaluate an upper bound to the bit error probability of a parallel concatenated coding scheme averaged over all interleavers of a given length. The analytical bounding technique is then used to shed some light on some crucial questions which have been floating around in the communications community since the proposal of turbo codes.",
"title": ""
}
] | [
{
"docid": "889c8754c97db758b474a6f140b39911",
"text": "Herbal toothpaste Salvadora with comprehensive effective materials for dental health ranging from antibacterial, detergent and whitening properties including benzyl isothiocyanate, alkaloids, and anions such as thiocyanate, sulfate, and nitrate with potential antibacterial feature against oral microbial flora, silica and chloride for oral disinfection and bleaching the tooth, fluoride to strengthen tooth enamel, and saponin with appropriate detergent, and resin which protects tooth enamel by placing on it and is aggregated in Salvadora has been formulated. The paste is also from other herbs extract including valerian and chamomile. Current toothpaste has antibacterial, anti-plaque, anti-tartar and whitening, and wood extract of the toothbrush strengthens the tooth and enamel, and prevents the cancellation of enamel.From the other side, resin present in toothbrush wood creates a proper covering on tooth enamel and protects it against decay and benzyl isothiocyanate and also alkaloids present in miswak wood gives Salvadora toothpaste considerable antibacterial and bactericidal effects. Anti-inflammatory effects of the toothpaste are for apigenin and alpha bisabolol available in chamomile extract and seskuiterpen components including valeric acid with sedating features give the paste sedating and calming effect to oral tissues.",
"title": ""
},
{
"docid": "0aab0c0fa6a1b0f283478b390dece614",
"text": "Hydrokinetic turbines can provide a source of electricity for remote areas located near a river or stream. The objective of this paper is to describe the design, simulation, build, and testing of a novel hydrokinetic turbine. The main components of the system are a permanent magnet synchronous generator (PMSG), a machined H-Darrieus rotor, an embedded controls system, and a cataraft. The design and construction of this device was conducted at the Oregon Institute of Technology in Wilsonville, Oregon.",
"title": ""
},
{
"docid": "8a564e77710c118e4de86be643b061a6",
"text": "SOAR is a cognitive architecture named from state, operator and result, which is adopted to portray the drivers’ guidance compliance behavior on variable message sign VMS in this paper. VMS represents traffic conditions to drivers by three colors: red, yellow, and green. Based on the multiagent platform, SOAR is introduced to design the agent with the detailed description of the working memory, long-term memory, decision cycle, and learning mechanism. With the fixed decision cycle, agent transforms state through four kinds of operators, including choosing route directly, changing the driving goal, changing the temper of driver, and changing the road condition of prediction. The agent learns from the process of state transformation by chunking and reinforcement learning. Finally, computerized simulation program is used to study the guidance compliance behavior. Experiments are simulated many times under given simulation network and conditions. The result, including the comparison between guidance and no guidance, the state transition times, and average chunking times are analyzed to further study the laws of guidance compliance and learning mechanism.",
"title": ""
},
{
"docid": "f6669d0b53dd0ca789219874d35bf14e",
"text": "Saliva in the mouth is a biofluid produced mainly by three pairs of major salivary glands--the submandibular, parotid and sublingual glands--along with secretions from many minor submucosal salivary glands. Salivary gland secretion is a nerve-mediated reflex and the volume of saliva secreted is dependent on the intensity and type of taste and on chemosensory, masticatory or tactile stimulation. Long periods of low (resting or unstimulated) flow are broken by short periods of high flow, which is stimulated by taste and mastication. The nerve-mediated salivary reflex is modulated by nerve signals from other centers in the central nervous system, which is most obvious as hyposalivation at times of anxiety. An example of other neurohormonal influences on the salivary reflex is the circadian rhythm, which affects salivary flow and ionic composition. Cholinergic parasympathetic and adrenergic sympathetic autonomic nerves evoke salivary secretion, signaling through muscarinic M3 and adrenoceptors on salivary acinar cells and leading to secretion of fluid and salivary proteins. Saliva gland acinar cells are chloride and sodium secreting, and the isotonic fluid produced is rendered hypotonic by salivary gland duct cells as it flows to the mouth. The major proteins present in saliva are secreted by salivary glands, creating viscoelasticity and enabling the coating of oral surfaces with saliva. Salivary films are essential for maintaining oral health and regulating the oral microbiome. Saliva in the mouth contains a range of validated and potential disease biomarkers derived from epithelial cells, neutrophils, the microbiome, gingival crevicular fluid and serum. For example, cortisol levels are used in the assessment of stress, matrix metalloproteinases-8 and -9 appear to be promising markers of caries and periodontal disease, and a panel of mRNA and proteins has been proposed as a marker of oral squamous cell carcinoma. Understanding the mechanisms by which components enter saliva is an important aspect of validating their use as biomarkers of health and disease.",
"title": ""
},
{
"docid": "4030f6e47e7e1519f69ec9335f4f7cf6",
"text": "In this work, we study the problem of scheduling parallelizable jobs online with an objective of minimizing average flow time. Each parallel job is modeled as a DAG where each node is a sequential task and each edge represents dependence between tasks. Previous work has focused on a model of parallelizability known as the arbitrary speed-up curves setting where a scalable algorithm is known. However, the DAG model is more widely used by practitioners, since many jobs generated from parallel programming languages and libraries can be represented in this model. However, little is known for this model in the online setting with multiple jobs. The DAG model and the speed-up curve models are incomparable and algorithmic results from one do not immediately imply results for the other. Previous work has left open the question of whether an online algorithm can be O(1)-competitive with O(1)-speed for average flow time in the DAG setting. In this work, we answer this question positively by giving a scalable algorithm which is (1 + ǫ)-speed O( 1 ǫ )-competitive for any ǫ > 0. We further introduce the first greedy algorithm for scheduling parallelizable jobs — our algorithm is a generalization of the shortest jobs first algorithm. Greedy algorithms are among the most useful in practice due to their simplicity. We show that this algorithm is (2 + ǫ)-speed O( 1 ǫ )competitive for any ǫ > 0. ∗Department of Computer Science and Engineering, Washington University in St. Louis, 1 Brookings Drive, St. Louis, MO 63130. {kunal, li.jing, kefulu, bmoseley}@wustl.edu. B. Moseley and K. Lu work was supported in part by a Google Research Award and a Yahoo Research Award. K. Agrawal and J. Li were supported in part by NSF grants CCF-1150036 and CCF-1340571.",
"title": ""
},
{
"docid": "13748d365584ef2e680affb67cfcc882",
"text": "In this paper, we discuss the development of cost effective, wireless, and wearable vibrotactile haptic device for stiffness perception during an interaction with virtual objects. Our experimental setup consists of haptic device with five vibrotactile actuators, virtual reality environment tailored in Unity 3D integrating the Oculus Rift Head Mounted Display (HMD) and the Leap Motion controller. The virtual environment is able to capture touch inputs from users. Interaction forces are then rendered at 500 Hz and fed back to the wearable setup stimulating fingertips with ERM vibrotactile actuators. Amplitude and frequency of vibrations are modulated proportionally to the interaction force to simulate the stiffness of a virtual object. A quantitative and qualitative study is done to compare the discrimination of stiffness on virtual linear spring in three sensory modalities: visual only feedback, tactile only feedback, and their combination. A common psychophysics method called the Two Alternative Forced Choice (2AFC) approach is used for quantitative analysis using Just Noticeable Difference (JND) and Weber Fractions (WF). According to the psychometric experiment result, average Weber fraction values of 0.39 for visual only feedback was improved to 0.25 by adding the tactile feedback.",
"title": ""
},
{
"docid": "a40fab738589a9efbf3f87b6c7668601",
"text": "AUTOSAR supports the re-use of software and hardware components of automotive electronic systems. Therefore, amongst other things, AUTOSAR defines a software architecture that is used to decouple software components from hardware devices. This paper gives an overview about the different layers of that architecture. In addition, the upper most layer that concerns the application specific part of automotive electronic systems is presented.",
"title": ""
},
{
"docid": "c7a32821699ebafadb4c59e99fb3aa9e",
"text": "According to the trend towards high-resolution CMOS image sensors, pixel sizes are continuously shrinking, towards and below 1.0μm, and sizes are now reaching a technological limit to meet required SNR performance [1-2]. SNR at low-light conditions, which is a key performance metric, is determined by the sensitivity and crosstalk in pixels. To improve sensitivity, pixel technology has migrated from frontside illumination (FSI) to backside illumiation (BSI) as pixel size shrinks down. In BSI technology, it is very difficult to further increase the sensitivity in a pixel of near-1.0μm size because there are no structural obstacles for incident light from micro-lens to photodiode. Therefore the only way to improve low-light SNR is to reduce crosstalk, which makes the non-diagonal elements of the color-correction matrix (CCM) close to zero and thus reduces color noise [3]. The best way to improve crosstalk is to introduce a complete physical isolation between neighboring pixels, e.g., using deep-trench isolation (DTI). So far, a few attempts using DTI have been made to suppress silicon crosstalk. A backside DTI in as small as 1.12μm-pixel, which is formed in the BSI process, is reported in [4], but it is just an intermediate step in the DTI-related technology because it cannot completely prevent silicon crosstalk, especially for long wavelengths of light. On the other hand, front-side DTIs for FSI pixels [5] and BSI pixels [6] are reported. In [5], however, DTI is present not only along the periphery of each pixel, but also invades into the pixel so that it is inefficient in terms of gathering incident light and providing sufficient amount of photodiode area. In [6], the pixel size is as large as 2.0μm and it is hard to scale down with this technology for near 1.0μm pitch because DTI width imposes a critical limit on the sufficient amount of photodiode area for full-well capacity. Thus, a new technological advance is necessary to realize the ideal front DTI in a small size pixel near 1.0μm.",
"title": ""
},
{
"docid": "9841b00b0fe5b9c7112a2e98553b61b0",
"text": "The market of converters connected to transmission lines continues to require insulated gate bipolar transistors (IGBTs) with higher blocking voltages to reduce the number of IGBTs connected in series in high-voltage converters. To cope with these demands, semiconductor manufactures have developed several technologies. Nowadays, IGBTs up to 6.5-kV blocking voltage and IEGTs up to 4.5-kV blocking voltage are on the market. However, these IGBTs and injection-enhanced gate transistors (IEGTs) still have very high switching losses compared to low-voltage devices, leading to a realistic switching frequency of up to 1 kHz. To reduce switching losses in high-power applications, the auxiliary resonant commutated pole inverter (ARCPI) is a possible alternative. In this paper, switching losses and on-state voltages of NPT-IGBT (3.3 kV-1200 A), FS-IGBT (6.5 kV-600 A), SPT-IGBT (2.5 kV-1200 A, 3.3 kV-1200 A and 6.5 kV-600 A) and IEGT (3.3 kV-1200 A) are measured under hard-switching and zero-voltage switching (ZVS) conditions. The aim of this selection is to evaluate the impact of ZVS on various devices of the same voltage ranges. In addition, the difference in ZVS effects among the devices with various blocking voltage levels is evaluated.",
"title": ""
},
{
"docid": "be96da6d7a1e8348366b497f160c674e",
"text": "The large availability of biomedical data brings opportunities and challenges to health care. Representation of medical concepts has been well studied in many applications, such as medical informatics, cohort selection, risk prediction, and health care quality measurement. In this paper, we propose an efficient multichannel convolutional neural network (CNN) model based on multi-granularity embeddings of medical concepts named MG-CNN, to examine the effect of individual patient characteristics including demographic factors and medical comorbidities on total hospital costs and length of stay (LOS) by using the Hospital Quality Monitoring System (HQMS) data. The proposed embedding method leverages prior medical hierarchical ontology and improves the quality of embedding for rare medical concepts. The embedded vectors are further visualized by the t-Distributed Stochastic Neighbor Embedding (t-SNE) technique to demonstrate the effectiveness of grouping related medical concepts. Experimental results demonstrate that our MG-CNN model outperforms traditional regression methods based on the one-hot representation of medical concepts, especially in the outcome prediction tasks for patients with low-frequency medical events. In summary, MG-CNN model is capable of mining potential knowledge from the clinical data and will be broadly applicable in medical research and inform clinical decisions.",
"title": ""
},
{
"docid": "7442f94af36f6d317291da814e7f3676",
"text": "Muscles are required to perform or absorb mechanical work under different conditions. However the ability of a muscle to do this depends on the interaction between its contractile components and its elastic components. In the present study we have used ultrasound to examine the length changes of the gastrocnemius medialis muscle fascicle along with those of the elastic Achilles tendon during locomotion under different incline conditions. Six male participants walked (at 5 km h(-1)) on a treadmill at grades of -10%, 0% and 10% and ran (at 10 km h(-1)) at grades of 0% and 10%, whilst simultaneous ultrasound, electromyography and kinematics were recorded. In both walking and running, force was developed isometrically; however, increases in incline increased the muscle fascicle length at which force was developed. Force was developed at shorter muscle lengths for running when compared to walking. Substantial levels of Achilles tendon strain were recorded in both walking and running conditions, which allowed the muscle fascicles to act at speeds more favourable for power production. In all conditions, positive work was performed by the muscle. The measurements suggest that there is very little change in the function of the muscle fascicles at different slopes or speeds, despite changes in the required external work. This may be a consequence of the role of this biarticular muscle or of the load sharing between the other muscles of the triceps surae.",
"title": ""
},
{
"docid": "33126812301dfc04b475ecbc9c8ae422",
"text": "From fishtail to princess braids, these intricately woven structures define an important and popular class of hairstyle, frequently used for digital characters in computer graphics. In addition to the challenges created by the infinite range of styles, existing modeling and capture techniques are particularly constrained by the geometric and topological complexities. We propose a data-driven method to automatically reconstruct braided hairstyles from input data obtained from a single consumer RGB-D camera. Our approach covers the large variation of repetitive braid structures using a family of compact procedural braid models. From these models, we produce a database of braid patches and use a robust random sampling approach for data fitting. We then recover the input braid structures using a multi-label optimization algorithm and synthesize the intertwining hair strands of the braids. We demonstrate that a minimal capture equipment is sufficient to effectively capture a wide range of complex braids with distinct shapes and structures.",
"title": ""
},
{
"docid": "6cf048863ed227ea7d2188ec6b8ee107",
"text": "Lane keeping is an important feature for self-driving cars. This paper presents an end-to-end learning approach to obtain the proper steering angle to maintain the car in the lane. The convolutional neural network (CNN) model takes raw image frames as input and outputs the steering angles accordingly. The model is trained and evaluated using the comma.ai dataset, which contains the front view image frames and the steering angle data captured when driving on the road. Unlike the traditional approach that manually decomposes the autonomous driving problem into technical components such as lane detection, path planning and steering control, the end-to-end model can directly steer the vehicle from the front view camera data after training. It learns how to keep in lane from human driving data. Further discussion of this end-to-end approach and its limitation are also provided.",
"title": ""
},
{
"docid": "333645d1c405ae51aafe2b236c8fa3fd",
"text": "Proposes a new method of personal recognition based on footprints. In this method, an input pair of raw footprints is normalized, both in direction and in position for robustness image-matching between the input pair of footprints and the pair of registered footprints. In addition to the Euclidean distance between them, the geometric information of the input footprint is used prior to the normalization, i.e., directional and positional information. In the experiment, the pressure distribution of the footprint was measured with a pressure-sensing mat. Ten volunteers contributed footprints for testing the proposed method. The recognition rate was 30.45% without any normalization (i.e., raw image), and 85.00% with the authors' method.",
"title": ""
},
{
"docid": "c117bb1f7a25c44cbd0d75b7376022f6",
"text": "Data noise is present in many machine learning problems domains, some of these are well studied but others have received less attention. In this paper we propose an algorithm for constructing a kernel Fisher discriminant (KFD) from training examples withnoisy labels. The approach allows to associate with each example a probability of the label being flipped. We utilise an expectation maximization (EM) algorithm for updating the probabilities. The E-step uses class conditional probabilities estimated as a by-product of the KFD algorithm. The M-step updates the flip probabilities and determines the parameters of the discriminant. We demonstrate the feasibility of the approach on two real-world data-sets.",
"title": ""
},
{
"docid": "f97086d856ebb2f1c5e4167f725b5890",
"text": "In this paper, an ac-linked hybrid electrical energy system comprising of photo voltaic (PV) and fuel cell (FC) with electrolyzer for standalone applications is proposed. PV is the primary power source of the system, and an FC-electrolyzer combination is used as a backup and as long-term storage system. A Fuzzy Logic controller is developed for the maximum power point tracking for the PV system. A simple power management strategy is designed for the proposed system to manage power flows among the different energy sources. A simulation model for the hybrid energy has been developed using MATLAB/Simulink.",
"title": ""
},
{
"docid": "1bfab561c8391dad6f0493fa7614feba",
"text": "Submission instructions: You should submit your answers via GradeScope and your code via Snap submission site. Submitting answers: Prepare answers to your homework into a single PDF file and submit it via http://gradescope.com. Make sure that answer to each question is on a separate page. This means you should submit a 14-page PDF (1 page for the cover sheet, 4 pages for the answers to question 1, 3 pages for answers to question 2, and 6 pages for question 3). On top of each page write the number of the question you are answering. Please find the cover sheet and the recommended templates located here: Not including the cover sheet in your submission will result in a 2-point penalty. Put all the code for a single question into a single file and upload it. Questions We strongly encourage you to use Snap.py for Python. However, you can use any other graph analysis tool or package you want (SNAP for C++, NetworkX for Python, JUNG for Java, etc.). A question that occupied sociologists and economists as early as the 1900's is how do innovations (e.g. ideas, products, technologies, behaviors) diffuse (spread) within a society. One of the prominent researchers in the field is Professor Mark Granovetter who among other contributions introduced along with Thomas Schelling threshold models in sociology. In Granovetter's model, there is a population of individuals (mob) and for simplicity two behaviours (riot or not riot). • Threshold model: each individual i has a threshold t i that determines her behavior in the following way. If there are at least t i individuals that are rioting, then she will join the riot, otherwise she stays inactive. Here, it is implicitly assumed that each individual has full knowledge of the behavior of all other individuals in the group. Nodes with small threshold are called innovators (early adopters) and nodes with large threshold are called laggards (late adopters). Granovetter's threshold model has been successful in explain classical empirical adoption curves by relating them to thresholds in",
"title": ""
},
{
"docid": "5e8fbfec1ff5bf432dbaadaf13c9ca75",
"text": "Multiple studies have illustrated the potential for dramatic societal, environmental and economic benefits from significant penetration of autonomous driving. However, all the current approaches to autonomous driving require the automotive manufacturers to shoulder the primary responsibility and liability associated with replacing human perception and decision making with automation, potentially slowing the penetration of autonomous vehicles, and consequently slowing the realization of the societal benefits of autonomous vehicles. We propose here a new approach to autonomous driving that will re-balance the responsibility and liabilities associated with autonomous driving between traditional automotive manufacturers, private infrastructure players, and third-party players. Our proposed distributed intelligence architecture leverages the significant advancements in connectivity and edge computing in the recent decades to partition the driving functions between the vehicle, edge computers on the road side, and specialized third-party computers that reside in the vehicle. Infrastructure becomes a critical enabler for autonomy. With this Infrastructure Enabled Autonomy (IEA) concept, the traditional automotive manufacturers will only need to shoulder responsibility and liability comparable to what they already do today, and the infrastructure and third-party players will share the added responsibility and liabilities associated with autonomous functionalities. We propose a Bayesian Network Model based framework for assessing the risk benefits of such a distributed intelligence architecture. An additional benefit of the proposed architecture is that it enables “autonomy as a service” while still allowing for private ownership of automobiles.",
"title": ""
},
{
"docid": "648cc09e715d3a5bdc84a908f96c95d2",
"text": "With the advent of battery-powered portable devices and the mandatory adoptions of power factor correction (PFC), non-inverting buck-boost converter is attracting numerous attentions. Conventional two-switch or four-switch non-inverting buck-boost converters choose their operation modes by measuring input and output voltage magnitudes. This can cause higher output voltage transients when input and output are close to each other. For the mode selection, the comparison of input and output voltage magnitudes is not enough due to the voltage drops raised by the parasitic components. In addition, the difference in the minimum and maximum effective duty cycle between controller output and switching device yields the discontinuity at the instant of mode change. Moreover, the different properties of output voltage versus a given duty cycle of buck and boost operating modes contribute to the output voltage transients. In this paper, the effect of the discontinuity due to the effective duty cycle derived from device switching time at the mode change is analyzed. A technique to compensate the output voltage transient due to this discontinuity is proposed. In order to attain additional mitigation of output transients and linear input/output voltage characteristic in buck and boost modes, the linearization of DC-gain of large signal model in boost operation is analyzed as well. Analytical, simulation, and experimental results are presented to validate the proposed theory.",
"title": ""
},
{
"docid": "a45dbfbea6ff33d920781c07dac0442b",
"text": "Context-aware intelligent systems employ implicit inputs, and make decisions based on complex rules and machine learning models that are rarely clear to users. Such lack of system intelligibility can lead to loss of user trust, satisfaction and acceptance of these systems. However, automatically providing explanations about a system's decision process can help mitigate this problem. In this paper we present results from a controlled study with over 200 participants in which the effectiveness of different types of explanations was examined. Participants were shown examples of a system's operation along with various automatically generated explanations, and then tested on their understanding of the system. We show, for example, that explanations describing why the system behaved a certain way resulted in better understanding and stronger feelings of trust. Explanations describing why the system did not behave a certain way, resulted in lower understanding yet adequate performance. We discuss implications for the use of our findings in real-world context-aware applications.",
"title": ""
}
] | scidocsrr |
3d29e2996f9e625152fa1ec7e456a8e4 | A literature survey on Facial Expression Recognition using Global Features | [
{
"docid": "d537214f407128585d6a4e6bab55a45b",
"text": "It is well known that how to extract dynamical features is a key issue for video based face analysis. In this paper, we present a novel approach of facial action units (AU) and expression recognition based on coded dynamical features. In order to capture the dynamical characteristics of facial events, we design the dynamical haar-like features to represent the temporal variations of facial events. Inspired by the binary pattern coding, we further encode the dynamic haar-like features into binary pattern features, which are useful to construct weak classifiers for boosting learning. Finally the Adaboost is performed to learn a set of discriminating coded dynamic features for facial active units and expression recognition. Experiments on the CMU expression database and our own facial AU database show its encouraging performance.",
"title": ""
}
] | [
{
"docid": "add26519d60ec2a972ad550cd79129d6",
"text": "The hybrid runtime (HRT) model offers a plausible path towards high performance and efficiency. By integrating the OS kernel, parallel runtime, and application, an HRT allows the runtime developer to leverage the full privileged feature set of the hardware and specialize OS services to the runtime's needs. However, conforming to the HRT model currently requires a complete port of the runtime and application to the kernel level, for example to our Nautilus kernel framework, and this requires knowledge of kernel internals. In response, we developed Multiverse, a system that bridges the gap between a built-from-scratch HRT and a legacy runtime system. Multiverse allows existing, unmodified applications and runtimes to be brought into the HRT model without any porting effort whatsoever. Developers simply recompile their package with our compiler toolchain, and Multiverse automatically splits the execution of the application between the domains of a legacy OS and an HRT environment. To the user, the package appears to run as usual on Linux, but the bulk of it now runs as a kernel. The developer can then incrementally extend the runtime and application to take advantage of the HRT model. We describe the design and implementation of Multiverse, and illustrate its capabilities using the Racket runtime system.",
"title": ""
},
{
"docid": "441633276271b94dc1bd3e5e28a1014d",
"text": "While a large number of consumers in the US and Europe frequently shop on the Internet, research on what drives consumers to shop online has typically been fragmented. This paper therefore proposes a framework to increase researchers’ understanding of consumers’ attitudes toward online shopping and their intention to shop on the Internet. The framework uses the constructs of the Technology Acceptance Model (TAM) as a basis, extended by exogenous factors and applies it to the online shopping context. The review shows that attitudes toward online shopping and intention to shop online are not only affected by ease of use, usefulness, and enjoyment, but also by exogenous factors like consumer traits, situational factors, product characteristics, previous online shopping experiences, and trust in online shopping.",
"title": ""
},
{
"docid": "e6a60fab31af5985520cc64b93b5deb0",
"text": "BACKGROUND\nGenital warts may mimic a variety of conditions, thus complicating their diagnosis and treatment. The recognition of early flat lesions presents a diagnostic challenge.\n\n\nOBJECTIVE\nWe sought to describe the dermatoscopic features of genital warts, unveiling the possibility of their diagnosis by dermatoscopy.\n\n\nMETHODS\nDermatoscopic patterns of 61 genital warts from 48 consecutively enrolled male patients were identified with their frequencies being used as main outcome measures.\n\n\nRESULTS\nThe lesions were examined dermatoscopically and further classified according to their dermatoscopic pattern. The most frequent finding was an unspecific pattern, which was found in 15/61 (24.6%) lesions; a fingerlike pattern was observed in 7 (11.5%), a mosaic pattern in 6 (9.8%), and a knoblike pattern in 3 (4.9%) cases. In almost half of the lesions, pattern combinations were seen, of which a fingerlike/knoblike pattern was the most common, observed in 11/61 (18.0%) cases. Among the vascular features, glomerular, hairpin/dotted, and glomerular/dotted vessels were the most frequent finding seen in 22 (36.0%), 15 (24.6%), and 10 (16.4%) of the 61 cases, respectively. In 10 (16.4%) lesions no vessels were detected. Hairpin vessels were more often seen in fingerlike (χ(2) = 39.31, P = .000) and glomerular/dotted vessels in knoblike/mosaic (χ(2) = 9.97, P = .008) pattern zones; vessels were frequently missing in unspecified (χ(2) = 8.54, P = .014) areas.\n\n\nLIMITATIONS\nOnly male patients were examined.\n\n\nCONCLUSIONS\nThere is a correlation between dermatoscopic patterns and vascular features reflecting the life stages of genital warts; dermatoscopy may be useful in the diagnosis of early-stage lesions.",
"title": ""
},
{
"docid": "f193816262da8f4edb523e172a83f953",
"text": "The European FF POIROT project (IST-2001-38248) aims at developing applications for tackling financial fraud, using formal ontological repositories as well as multilingual terminological resources. In this article, we want to focus on the development cycle towards an application recognizing several types of e-mail fraud, such as phishing, Nigerian advance fee fraud and lottery scam. The development cycle covers four tracks of development - language engineering, terminology engineering, knowledge engineering and system engineering. These development tracks are preceded by a problem determination phase and followed by a deployment phase. Each development track is supported by a methodology. All methodologies and phases in the development cycle will be discussed in detail",
"title": ""
},
{
"docid": "f5f70dca677752bcaa39db59988c088e",
"text": "To examine how inclusive our schools are after 25 years of educational reform, students with disabilities and their parents were asked to identify current barriers and provide suggestions for removing those barriers. Based on a series of focus group meetings, 15 students with mobility limitations (9-15 years) and 12 parents identified four categories of barriers at their schools: (a) the physical environment (e.g., narrow doorways, ramps); (b) intentional attitudinal barriers (e.g., isolation, bullying); (c) unintentional attitudinal barriers (e.g., lack of knowledge, understanding, or awareness); and (d) physical limitations (e.g., difficulty with manual dexterity). Recommendations for promoting accessibility and full participation are provided and discussed in relation to inclusive education efforts. Exceptional Children",
"title": ""
},
{
"docid": "eaeccd0d398e0985e293d680d2265528",
"text": "Deep networks have been successfully applied to visual tracking by learning a generic representation offline from numerous training images. However the offline training is time-consuming and the learned generic representation may be less discriminative for tracking specific objects. In this paper we present that, even without learning, simple convolutional networks can be powerful enough to develop a robust representation for visual tracking. In the first frame, we randomly extract a set of normalized patches from the target region as filters, which define a set of feature maps in the subsequent frames. These maps measure similarities between each filter and the useful local intensity patterns across the target, thereby encoding its local structural information. Furthermore, all the maps form together a global representation, which maintains the relative geometric positions of the local intensity patterns, and hence the inner geometric layout of the target is also well preserved. A simple and effective online strategy is adopted to update the representation, allowing it to robustly adapt to target appearance variations. Our convolution networks have surprisingly lightweight structure, yet perform favorably against several state-of-the-art methods on a large benchmark dataset with 50 challenging videos.",
"title": ""
},
{
"docid": "2a45f4ed21d9534a937129532cb32020",
"text": "BACKGROUND\nCore stability training has grown in popularity over 25 years, initially for back pain prevention or therapy. Subsequently, it developed as a mode of exercise training for health, fitness and sport. The scientific basis for traditional core stability exercise has recently been questioned and challenged, especially in relation to dynamic athletic performance. Reviews have called for clarity on what constitutes anatomy and function of the core, especially in healthy and uninjured people. Clinical research suggests that traditional core stability training is inappropriate for development of fitness for heath and sports performance. However, commonly used methods of measuring core stability in research do not reflect functional nature of core stability in uninjured, healthy and athletic populations. Recent reviews have proposed a more dynamic, whole body approach to training core stabilization, and research has begun to measure and report efficacy of these modes training. The purpose of this study was to assess extent to which these developments have informed people currently working and participating in sport.\n\n\nMETHODS\nAn online survey questionnaire was developed around common themes on core stability training as defined in the current scientific literature and circulated to a sample population of people working and participating in sport. Survey results were assessed against key elements of the current scientific debate.\n\n\nRESULTS\nPerceptions on anatomy and function of the core were gathered from a representative cohort of athletes, coaches, sports science and sports medicine practitioners (n = 241), along with their views on effectiveness of various current and traditional exercise training modes. Most popular method of testing and measuring core function was subjective assessment through observation (43%), while a quarter (22%) believed there was no effective method of measurement. Perceptions of people in sport reflect the scientific debate, and practitioners have adopted a more functional approach to core stability training. There was strong support for loaded, compound exercises performed upright, compared to moderate support for traditional core stability exercises. Half of the participants (50%) in the survey, however, still support a traditional isolation core stability training.\n\n\nCONCLUSION\nPerceptions in applied practice on core stability training for dynamic athletic performance are aligned to a large extent to the scientific literature.",
"title": ""
},
{
"docid": "2e864dcde57ea1716847f47977af0140",
"text": "I focus on the role of case studies in developing causal explanations. I distinguish between the theoretical purposes of case studies and the case selection strategies or research designs used to advance those objectives. I construct a typology of case studies based on their purposes: idiographic (inductive and theory-guided), hypothesis-generating, hypothesis-testing, and plausibility probe case studies. I then examine different case study research designs, including comparable cases, most and least likely cases, deviant cases, and process tracing, with attention to their different purposes and logics of inference. I address the issue of selection bias and the “single logic” debate, and I emphasize the utility of multi-method research.",
"title": ""
},
{
"docid": "885b3a5b386e642dc567c9b7944112d5",
"text": "Derived from the field of art curation, digital provenance is an unforgeable record of a digital object’s chain of successive custody and sequence of operations performed on it. Digital provenance forms an immutable directed acyclic graph (DAG) structure. Recent works in digital provenance have focused on provenance generation, storage and management frameworks in different fields. In this paper, we address two important aspects of digital provenance that have not been investigated thoroughly in existing works: 1) capturing the DAG structure of provenance and 2) supporting dynamic information sharing. We propose a scheme that uses signature-based mutual agreements between successive users to clearly delineate the transition of responsibility of the document as it is passed along the chain of users. In addition to preserving the properties of confidentiality, immutability and availability for a digital provenance chain, it supports the representation of DAG structures of provenance. Our scheme supports dynamic information sharing scenarios where the sequence of users who have custody of the document is not predetermined. Security analysis and empirical results indicate that our scheme improves the security of the existing Onion and PKLC provenance schemes with comparable performance. Keywords—Provenance, cryptography, signatures, integrity, confidentiality, availability",
"title": ""
},
{
"docid": "9bbc3e426c7602afaa857db85e754229",
"text": "Knowledge bases of real-world facts about entities and their relationships are useful resources for a variety of natural language processing tasks. However, because knowledge bases are typically incomplete, it is useful to be able to perform link prediction, i.e., predict whether a relationship not in the knowledge base is likely to be true. This paper combines insights from several previous link prediction models into a new embedding model STransE that represents each entity as a lowdimensional vector, and each relation by two matrices and a translation vector. STransE is a simple combination of the SE and TransE models, but it obtains better link prediction performance on two benchmark datasets than previous embedding models. Thus, STransE can serve as a new baseline for the more complex models in the link prediction task.",
"title": ""
},
{
"docid": "c3ca913fa81b2e79a2fff6d7a5e2fea7",
"text": "We present Query-Regression Network (QRN), a variant of Recurrent Neural Network (RNN) that is suitable for end-to-end machine comprehension. While previous work [18, 22] largely relied on external memory and global softmax attention mechanism, QRN is a single recurrent unit with internal memory and local sigmoid attention. Unlike most RNN-based models, QRN is able to effectively handle long-term dependencies and is highly parallelizable. In our experiments we show that QRN obtains the state-of-the-art result in end-to-end bAbI QA tasks [21].",
"title": ""
},
{
"docid": "3c27b3e11ba9924e9c102fc9ba7907b6",
"text": "The Visagraph IITM Eye Movement Recording System is an instrument that assesses reading eye movement efficiency and related parameters objectively. It also incorporates automated data analysis. In the standard protocol, the patient reads selections only at the level of their current school grade, or at the level that has been determined by a standardized reading test. In either case, deficient reading eye movements may be the consequence of a language-based reading disability, an oculomotor-based reading inefficiency, or both. We propose an addition to the standard protocol: the patient’s eye movements are recorded a second time with text that is significantly below the grade level of the initial reading. The goal is to determine which factor is primarily contributing to the patient’s reading problem, oculomotor or language. This concept is discussed in the context of two representative cases.",
"title": ""
},
{
"docid": "272d83db41293889d9ca790717983193",
"text": "The ability to measure the level of customer satisfaction with online shopping is essential in gauging the success and failure of e-commerce. To do so, Internet businesses must be able to determine and understand the values of their existing and potential customers. Hence, it is important for IS researchers to develop and validate a diverse array of metrics to comprehensively capture the attitudes and feelings of online customers. What factors make online shopping appealing to customers? What customer values take priority over others? This study’s purpose is to answer these questions, examining the role of several technology, shopping, and product factors on online customer satisfaction. This is done using a conjoint analysis of consumer preferences based on data collected from 188 young consumers. Results indicate that the three most important attributes to consumers for online satisfaction are privacy (technology factor), merchandising (product factor), and convenience (shopping factor). These are followed by trust, delivery, usability, product customization, product quality, and security. Implications of these findings are discussed and suggestions for future research are provided.",
"title": ""
},
{
"docid": "6de2b5fa5c8d3db9f9d599b6ebb56782",
"text": "Extreme sensitivity of soil organic carbon (SOC) to climate and land use change warrants further research in different terrestrial ecosystems. The aim of this study was to investigate the link between aggregate and SOC dynamics in a chronosequence of three different land uses of a south Chilean Andisol: a second growth Nothofagus obliqua forest (SGFOR), a grassland (GRASS) and a Pinus radiataplantation (PINUS). Total carbon content of the 0–10 cm soil layer was higher for GRASS (6.7 kg C m −2) than for PINUS (4.3 kg C m−2), while TC content of SGFOR (5.8 kg C m−2) was not significantly different from either one. High extractable oxalate and pyrophosphate Al concentrations (varying from 20.3–24.4 g kg −1, and 3.9– 11.1 g kg−1, respectively) were found in all sites. In this study, SOC and aggregate dynamics were studied using size and density fractionation experiments of the SOC, δ13C and total carbon analysis of the different SOC fractions, and C mineralization experiments. The results showed that electrostatic sorption between and among amorphous Al components and clay minerals is mainly responsible for the formation of metal-humus-clay complexes and the stabilization of soil aggregates. The process of ligand exchange between SOC and Al would be of minor importance resulting in the absence of aggregate hierarchy in this soil type. Whole soil C mineralization rate constants were highest for SGFOR and PINUS, followed by GRASS (respectively 0.495, 0.266 and 0.196 g CO 2-C m−2 d−1 for the top soil layer). In contrast, incubation experiments of isolated macro organic matter fractions gave opposite results, showing that the recalcitrance of the SOC decreased in another order: PINUS>SGFOR>GRASS. We deduced that electrostatic sorption processes and physical protection of SOC in soil aggregates were the main processes determining SOC stabilization. As a result, high aggregate carbon concentraCorrespondence to: D. Huygens ([email protected]) tions, varying from 148 till 48 g kg −1, were encountered for all land use sites. Al availability and electrostatic charges are dependent on pH, resulting in an important influence of soil pH on aggregate stability. Recalcitrance of the SOC did not appear to largely affect SOC stabilization. Statistical correlations between extractable amorphous Al contents, aggregate stability and C mineralization rate constants were encountered, supporting this hypothesis. Land use changes affected SOC dynamics and aggregate stability by modifying soil pH (and thus electrostatic charges and available Al content), root SOC input and management practices (such as ploughing and accompanying drying of the soil).",
"title": ""
},
{
"docid": "f26df52af74f9c2f51ff0e56daeb4c38",
"text": "Browsing is part of the information seeking process, used when information needs are ill-defined or unspecific. Browsing and searching are often interleaved during information seeking to accommodate changing awareness of information needs. Digital Libraries often support full-text search, but are not so helpful in supporting browsing. Described here is a novel browsing system created for the Greenstone software used by the New Zealand Digital Library that supports users in a more natural approach to the information seeking process.",
"title": ""
},
{
"docid": "ad14a9f120aedc84abc99f1715e6769b",
"text": "We introduce an interactive tool which enables a user to quickly assemble an architectural model directly over a 3D point cloud acquired from large-scale scanning of an urban scene. The user loosely defines and manipulates simple building blocks, which we call SmartBoxes, over the point samples. These boxes quickly snap to their proper locations to conform to common architectural structures. The key idea is that the building blocks are smart in the sense that their locations and sizes are automatically adjusted on-the-fly to fit well to the point data, while at the same time respecting contextual relations with nearby similar blocks. SmartBoxes are assembled through a discrete optimization to balance between two snapping forces defined respectively by a data-fitting term and a contextual term, which together assist the user in reconstructing the architectural model from a sparse and noisy point cloud. We show that a combination of the user's interactive guidance and high-level knowledge about the semantics of the underlying model, together with the snapping forces, allows the reconstruction of structures which are partially or even completely missing from the input.",
"title": ""
},
{
"docid": "3ca7c89e12c81ac90d5d12d6f9a2b7f2",
"text": "Texture classification is one of the problems which has been paid much attention on by computer scientists since late 90s. If texture classification is done correctly and accurately, it can be used in many cases such as Pattern recognition, object tracking, and shape recognition. So far, there have been so many methods offered to solve this problem. Near all these methods have tried to extract and define features to separate different labels of textures really well. This article has offered an approach which has an overall process on the images of textures based on Local binary pattern and Gray Level Co-occurrence matrix and then by edge detection, and finally, extracting the statistical features from the images would classify them. Although, this approach is a general one and is could be used in different applications, the method has been tested on the stone texture and the results have been compared with some of the previous approaches to prove the quality of proposed approach. Keywords-Texture Classification, Gray level Co occurrence, Local Binary Pattern, Statistical Features",
"title": ""
},
{
"docid": "eb7990a677cd3f96a439af6620331400",
"text": "Solving the visual symbol grounding problem has long been a goal of artificial intelligence. The field appears to be advancing closer to this goal with recent breakthroughs in deep learning for natural language grounding in static images. In this paper, we propose to translate videos directly to sentences using a unified deep neural network with both convolutional and recurrent structure. Described video datasets are scarce, and most existing methods have been applied to toy domains with a small vocabulary of possible words. By transferring knowledge from 1.2M+ images with category labels and 100,000+ images with captions, our method is able to create sentence descriptions of open-domain videos with large vocabularies. We compare our approach with recent work using language generation metrics, subject, verb, and object prediction accuracy, and a human evaluation.",
"title": ""
},
{
"docid": "e20d26ce3dea369ae6817139ff243355",
"text": "This article explores the roots of white support for capital punishment in the United States. Our analysis addresses individual-level and contextual factors, paying particular attention to how racial attitudes and racial composition influence white support for capital punishment. Our findings suggest that white support hinges on a range of attitudes wider than prior research has indicated, including social and governmental trust and individualist and authoritarian values. Extending individual-level analyses, we also find that white responses to capital punishment are sensitive to local context. Perhaps most important, our results clarify the impact of race in two ways. First, racial prejudice emerges here as a comparatively strong predictor of white support for the death penalty. Second, black residential proximity functions to polarize white opinion along lines of racial attitude. As the black percentage of county residents rises, so too does the impact of racial prejudice on white support for capital punishment.",
"title": ""
},
{
"docid": "3196c06c66b49c052d07ced0de683d02",
"text": "Programming by Examples (PBE) involves synthesizing intended programs in an underlying domain-specific language from examplebased specifications. PBE systems are already revolutionizing the application domain of data wrangling and are set to significantly impact several other domains including code refactoring. There are three key components in a PBE system. (i) A search algorithm that can efficiently search for programs that are consistent with the examples provided by the user. We leverage a divide-and-conquerbased deductive search paradigm that inductively reduces the problem of synthesizing a program expression of a certain kind that satisfies a given specification into sub-problems that refer to sub-expressions or sub-specifications. (ii) Program ranking techniques to pick an intended program from among the many that satisfy the examples provided by the user. We leverage features of the program structure as well of the outputs generated by the program on test inputs. (iii) User interaction models to facilitate usability and debuggability. We leverage active-learning techniques based on clustering inputs and synthesizing multiple programs. Each of these PBE components leverage both symbolic reasoning and heuristics. We make the case for synthesizing these heuristics from training data using appropriate machine learning methods. This can not only lead to better heuristics, but can also enable easier development, maintenance, and even personalization of a PBE system.",
"title": ""
}
] | scidocsrr |
f279df399f50407436670d9821df0891 | Training with Exploration Improves a Greedy Stack LSTM Parser | [
{
"docid": "b5f7511566b902bc206228dc3214c211",
"text": "In the imitation learning paradigm algorithms learn from expert demonstrations in order to become able to accomplish a particular task. Daumé III et al. (2009) framed structured prediction in this paradigm and developed the search-based structured prediction algorithm (Searn) which has been applied successfully to various natural language processing tasks with state-of-the-art performance. Recently, Ross et al. (2011) proposed the dataset aggregation algorithm (DAgger) and compared it with Searn in sequential prediction tasks. In this paper, we compare these two algorithms in the context of a more complex structured prediction task, namely biomedical event extraction. We demonstrate that DAgger has more stable performance and faster learning than Searn, and that these advantages are more pronounced in the parameter-free versions of the algorithms.",
"title": ""
}
] | [
{
"docid": "73270e8140d763510d97f7bd2fdd969e",
"text": "Inspired by the progress of deep neural network (DNN) in single-media retrieval, the researchers have applied the DNN to cross-media retrieval. These methods are mainly two-stage learning: the first stage is to generate the separate representation for each media type, and the existing methods only model the intra-media information but ignore the inter-media correlation with the rich complementary context to the intra-media information. The second stage is to get the shared representation by learning the cross-media correlation, and the existing methods learn the shared representation through a shallow network structure, which cannot fully capture the complex cross-media correlation. For addressing the above problems, we propose the cross-media multiple deep network (CMDN) to exploit the complex cross-media correlation by hierarchical learning. In the first stage, CMDN jointly models the intra-media and intermedia information for getting the complementary separate representation of each media type. In the second stage, CMDN hierarchically combines the inter-media and intra-media representations to further learn the rich cross-media correlation by a deeper two-level network strategy, and finally get the shared representation by a stacked network style. Experiment results show that CMDN achieves better performance comparing with several state-of-the-art methods on 3 extensively used cross-media datasets.",
"title": ""
},
{
"docid": "a0db56f55e2d291cb7cf871c064cf693",
"text": "It's being very important to listen to social media streams whether it's Twitter, Facebook, Messenger, LinkedIn, email or even company own application. As many customers may be using this streams to reach out to company because they need help. The company have setup social marketing team to monitor this stream. But due to huge volumes of users it's very difficult to analyses each and every social message and take a relevant action to solve users grievances, which lead to many unsatisfied customers or may even lose a customer. This papers proposes a system architecture which will try to overcome the above shortcoming by analyzing messages of each ejabberd users to check whether it's actionable or not. If it's actionable then an automated Chatbot will initiates conversation with that user and help the user to resolve the issue by providing a human way interactions using LUIS and cognitive services. To provide a highly robust, scalable and extensible architecture, this system is implemented on AWS public cloud.",
"title": ""
},
{
"docid": "fe0120f7d74ad63dbee9c3cd5ff81e6f",
"text": "Background: Software fault prediction is the process of developing models that can be used by the software practitioners in the early phases of software development life cycle for detecting faulty constructs such as modules or classes. There are various machine learning techniques used in the past for predicting faults. Method: In this study we perform a systematic review studies from January 1991 to October 2013 in the literature that use the machine learning techniques for software fault prediction. We assess the performance capability of the machine learning techniques in existing research for software fault prediction. We also compare the performance of the machine learning techniques with the",
"title": ""
},
{
"docid": "4e8040c9336cf7d847d938b905f8f81d",
"text": "Many cluster management systems (CMSs) have been proposed to share a single cluster with multiple distributed computing systems. However, none of the existing approaches can handle distributed machine learning (ML) workloads given the following criteria: high resource utilization, fair resource allocation and low sharing overhead. To solve this problem, we propose a new CMS named Dorm, incorporating a dynamically-partitioned cluster management mechanism and an utilization-fairness optimizer. Specifically, Dorm uses the container-based virtualization technique to partition a cluster, runs one application per partition, and can dynamically resize each partition at application runtime for resource efficiency and fairness. Each application directly launches its tasks on the assigned partition without petitioning for resources frequently, so Dorm imposes flat sharing overhead. Extensive performance evaluations showed that Dorm could simultaneously increase the resource utilization by a factor of up to 2.32, reduce the fairness loss by a factor of up to 1.52, and speed up popular distributed ML applications by a factor of up to 2.72, compared to existing approaches. Dorm's sharing overhead is less than 5% in most cases.",
"title": ""
},
{
"docid": "f5a934dc200b27747d3452f5a14c24e5",
"text": "Psoriasis vulgaris is a common and often chronic inflammatory skin disease. The incidence of psoriasis in Western industrialized countries ranges from 1.5% to 2%. Patients afflicted with severe psoriasis vulgaris may experience a significant reduction in quality of life. Despite the large variety of treatment options available, surveys have shown that patients still do not received optimal treatments. To optimize the treatment of psoriasis in Germany, the Deutsche Dermatologi sche Gesellschaft (DDG) and the Berufsverband Deutscher Dermatologen (BVDD) have initiated a project to develop evidence-based guidelines for the management of psoriasis. They were first published in 2006 and updated in 2011. The Guidelines focus on induction therapy in cases of mild, moderate and severe plaque-type psoriasis in adults including systemic therapy, UV therapy and topical therapies. The therapeutic recommendations were developed based on the results of a systematic literature search and were finalized during a consensus meeting using structured consensus methods (nominal group process).",
"title": ""
},
{
"docid": "da986950f6bbad36de5e9cc55d04e798",
"text": "Digital information is accumulating at an astounding rate, straining our ability to store and archive it. DNA is among the most dense and stable information media known. The development of new technologies in both DNA synthesis and sequencing make DNA an increasingly feasible digital storage medium. We developed a strategy to encode arbitrary digital information in DNA, wrote a 5.27-megabit book using DNA microchips, and read the book by using next-generation DNA sequencing.",
"title": ""
},
{
"docid": "d1f02e2f57cffbc17387de37506fddc9",
"text": "The task of matching patterns in graph-structured data has applications in such diverse areas as computer vision, biology, electronics, computer aided design, social networks, and intelligence analysis. Consequently, work on graph-based pattern matching spans a wide range of research communities. Due to variations in graph characteristics and application requirements, graph matching is not a single problem, but a set of related problems. This paper presents a survey of existing work on graph matching, describing variations among problems, general and specific solution approaches, evaluation techniques, and directions for further research. An emphasis is given to techniques that apply to general graphs with semantic characteristics.",
"title": ""
},
{
"docid": "b0b2c4c321b5607cd6ebda817258921d",
"text": "In recent years, classification of colon biopsy images has become an active research area. Traditionally, colon cancer is diagnosed using microscopic analysis. However, the process is subjective and leads to considerable inter/intra observer variation. Therefore, reliable computer-aided colon cancer detection techniques are in high demand. In this paper, we propose a colon biopsy image classification system, called CBIC, which benefits from discriminatory capabilities of information rich hybrid feature spaces, and performance enhancement based on ensemble classification methodology. Normal and malignant colon biopsy images differ with each other in terms of the color distribution of different biological constituents. The colors of different constituents are sharp in normal images, whereas the colors diffuse with each other in malignant images. In order to exploit this variation, two feature types, namely color components based statistical moments (CCSM) and Haralick features have been proposed, which are color components based variants of their traditional counterparts. Moreover, in normal colon biopsy images, epithelial cells possess sharp and well-defined edges. Histogram of oriented gradients (HOG) based features have been employed to exploit this information. Different combinations of hybrid features have been constructed from HOG, CCSM, and Haralick features. The minimum Redundancy Maximum Relevance (mRMR) feature selection method has been employed to select meaningful features from individual and hybrid feature sets. Finally, an ensemble classifier based on majority voting has been proposed, which classifies colon biopsy images using the selected features. Linear, RBF, and sigmoid SVM have been employed as base classifiers. The proposed system has been tested on 174 colon biopsy images, and improved performance (=98.85%) has been observed compared to previously reported studies. Additionally, the use of mRMR method has been justified by comparing the performance of CBIC on original and reduced feature sets.",
"title": ""
},
{
"docid": "0f9ef379901c686df08dd0d1bb187e22",
"text": "This paper studies the minimum achievable source coding rate as a function of blocklength <i>n</i> and probability ϵ that the distortion exceeds a given level <i>d</i> . Tight general achievability and converse bounds are derived that hold at arbitrary fixed blocklength. For stationary memoryless sources with separable distortion, the minimum rate achievable is shown to be closely approximated by <i>R</i>(<i>d</i>) + √<i>V</i>(<i>d</i>)/(<i>n</i>) <i>Q</i><sup>-1</sup>(ϵ), where <i>R</i>(<i>d</i>) is the rate-distortion function, <i>V</i>(<i>d</i>) is the rate dispersion, a characteristic of the source which measures its stochastic variability, and <i>Q</i><sup>-1</sup>(·) is the inverse of the standard Gaussian complementary cumulative distribution function.",
"title": ""
},
{
"docid": "1348ee3316643f4269311b602b71d499",
"text": "This paper describes our proposed solution for SemEval 2017 Task 1: Semantic Textual Similarity (Daniel Cer and Specia, 2017). The task aims at measuring the degree of equivalence between sentences given in English. Performance is evaluated by computing Pearson Correlation scores between the predicted scores and human judgements. Our proposed system consists of two subsystems and one regression model for predicting STS scores. The two subsystems are designed to learn Paraphrase and Event Embeddings that can take the consideration of paraphrasing characteristics and sentence structures into our system. The regression model associates these embeddings to make the final predictions. The experimental result shows that our system acquires 0.8 of Pearson Correlation Scores in this task.",
"title": ""
},
{
"docid": "49717f07b8b4a3da892c1bb899f7a464",
"text": "Single cells were recorded in the visual cortex of monkeys trained to attend to stimuli at one location in the visual field and ignore stimuli at another. When both locations were within the receptive field of a cell in prestriate area V4 or the inferior temporal cortex, the response to the unattended stimulus was dramatically reduced. Cells in the striate cortex were unaffected by attention. The filtering of irrelevant information from the receptive fields of extrastriate neurons may underlie the ability to identify and remember the properties of a particular object out of the many that may be represented on the retina.",
"title": ""
},
{
"docid": "6421979368a138e4b21ab7d9602325ff",
"text": "In recent years, despite several risk management models proposed by different researchers, software projects still have a high degree of failures. Improper risk assessment during software development was the major reason behind these unsuccessful projects as risk analysis was done on overall projects. This work attempts in identifying key risk factors and risk types for each of the development phases of SDLC, which would help in identifying the risks at a much early stage of development.",
"title": ""
},
{
"docid": "d76b7b25bce29cdac24015f8fa8ee5bb",
"text": "A circularly polarized magnetoelectric dipole antenna with high efficiency based on printed ridge gap waveguide is presented. The antenna gain is improved by using a wideband lens in front of the antennas. The lens consists of three layers dual-polarized mu-near zero (MNZ) inclusions. Each layer consists of a <inline-formula> <tex-math notation=\"LaTeX\">$3\\times4$ </tex-math></inline-formula> MNZ unit cell. The measured results indicate that the magnitude of <inline-formula> <tex-math notation=\"LaTeX\">$S_{11}$ </tex-math></inline-formula> is below −10 dB in the frequency range of 29.5–37 GHz. The resulting 3-dB axial ratio is over a frequency range of 32.5–35 GHz. The measured realized gain of the antenna is more than 10 dBi over a frequency band of 31–35 GHz achieving a radiation efficiency of 94% at 34 GHz.",
"title": ""
},
{
"docid": "3fa30df910c964bb2bf27a885aa59495",
"text": "In an Intelligent Environment, he user and the environment work together in a unique manner; the user expresses what he wishes to do, and the environment recognizes his intentions and helps out however it can. If well-implemented, such an environment allows the user to interact with it in the manner that is most natural for him personally. He should need virtually no time to learn to use it and should be more productive once he has. But to implement a useful and natural Intelligent Environment, he designers are faced with a daunting task: they must design a software system that senses what its users do, understands their intentions, and then responds appropriately. In this paper we argue that, in order to function reasonably in any of these ways, an Intelligent Environment must make use of declarative representations of what the user might do. We present our evidence in the context of the Intelligent Classroom, a facility that aids a speaker in this way and uses its understanding to produce a video of his presentation.",
"title": ""
},
{
"docid": "5b07bc318cb0f5dd7424cdcc59290d31",
"text": "The current practice used in the design of physical interactive products (such as handheld devices), often suffers from a divide between exploration of form and exploration of interactivity. This can be attributed, in part, to the fact that working prototypes are typically expensive, take a long time to manufacture, and require specialized skills and tools not commonly available in design studios.We have designed a prototyping tool that, we believe, can significantly reduce this divide. The tool allows designers to rapidly create functioning, interactive, physical prototypes early in the design process using a collection of wireless input components (buttons, sliders, etc.) and a sketch of form. The input components communicate with Macromedia Director to enable interactivity.We believe that this tool can improve the design practice by: a) Improving the designer's ability to explore both the form and interactivity of the product early in the design process, b) Improving the designer's ability to detect problems that emerge from the combination of the form and the interactivity, c) Improving users' ability to communicate their ideas, needs, frustrations and desires, and d) Improving the client's understanding of the proposed design, resulting in greater involvement and support for the design.",
"title": ""
},
{
"docid": "ae3d959972d673d24e6d0b7a0567323e",
"text": "Traditional data on influenza vaccination has several limitations: high cost, limited coverage of underrepresented groups, and low sensitivity to emerging public health issues. Social media, such as Twitter, provide an alternative way to understand a population’s vaccination-related opinions and behaviors. In this study, we build and employ several natural language classifiers to examine and analyze behavioral patterns regarding influenza vaccination in Twitter across three dimensions: temporality (by week and month), geography (by US region), and demography (by gender). Our best results are highly correlated official government data, with a correlation over 0.90, providing validation of our approach. We then suggest a number of directions for future work.",
"title": ""
},
{
"docid": "ff4c069ab63ced5979cf6718eec30654",
"text": "Dowser is a ‘guided’ fuzzer that combines taint tracking, program analysis and symbolic execution to find buffer overflow and underflow vulnerabilities buried deep in a program’s logic. The key idea is that analysis of a program lets us pinpoint the right areas in the program code to probe and the appropriate inputs to do so. Intuitively, for typical buffer overflows, we need consider only the code that accesses an array in a loop, rather than all possible instructions in the program. After finding all such candidate sets of instructions, we rank them according to an estimation of how likely they are to contain interesting vulnerabilities. We then subject the most promising sets to further testing. Specifically, we first use taint analysis to determine which input bytes influence the array index and then execute the program symbolically, making only this set of inputs symbolic. By constantly steering the symbolic execution along branch outcomes most likely to lead to overflows, we were able to detect deep bugs in real programs (like the nginx webserver, the inspircd IRC server, and the ffmpeg videoplayer). Two of the bugs we found were previously undocumented buffer overflows in ffmpeg and the poppler PDF rendering library.",
"title": ""
},
{
"docid": "21925b0a193ebb3df25c676d8683d895",
"text": "The use of dialogue systems in vehicles raises the problem of making sure that the dialogue does not distract the driver from the primary task of driving. Earlier studies have indicated that humans are very apt at adapting the dialogue to the traffic situation and the cognitive load of the driver. The goal of this paper is to investigate strategies for interrupting and resuming in, as well as changing topic domain of, spoken human-human in-vehicle dialogue. The results show a large variety of strategies being used, and indicate that the choice of resumption and domain-switching strategy depends partly on the topic domain being resumed, and partly on the role of the speaker (driver or passenger). These results will be used as a basis for the development of dialogue strategies for interruption, resumption and domain-switching in the DICO in-vehicle dialogue system.",
"title": ""
},
{
"docid": "58f1ba92eb199f4d105bf262b30dbbc5",
"text": "Before the big data era, scene recognition was often approached with two-step inference using localized intermediate representations (objects, topics, and so on). One of such approaches is the semantic manifold (SM), in which patches and images are modeled as points in a semantic probability simplex. Patch models are learned resorting to weak supervision via image labels, which leads to the problem of scene categories co-occurring in this semantic space. Fortunately, each category has its own co-occurrence patterns that are consistent across the images in that category. Thus, discovering and modeling these patterns are critical to improve the recognition performance in this representation. Since the emergence of large data sets, such as ImageNet and Places, these approaches have been relegated in favor of the much more powerful convolutional neural networks (CNNs), which can automatically learn multi-layered representations from the data. In this paper, we address many limitations of the original SM approach and related works. We propose discriminative patch representations using neural networks and further propose a hybrid architecture in which the semantic manifold is built on top of multiscale CNNs. Both representations can be computed significantly faster than the Gaussian mixture models of the original SM. To combine multiple scales, spatial relations, and multiple features, we formulate rich context models using Markov random fields. To solve the optimization problem, we analyze global and local approaches, where a top–down hierarchical algorithm has the best performance. Experimental results show that exploiting different types of contextual relations jointly consistently improves the recognition accuracy.",
"title": ""
},
{
"docid": "bbf987eef74d76cf2916ae3080a2b174",
"text": "The facial system plays an important role in human-robot interaction. EveR-4 H33 is a head system for an android face controlled by thirty-three motors. It consists of three layers: a mechanical layer, an inner cover layer and an outer cover layer. Motors are attached under the skin and some motors are correlated with each other. Some expressions cannot be shown by moving just one motor. In addition, moving just one motor can cause damage to other motors or the skin. To solve these problems, a facial muscle control method that controls motors in a correlated manner is required. We designed a facial muscle control method and applied it to EveR-4 H33. We develop the actress robot EveR-4A by applying the EveR-4 H33 to the 24 degrees of freedom upper body and mannequin legs. EveR-4A shows various facial expressions with lip synchronization using our facial muscle control method.",
"title": ""
}
] | scidocsrr |
641cb2cdc570ee6410bc86e68ecb1800 | PGX.D: a fast distributed graph processing engine | [
{
"docid": "e92ab865f33c7548c21ba99785912d03",
"text": "Given a query graph q and a data graph g, the subgraph isomorphism search finds all occurrences of q in g and is considered one of the most fundamental query types for many real applications. While this problem belongs to NP-hard, many algorithms have been proposed to solve it in a reasonable time for real datasets. However, a recent study has shown, through an extensive benchmark with various real datasets, that all existing algorithms have serious problems in their matching order selection. Furthermore, all algorithms blindly permutate all possible mappings for query vertices, often leading to useless computations. In this paper, we present an efficient and robust subgraph search solution, called TurboISO, which is turbo-charged with two novel concepts, candidate region exploration and the combine and permute strategy (in short, Comb/Perm). The candidate region exploration identifies on-the-fly candidate subgraphs (i.e, candidate regions), which contain embeddings, and computes a robust matching order for each candidate region explored. The Comb/Perm strategy exploits the novel concept of the neighborhood equivalence class (NEC). Each query vertex in the same NEC has identically matching data vertices. During subgraph isomorphism search, Comb/Perm generates only combinations for each NEC instead of permutating all possible enumerations. Thus, if a chosen combination is determined to not contribute to a complete solution, all possible permutations for that combination will be safely pruned. Extensive experiments with many real datasets show that TurboISO consistently and significantly outperforms all competitors by up to several orders of magnitude.",
"title": ""
},
{
"docid": "88862d86e43d491ec4368410a61c13fb",
"text": "With the proliferation of large, irregular, and sparse relational datasets, new storage and analysis platforms have arisen to fill gaps in performance and capability left by conventional approaches built on traditional database technologies and query languages. Many of these platforms apply graph structures and analysis techniques to enable users to ingest, update, query, and compute on the topological structure of the network represented as sets of edges relating sets of vertices. To store and process Facebook-scale datasets, software and algorithms must be able to support data sources with billions of edges, update rates of millions of updates per second, and complex analysis kernels. These platforms must provide intuitive interfaces that enable graph experts and novice programmers to write implementations of common graph algorithms. In this paper, we conduct a qualitative study and a performance comparison of 12 open source graph databases using four fundamental graph algorithms on networks containing up to 256 million edges.",
"title": ""
}
] | [
{
"docid": "5f30867cb3071efa8fb0d34447b8a8f6",
"text": "Money laundering is a global problem that affects all countries to various degrees. Although, many countries take benefits from money laundering, by accepting the money from laundering but keeping the crime abroad, at the long run, “money laundering attracts crime”. Criminals come to know a country, create networks and eventually also locate their criminal activities there. Most financial institutions have been implementing antimoney laundering solutions (AML) to fight investment fraud. The key pillar of a strong Anti-Money Laundering system for any financial institution depends mainly on a well-designed and effective monitoring system. The main purpose of the Anti-Money Laundering transactions monitoring system is to identify potential suspicious behaviors embedded in legitimate transactions. This paper presents a monitor framework that uses various techniques to enhance the monitoring capabilities. This framework is depending on rule base monitoring, behavior detection monitoring, cluster monitoring and link analysis based monitoring. The monitor detection processes are based on a money laundering deterministic finite automaton that has been obtained from their corresponding regular expressions. Index Terms – Anti Money Laundering system, Money laundering monitoring and detecting, Cycle detection monitoring, Suspected Link monitoring.",
"title": ""
},
{
"docid": "9a7e491e4d4490f630b55a94703a6f00",
"text": "Learning generic and robust feature representations with data from multiple domains for the same problem is of great value, especially for the problems that have multiple datasets but none of them are large enough to provide abundant data variations. In this work, we present a pipeline for learning deep feature representations from multiple domains with Convolutional Neural Networks (CNNs). When training a CNN with data from all the domains, some neurons learn representations shared across several domains, while some others are effective only for a specific one. Based on this important observation, we propose a Domain Guided Dropout algorithm to improve the feature learning procedure. Experiments show the effectiveness of our pipeline and the proposed algorithm. Our methods on the person re-identification problem outperform stateof-the-art methods on multiple datasets by large margins.",
"title": ""
},
{
"docid": "2e8251644f82f3a965cf6360416eaaaa",
"text": "The past decade has witnessed a rapid proliferation of video cameras in all walks of life and has resulted in a tremendous explosion of video content. Several applications such as content-based video annotation and retrieval, highlight extraction and video summarization require recognition of the activities occurring in the video. The analysis of human activities in videos is an area with increasingly important consequences from security and surveillance to entertainment and personal archiving. Several challenges at various levels of processing-robustness against errors in low-level processing, view and rate-invariant representations at midlevel processing and semantic representation of human activities at higher level processing-make this problem hard to solve. In this review paper, we present a comprehensive survey of efforts in the past couple of decades to address the problems of representation, recognition, and learning of human activities from video and related applications. We discuss the problem at two major levels of complexity: 1) \"actions\" and 2) \"activities.\" \"Actions\" are characterized by simple motion patterns typically executed by a single human. \"Activities\" are more complex and involve coordinated actions among a small number of humans. We will discuss several approaches and classify them according to their ability to handle varying degrees of complexity as interpreted above. We begin with a discussion of approaches to model the simplest of action classes known as atomic or primitive actions that do not require sophisticated dynamical modeling. Then, methods to model actions with more complex dynamics are discussed. The discussion then leads naturally to methods for higher level representation of complex activities.",
"title": ""
},
{
"docid": "c3aaa53892e636f34d6923831a3b66bc",
"text": "OBJECTIVES\nTo evaluate whether 7-mm-long implants could be an alternative to longer implants placed in vertically augmented posterior mandibles.\n\n\nMATERIALS AND METHODS\nSixty patients with posterior mandibular edentulism with 7-8 mm bone height above the mandibular canal were randomized to either vertical augmentation with anorganic bovine bone blocks and delayed 5-month placement of ≥10 mm implants or to receive 7-mm-long implants. Four months after implant placement, provisional prostheses were delivered, replaced after 4 months, by definitive prostheses. The outcome measures were prosthesis and implant failures, any complications and peri-implant marginal bone levels. All patients were followed to 1 year after loading.\n\n\nRESULTS\nOne patient dropped out from the short implant group. In two augmented mandibles, there was not sufficient bone to place 10-mm-long implants possibly because the blocks had broken apart during insertion. One prosthesis could not be placed when planned in the 7 mm group vs. three prostheses in the augmented group, because of early failure of one implant in each patient. Four complications (wound dehiscence) occurred during graft healing in the augmented group vs. none in the 7 mm group. No complications occurred after implant placement. These differences were not statistically significant. One year after loading, patients of both groups lost an average of 1 mm of peri-implant bone. There no statistically significant differences in bone loss between groups.\n\n\nCONCLUSIONS\nWhen residual bone height over the mandibular canal is between 7 and 8 mm, 7 mm short implants might be a preferable choice than vertical augmentation, reducing the chair time, expenses and morbidity. These 1-year preliminary results need to be confirmed by follow-up of at least 5 years.",
"title": ""
},
{
"docid": "b8fa649e8b5a60a05aad257a0a364b51",
"text": "This work intends to build a Game Mechanics Ontology based on the mechanics category presented in BoardGameGeek.com vis à vis the formal concepts from the MDA framework. The 51 concepts presented in BoardGameGeek (BGG) as game mechanics are analyzed and arranged in a systemic way in order to build a domain sub-ontology in which the root concept is the mechanics as defined in MDA. The relations between the terms were built from its available descriptions as well as from the authors’ previous experiences. Our purpose is to show that a set of terms commonly accepted by players can lead us to better understand how players perceive the games components that are closer to the designer. The ontology proposed in this paper is not exhaustive. The intent of this work is to supply a tool to game designers, scholars, and others that see game artifacts as study objects or are interested in creating games. However, although it can be used as a starting point for games construction or study, the proposed Game Mechanics Ontology should be seen as the seed of a domain ontology encompassing game mechanics in general.",
"title": ""
},
{
"docid": "7fd1ac60f18827dbe10bc2c10f715ae9",
"text": "Sentiment analysis in Twitter is a field that has recently attracted research interest. Twitter is one of the most popular microblog platforms on which users can publish their thoughts and opinions. Sentiment analysis in Twitter tackles the problem of analyzing the tweets in terms of the opinion they express. This survey provides an overview of the topic by investigating and briefly describing the algorithms that have been proposed for sentiment analysis in Twitter. The presented studies are categorized according to the approach they follow. In addition, we discuss fields related to sentiment analysis in Twitter including Twitter opinion retrieval, tracking sentiments over time, irony detection, emotion detection, and tweet sentiment quantification, tasks that have recently attracted increasing attention. Resources that have been used in the Twitter sentiment analysis literature are also briefly presented. The main contributions of this survey include the presentation of the proposed approaches for sentiment analysis in Twitter, their categorization according to the technique they use, and the discussion of recent research trends of the topic and its related fields.",
"title": ""
},
{
"docid": "658fbe3164e93515d4222e634b413751",
"text": "A prediction market is a place where individuals can wager on the outcomes of future events. Those who forecast the outcome correctly win money, and if they forecast incorrectly, they lose money. People value money, so they are incentivized to forecast such outcomes as accurately as they can. Thus, the price of a prediction market can serve as an excellent indicator of how likely an event is to occur [1, 2]. Augur is a decentralized platform for prediction markets. Our goal here is to provide a blueprint of a decentralized prediction market using Bitcoin’s input/output-style transactions. Many theoretical details of this project, such as its game-theoretic underpinning, are touched on lightly or not at all. This work builds on (and is intended to be read as a companion to) the theoretical foundation established in [3].",
"title": ""
},
{
"docid": "2f7e5807415398cb95f8f1ab36a0438f",
"text": "We present a Convolutional Neural Network (CNN) regression based framework for 2-D/3-D medical image registration, which directly estimates the transformation parameters from image features extracted from the DRR and the X-ray images using learned hierarchical regressors. Our framework consists of learning and application stages. In the learning stage, CNN regressors are trained using supervised machine learning to reveal the correlation between the transformation parameters and the image features. In the application stage, CNN regressors are applied on extracted image features in a hierarchical manner to estimate the transformation parameters. Our experiment results demonstrate that the proposed method can achieve real-time 2-D/3-D registration with very high (i.e., sub-milliliter) accuracy.",
"title": ""
},
{
"docid": "083cb6546aecdc12c2a1e36a9b8d9b67",
"text": "Machine translation systems achieve near human-level performance on some languages, yet their effectiveness strongly relies on the availability of large amounts of parallel sentences, which hinders their applicability to the majority of language pairs. This work investigates how to learn to translate when having access to only large monolingual corpora in each language. We propose two model variants, a neural and a phrase-based model. Both versions leverage a careful initialization of the parameters, the denoising effect of language models and automatic generation of parallel data by iterative back-translation. These models are significantly better than methods from the literature, while being simpler and having fewer hyper-parameters. On the widely used WMT’14 English-French and WMT’16 German-English benchmarks, our models respectively obtain 28.1 and 25.2 BLEU points without using a single parallel sentence, outperforming the state of the art by more than 11 BLEU points. On low-resource languages like English-Urdu and English-Romanian, our methods achieve even better results than semisupervised and supervised approaches leveraging the paucity of available bitexts. Our code for NMT and PBSMT is publicly available.1",
"title": ""
},
{
"docid": "12f5447d9e83890c3e953e03a2e92c8f",
"text": "BACKGROUND\nLong-term continuous systolic blood pressure (SBP) and heart rate (HR) monitors are of tremendous value to medical (cardiovascular, circulatory and cerebrovascular management), wellness (emotional and stress tracking) and fitness (performance monitoring) applications, but face several major impediments, such as poor wearability, lack of widely accepted robust SBP models and insufficient proofing of the generalization ability of calibrated models.\n\n\nMETHODS\nThis paper proposes a wearable cuff-less electrocardiography (ECG) and photoplethysmogram (PPG)-based SBP and HR monitoring system and many efforts are made focusing on above challenges. Firstly, both ECG/PPG sensors are integrated into a single-arm band to provide a super wearability. A highly convenient but challenging single-lead configuration is proposed for weak single-arm-ECG acquisition, instead of placing the electrodes on the chest, or two wrists. Secondly, to identify heartbeats and estimate HR from the motion artifacts-sensitive weak arm-ECG, a machine learning-enabled framework is applied. Then ECG-PPG heartbeat pairs are determined for pulse transit time (PTT) measurement. Thirdly, a PTT&HR-SBP model is applied for SBP estimation, which is also compared with many PTT-SBP models to demonstrate the necessity to introduce HR information in model establishment. Fourthly, the fitted SBP models are further evaluated on the unseen data to illustrate the generalization ability. A customized hardware prototype was established and a dataset collected from ten volunteers was acquired to evaluate the proof-of-concept system.\n\n\nRESULTS\nThe semi-customized prototype successfully acquired from the left upper arm the PPG signal, and the weak ECG signal, the amplitude of which is only around 10% of that of the chest-ECG. The HR estimation has a mean absolute error (MAE) and a root mean square error (RMSE) of only 0.21 and 1.20 beats per min, respectively. Through the comparative analysis, the PTT&HR-SBP models significantly outperform the PTT-SBP models. The testing performance is 1.63 ± 4.44, 3.68, 4.71 mmHg in terms of mean error ± standard deviation, MAE and RMSE, respectively, indicating a good generalization ability on the unseen fresh data.\n\n\nCONCLUSIONS\nThe proposed proof-of-concept system is highly wearable, and its robustness is thoroughly evaluated on different modeling strategies and also the unseen data, which are expected to contribute to long-term pervasive hypertension, heart health and fitness management.",
"title": ""
},
{
"docid": "7e74cc21787c1e21fd64a38f1376c6a9",
"text": "The Bidirectional Reflectance Distribution Function (BRDF) describes the appearance of a material by its interaction with light at a surface point. A variety of analytical models have been proposed to represent BRDFs. However, analysis of these models has been scarce due to the lack of high-resolution measured data. In this work we evaluate several well-known analytical models in terms of their ability to fit measured BRDFs. We use an existing high-resolution data set of a hundred isotropic materials and compute the best approximation for each analytical model. Furthermore, we have built a new setup for efficient acquisition of anisotropic BRDFs, which allows us to acquire anisotropic materials at high resolution. We have measured four samples of anisotropic materials (brushed aluminum, velvet, and two satins). Based on the numerical errors, function plots, and rendered images we provide insights into the performance of the various models. We conclude that for most isotropic materials physically-based analytic reflectance models can represent their appearance quite well. We illustrate the important difference between the two common ways of defining the specular lobe: around the mirror direction and with respect to the half-vector. Our evaluation shows that the latter gives a more accurate shape for the reflection lobe. Our analysis of anisotropic materials indicates current parametric reflectance models cannot represent their appearances faithfully in many cases. We show that using a sampled microfacet distribution computed from measurements improves the fit and qualitatively reproduces the measurements.",
"title": ""
},
{
"docid": "6bd7a3d4b330972328257d958ec2730e",
"text": "Structured sparse coding and the related structured dictionary learning problems are novel research areas in machine learning. In this paper we present a new application of structured dictionary learning for collaborative filtering based recommender systems. Our extensive numerical experiments demonstrate that the presented method outperforms its state-of-the-art competitors and has several advantages over approaches that do not put structured constraints on the dictionary elements.",
"title": ""
},
{
"docid": "67b5bd59689c325365ac765a17886169",
"text": "L-Systems have traditionally been used as a popular method for the modelling of spacefilling curves, biological systems and morphogenesis. In this paper, we adapt string rewriting grammars based on L-Systems into a system for music composition. Representation of pitch, duration and timbre are encoded as grammar symbols, upon which a series of re-writing rules are applied. Parametric extensions to the grammar allow the specification of continuous data for the purposes of modulation and control. Such continuous data is also under control of the grammar. Using non-deterministic grammars with context sensitivity allows the simulation of Nth-order Markov models with a more economical representation than transition matrices and greater flexibility than previous composition models based on finite state automata or Petri nets. Using symbols in the grammar to represent relationships between notes, (rather than absolute notes) in combination with a hierarchical grammar representation, permits the emergence of complex music compositions from a relatively simple grammars.",
"title": ""
},
{
"docid": "ee37a743edd1b87d600dcf2d0050ca18",
"text": "Recommender systems play a crucial role in mitigating the problem of information overload by suggesting users' personalized items or services. The vast majority of traditional recommender systems consider the recommendation procedure as a static process and make recommendations following a fixed strategy. In this paper, we propose a novel recommender system with the capability of continuously improving its strategies during the interactions with users. We model the sequential interactions between users and a recommender system as a Markov Decision Process (MDP) and leverage Reinforcement Learning (RL) to automatically learn the optimal strategies via recommending trial-and-error items and receiving reinforcements of these items from users' feedback. Users' feedback can be positive and negative and both types of feedback have great potentials to boost recommendations. However, the number of negative feedback is much larger than that of positive one; thus incorporating them simultaneously is challenging since positive feedback could be buried by negative one. In this paper, we develop a novel approach to incorporate them into the proposed deep recommender system (DEERS) framework. The experimental results based on real-world e-commerce data demonstrate the effectiveness of the proposed framework. Further experiments have been conducted to understand the importance of both positive and negative feedback in recommendations.",
"title": ""
},
{
"docid": "4737fe7f718f79c74595de40f8778da2",
"text": "In this paper we describe a method of procedurally generating maps using Markov chains. This method learns statistical patterns from human-authored maps, which are assumed to be of high quality. Our method then uses those learned patterns to generate new maps. We present a collection of strategies both for training the Markov chains, and for generating maps from such Markov chains. We then validate our approach using the game Super Mario Bros., by evaluating the quality of the produced maps based on different configurations for training and generation.",
"title": ""
},
{
"docid": "e8f46d6e58c070965f83ca244e15c3d6",
"text": "OBJECTIVES\nUrinalysis is one of the most commonly performed tests in the clinical laboratory. However, manual microscopic sediment examination is labor-intensive, time-consuming, and lacks standardization in high-volume laboratories. In this study, the concordance of analyses between manual microscopic examination and two different automatic urine sediment analyzers has been evaluated.\n\n\nDESIGN AND METHODS\n209 urine samples were analyzed by the Iris iQ200 ELITE (İris Diagnostics, USA), Dirui FUS-200 (DIRUI Industrial Co., China) automatic urine sediment analyzers and by manual microscopic examination. The degree of concordance (Kappa coefficient) and the rates within the same grading were evaluated.\n\n\nRESULTS\nFor erythrocytes, leukocytes, epithelial cells, bacteria, crystals and yeasts, the degree of concordance between the two instruments was better than the degree of concordance between the manual microscopic method and the individual devices. There was no concordance between all methods for casts.\n\n\nCONCLUSION\nThe results from the automated analyzers for erythrocytes, leukocytes and epithelial cells were similar to the result of microscopic examination. However, in order to avoid any error or uncertainty, some images (particularly: dysmorphic cells, bacteria, yeasts, casts and crystals) have to be analyzed by manual microscopic examination by trained staff. Therefore, the software programs which are used in automatic urine sediment analysers need further development to recognize urinary shaped elements more accurately. Automated systems are important in terms of time saving and standardization.",
"title": ""
},
{
"docid": "bcbba4f99e33ac0daea893e280068304",
"text": "Arterial plasma glucose values throughout a 24-h period average approximately 90 mg/dl, with a maximal concentration usually not exceeding 165 mg/dl such as after meal ingestion1 and remaining above 55 mg/dl such as after exercise2 or a moderate fast (60 h).3 This relative stability contrasts with the situation for other substrates such as glycerol, lactate, free fatty acids, and ketone bodies whose fluctuations are much wider (Table 2.1).4 This narrow range defining normoglycemia is maintained through an intricate regulatory and counterregulatory neuro-hormonal system: A decrement in plasma glucose as little as 20 mg/dl (from 90 to 70 mg/dl) will suppress the release of insulin and will decrease glucose uptake in certain areas in the brain (e.g., hypothalamus where glucose sensors are located); this will activate the sympathetic nervous system and trigger the release of counterregulatory hormones (glucagon, catecholamines, cortisol, and growth hormone).5 All these changes will increase glucose release into plasma and decrease its removal so as to restore normoglycemia. On the other hand, a 10 mg/dl increment in plasma glucose will stimulate insulin release and suppress glucagon secretion to prevent further increments and restore normoglycemia. Glucose in plasma either comes from dietary sources or is either the result of the breakdown of glycogen in liver (glycogenolysis) or the formation of glucose in liver and kidney from other carbons compounds (precursors) such as lactate, pyruvate, amino acids, and glycerol (gluconeogenesis). In humans, glucose removed from plasma may have different fates in different tissues and under different conditions (e.g., postabsorptive vs. postprandial), but the pathways for its disposal are relatively limited. It (1) may be immediately stored as glycogen or (2) may undergo glycolysis, which can be non-oxidative producing pyruvate (which can be reduced to lactate or transaminated to form alanine) or oxidative through conversion to acetyl CoA which is further oxidized through the tricarboxylic acid cycle to form carbon dioxide and water. Non-oxidative glycolysis carbons undergo gluconeogenesis and the newly formed glucose is either stored as glycogen or released back into plasma (Fig. 2.1).",
"title": ""
},
{
"docid": "e17a1429f4ca9de808caaa842ee5a441",
"text": "Large scale visual understanding is challenging, as it requires a model to handle the widely-spread and imbalanced distribution of 〈subject, relation, object〉 triples. In real-world scenarios with large numbers of objects and relations, some are seen very commonly while others are barely seen. We develop a new relationship detection model that embeds objects and relations into two vector spaces where both discriminative capability and semantic affinity are preserved. We learn a visual and a semantic module that map features from the two modalities into a shared space, where matched pairs of features have to discriminate against those unmatched, but also maintain close distances to semantically similar ones. Benefiting from that, our model can achieve superior performance even when the visual entity categories scale up to more than 80, 000, with extremely skewed class distribution. We demonstrate the efficacy of our model on a large and imbalanced benchmark based of Visual Genome that comprises 53, 000+ objects and 29, 000+ relations, a scale at which no previous work has been evaluated at. We show superiority of our model over competitive baselines on the original Visual Genome dataset with 80, 000+ categories. We also show state-of-the-art performance on the VRD dataset and the scene graph dataset which is a subset of Visual Genome with 200 categories.",
"title": ""
},
{
"docid": "486e3f5614f69f60d8703d8641c73416",
"text": "The Great East Japan Earthquake and Tsunami drastically changed Japanese society, and the requirements for ICT was completely redefined. After the disaster, it was impossible for disaster victims to utilize their communication devices, such as cellular phones, tablet computers, or laptop computers, to notify their families and friends of their safety and confirm the safety of their loved ones since the communication infrastructures were physically damaged or lacked the energy necessary to operate. Due to this drastic event, we have come to realize the importance of device-to-device communications. With the recent increase in popularity of D2D communications, many research works are focusing their attention on a centralized network operated by network operators and neglect the importance of decentralized infrastructureless multihop communication, which is essential for disaster relief applications. In this article, we propose the concept of multihop D2D communication network systems that are applicable to many different wireless technologies, and clarify requirements along with introducing open issues in such systems. The first generation prototype of relay by smartphone can deliver messages using only users' mobile devices, allowing us to send out emergency messages from disconnected areas as well as information sharing among people gathered in evacuation centers. The success of field experiments demonstrates steady advancement toward realizing user-driven networking powered by communication devices independent of operator networks.",
"title": ""
},
{
"docid": "754163e498679e1d3c1449424c03a71f",
"text": "J. K. Strosnider P. Nandi S. Kumaran S. Ghosh A. Arsanjani The current approach to the design, maintenance, and governance of service-oriented architecture (SOA) solutions has focused primarily on flow-driven assembly and orchestration of reusable service components. The practical application of this approach in creating industry solutions has been limited, because flow-driven assembly and orchestration models are too rigid and static to accommodate complex, real-world business processes. Furthermore, the approach assumes a rich, easily configured library of reusable service components when in fact the development, maintenance, and governance of these libraries is difficult. An alternative approach pioneered by the IBM Research Division, model-driven business transformation (MDBT), uses a model-driven software synthesis technology to automatically generate production-quality business service components from high-level business process models. In this paper, we present the business entity life cycle analysis (BELA) technique for MDBT-based SOA solution realization and its integration into serviceoriented modeling and architecture (SOMA), the end-to-end method from IBM for SOA application and solution development. BELA shifts the process-modeling paradigm from one that is centered on activities to one that is centered on entities. BELA teams process subject-matter experts with IT and data architects to identify and specify business entities and decompose business processes. Supporting synthesis tools then automatically generate the interacting business entity service components and their associated data stores and service interface definitions. We use a large-scale project as an example demonstrating the benefits of this innovation, which include an estimated 40 percent project cost reduction and an estimated 20 percent reduction in cycle time when compared with conventional SOA approaches.",
"title": ""
}
] | scidocsrr |
111e970b027530331ee4320b8ecbc49f | Selection of K in K-means clustering | [
{
"docid": "ed9e22167d3e9e695f67e208b891b698",
"text": "ÐIn k-means clustering, we are given a set of n data points in d-dimensional space R and an integer k and the problem is to determine a set of k points in R, called centers, so as to minimize the mean squared distance from each data point to its nearest center. A popular heuristic for k-means clustering is Lloyd's algorithm. In this paper, we present a simple and efficient implementation of Lloyd's k-means clustering algorithm, which we call the filtering algorithm. This algorithm is easy to implement, requiring a kd-tree as the only major data structure. We establish the practical efficiency of the filtering algorithm in two ways. First, we present a data-sensitive analysis of the algorithm's running time, which shows that the algorithm runs faster as the separation between clusters increases. Second, we present a number of empirical studies both on synthetically generated data and on real data sets from applications in color quantization, data compression, and image segmentation. Index TermsÐPattern recognition, machine learning, data mining, k-means clustering, nearest-neighbor searching, k-d tree, computational geometry, knowledge discovery.",
"title": ""
},
{
"docid": "3e44a5c966afbeabff11b54bafcefdce",
"text": "In this paper, we aim to compare empirically four initialization methods for the K-Means algorithm: random, Forgy, MacQueen and Kaufman. Although this algorithm is known for its robustness, it is widely reported in literature that its performance depends upon two key points: initial clustering and instance order. We conduct a series of experiments to draw up (in terms of mean, maximum, minimum and standard deviation) the probability distribution of the square-error values of the nal clusters returned by the K-Means algorithm independently on any initial clustering and on any instance order when each of the four initialization methods is used. The results of our experiments illustrate that the random and the Kauf-man initialization methods outperform the rest of the compared methods as they make the K-Means more eeective and more independent on initial clustering and on instance order. In addition, we compare the convergence speed of the K-Means algorithm when using each of the four initialization methods. Our results suggest that the Kaufman initialization method induces to the K-Means algorithm a more desirable behaviour with respect to the convergence speed than the random initial-ization method.",
"title": ""
},
{
"docid": "651d048aaae1ce1608d3d9f0f09d4b9b",
"text": "We investigate here the behavior of the standard k-means clustering algorithm and several alternatives to it: the k-harmonic means algorithm due to Zhang and colleagues, fuzzy k-means, Gaussian expectation-maximization, and two new variants of k-harmonic means. Our aim is to find which aspects of these algorithms contribute to finding good clusterings, as opposed to converging to a low-quality local optimum. We describe each algorithm in a unified framework that introduces separate cluster membership and data weight functions. We then show that the algorithms do behave very differently from each other on simple low-dimensional synthetic datasets and image segmentation tasks, and that the k-harmonic means method is superior. Having a soft membership function is essential for finding high-quality clusterings, but having a non-constant data weight function is useful also.",
"title": ""
}
] | [
{
"docid": "1e042aca14a3412a4772761109cb6c10",
"text": "With increasing quality requirements for multimedia communications, audio codecs must maintain both high quality and low delay. Typically, audio codecs offer either low delay or high quality, but rarely both. We propose a codec that simultaneously addresses both these requirements, with a delay of only 8.7 ms at 44.1 kHz. It uses gain-shape algebraic vector quantization in the frequency domain with time-domain pitch prediction. We demonstrate that the proposed codec operating at 48 kb/s and 64 kb/s out-performs both G.722.1C and MP3 and has quality comparable to AAC-LD, despite having less than one fourth of the algorithmic delay of these codecs.",
"title": ""
},
{
"docid": "0dc3c4e628053e8f7c32c0074a2d1a59",
"text": "Understanding inter-character relationships is fundamental for understanding character intentions and goals in a narrative. This paper addresses unsupervised modeling of relationships between characters. We model relationships as dynamic phenomenon, represented as evolving sequences of latent states empirically learned from data. Unlike most previous work our approach is completely unsupervised. This enables data-driven inference of inter-character relationship types beyond simple sentiment polarities, by incorporating lexical and semantic representations, and leveraging large quantities of raw text. We present three models based on rich sets of linguistic features that capture various cues about relationships. We compare these models with existing techniques and also demonstrate that relationship categories learned by our model are semantically coherent.",
"title": ""
},
{
"docid": "9c7d3937b25c6be6480d52dec14bb4d5",
"text": "Worldwide the pros and cons of games and social behaviour are discussed. In Western countries the discussion is focussing on violent game and media content; in Japan on intensive game usage and the impact on the intellectual development of children. A lot is already discussed on the harmful and negative effects of entertainment technology on human behaviour, therefore we decided to focus primarily on the positive effects. Based on an online document search we could find and select 393 online available publications according the following categories: meta review (N=34), meta analysis (N=13), literature review (N=38), literature survey (N=36), empirical study (N=91), survey study (N=44), design study (N=91), any other document (N=46). In this paper a first preliminary overview over positive effects of entertainment technology on human behaviour is presented and discussed. The drawn recommendations can support developers and designers in entertainment industry.",
"title": ""
},
{
"docid": "9a86609ecefc5780a49ca638be4de64c",
"text": "In this paper, we propose an end-to-end capsule network for pixel level localization of actors and actions present in a video. The localization is performed based on a natural language query through which an actor and action are specified. We propose to encode both the video as well as textual input in the form of capsules, which provide more effective representation in comparison with standard convolution based features. We introduce a novel capsule based attention mechanism for fusion of video and text capsules for text selected video segmentation. The attention mechanism is performed via joint EM routing over video and text capsules for text selected actor and action localization. The existing works on actor-action localization are mainly focused on localization in a single frame instead of the full video. Different from existing works, we propose to perform the localization on all frames of the video. To validate the potential of the proposed network for actor and action localization on all the frames of a video, we extend an existing actor-action dataset (A2D) with annotations for all the frames. The experimental evaluation demonstrates the effectiveness of the proposed capsule network for text selective actor and action localization in videos, and it also improves upon the performance of the existing state-of-the art works on single frame-based localization. Figure 1: Overview of the proposed approach. For a given video, we want to localize the actor and action which are described by an input textual query. Capsules are extracted from both the video and the textual query, and a joint EM routing algorithm creates high level capsules, which are further used for localization of selected actors and actions.",
"title": ""
},
{
"docid": "5208762a8142de095c21824b0a395b52",
"text": "Battery storage (BS) systems are static energy conversion units that convert the chemical energy directly into electrical energy. They exist in our cars, laptops, electronic appliances, micro electricity generation systems and in many other mobile to stationary power supply systems. The economic advantages, partial sustainability and the portability of these units pose promising substitutes for backup power systems for hybrid vehicles and hybrid electricity generation systems. Dynamic behaviour of these systems can be analysed by using mathematical modeling and simulation software programs. Though, there have been many mathematical models presented in the literature and proved to be successful, dynamic simulation of these systems are still very exhaustive and time consuming as they do not behave according to specific mathematical models or functions. The charging and discharging of battery functions are a combination of exponential and non-linear nature. The aim of this research paper is to present a suitable convenient, dynamic battery model that can be used to model a general BS system. Proposed model is a new modified dynamic Lead-Acid battery model considering the effect of temperature and cyclic charging and discharging effects. Simulink has been used to study the characteristics of the system and the proposed system has proved to be very successful as the simulation results have been very good. Keywords—Simulink Matlab, Battery Model, Simulation, BS Lead-Acid, Dynamic modeling, Temperature effect, Hybrid Vehicles.",
"title": ""
},
{
"docid": "e1f531740891d47387a2fc2ef4f71c46",
"text": "Multi-dimensional arrays, or tensors, are increasingly found in fields such as signal processing and recommender systems. Real-world tensors can be enormous in size and often very sparse. There is a need for efficient, high-performance tools capable of processing the massive sparse tensors of today and the future. This paper introduces SPLATT, a C library with shared-memory parallelism for three-mode tensors. SPLATT contains algorithmic improvements over competing state of the art tools for sparse tensor factorization. SPLATT has a fast, parallel method of multiplying a matricide tensor by a Khatri-Rao product, which is a key kernel in tensor factorization methods. SPLATT uses a novel data structure that exploits the sparsity patterns of tensors. This data structure has a small memory footprint similar to competing methods and allows for the computational improvements featured in our work. We also present a method of finding cache-friendly reordering and utilizing them with a novel form of cache tiling. To our knowledge, this is the first work to investigate reordering and cache tiling in this context. SPLATT averages almost 30x speedup compared to our baseline when using 16 threads and reaches over 80x speedup on NELL-2.",
"title": ""
},
{
"docid": "6c4944ebd75404a0f3b2474e346677f1",
"text": "Wireless industry nowadays is facing two major challenges: 1) how to support the vertical industry applications so that to expand the wireless industry market and 2) how to further enhance device capability and user experience. In this paper, we propose a technology framework to address these challenges. The proposed technology framework is based on end-to-end vertical and horizontal slicing, where vertical slicing enables vertical industry and services and horizontal slicing improves system capacity and user experience. The technology development on vertical slicing has already started in late 4G and early 5G and is mostly focused on slicing the core network. We envision this trend to continue with the development of vertical slicing in the radio access network and the air interface. Moving beyond vertical slicing, we propose to horizontally slice the computation and communication resources to form virtual computation platforms for solving the network capacity scaling problem and enhancing device capability and user experience. In this paper, we explain the concept of vertical and horizontal slicing and illustrate the slicing techniques in the air interface, the radio access network, the core network and the computation platform. This paper aims to initiate the discussion on the long-range technology roadmap and spur development on the solutions for E2E network slicing in 5G and beyond.",
"title": ""
},
{
"docid": "bc6fc806fefc8298b8969f7a5f5b9e8b",
"text": "Short text is usually expressed in refined slightly, insufficient information, which makes text classification difficult. But we can try to introduce some information from the existing knowledge base to strengthen the performance of short text classification. Wikipedia [2,13,15] is now the largest human-edited knowledge base of high quality. It would benefit to short text classification if we can make full use of Wikipedia information in short text classification. This paper presents a new concept based [22] on Wikipedia short text representation method, by identifying the concept of Wikipedia mentioned in short text, and then expand the concept of wiki correlation and short text messages to the feature vector representation.",
"title": ""
},
{
"docid": "50e7ca7394db235909d657495bb11de2",
"text": "Radar is an attractive technology for long term monitoring of human movement as it operates remotely, can be placed behind walls and is able to monitor a large area depending on its operating parameters. A radar signal reflected off a moving person carries rich information on his or her activity pattern in the form of a set of Doppler frequency signatures produced by the specific combination of limbs and torso movements. To enable classification and efficient storage and transmission of movement data, unique parameters have to be extracted from the Doppler signatures. Two of the most important human movement parameters for activity identification and classification are the velocity profile and the fundamental cadence frequency of the movement pattern. However, the complicated pattern of limbs and torso movement worsened by multipath propagation in indoor environment poses a challenge for the extraction of these human movement parameters. In this paper, three new approaches for the estimation of human walking velocity profile in indoor environment are proposed and discussed. The first two methods are based on spectrogram estimates whereas the third method is based on phase difference computation. In addition, a method to estimate the fundamental cadence frequency of the gait is suggested and discussed. The accuracy of the methods are evaluated and compared in an indoor experiment using a flexible and low-cost software defined radar platform. The results obtained indicate that the velocity estimation methods are able to estimate the velocity profile of the person’s translational motion with an error of less than 10%. The results also showed that the fundamental cadence is estimated with an error of 7%.",
"title": ""
},
{
"docid": "90d5aca626d61806c2af3cc551b28c90",
"text": "This paper presents two novel approaches to increase performance bounds of image steganography under the criteria of minimizing distortion. First, in order to efficiently use the images’ capacities, we propose using parallel images in the embedding stage. The result is then used to prove sub-optimality of the message distribution technique used by all cost based algorithms including HUGO, S-UNIWARD, and HILL. Second, a new distribution approach is presented to further improve the security of these algorithms. Experiments show that this distribution method avoids embedding in smooth regions and thus achieves a better performance, measured by state-of-the-art steganalysis, when compared with the current used distribution.",
"title": ""
},
{
"docid": "a70475e2799b0a439e63382abcd90bd4",
"text": "Nonabelian group-based public key cryptography is a relatively new and exciting research field. Rapidly increasing computing power and the futurity quantum computers [52] that have since led to, the security of public key cryptosystems in use today, will be questioned. Research in new cryptographic methods is also imperative. Research on nonabelian group-based cryptosystems will become one of contemporary research priorities. Many innovative ideas for them have been presented for the past two decades, and many corresponding problems remain to be resolved. The purpose of this paper, is to present a survey of the nonabelian group-based public key cryptosystems with the corresponding problems of security. We hope that readers can grasp the trend that is examined in this study.",
"title": ""
},
{
"docid": "1a59bf4467e73a6cae050e5670dbf4fa",
"text": "BACKGROUND\nNivolumab combined with ipilimumab resulted in longer progression-free survival and a higher objective response rate than ipilimumab alone in a phase 3 trial involving patients with advanced melanoma. We now report 3-year overall survival outcomes in this trial.\n\n\nMETHODS\nWe randomly assigned, in a 1:1:1 ratio, patients with previously untreated advanced melanoma to receive nivolumab at a dose of 1 mg per kilogram of body weight plus ipilimumab at a dose of 3 mg per kilogram every 3 weeks for four doses, followed by nivolumab at a dose of 3 mg per kilogram every 2 weeks; nivolumab at a dose of 3 mg per kilogram every 2 weeks plus placebo; or ipilimumab at a dose of 3 mg per kilogram every 3 weeks for four doses plus placebo, until progression, the occurrence of unacceptable toxic effects, or withdrawal of consent. Randomization was stratified according to programmed death ligand 1 (PD-L1) status, BRAF mutation status, and metastasis stage. The two primary end points were progression-free survival and overall survival in the nivolumab-plus-ipilimumab group and in the nivolumab group versus the ipilimumab group.\n\n\nRESULTS\nAt a minimum follow-up of 36 months, the median overall survival had not been reached in the nivolumab-plus-ipilimumab group and was 37.6 months in the nivolumab group, as compared with 19.9 months in the ipilimumab group (hazard ratio for death with nivolumab plus ipilimumab vs. ipilimumab, 0.55 [P<0.001]; hazard ratio for death with nivolumab vs. ipilimumab, 0.65 [P<0.001]). The overall survival rate at 3 years was 58% in the nivolumab-plus-ipilimumab group and 52% in the nivolumab group, as compared with 34% in the ipilimumab group. The safety profile was unchanged from the initial report. Treatment-related adverse events of grade 3 or 4 occurred in 59% of the patients in the nivolumab-plus-ipilimumab group, in 21% of those in the nivolumab group, and in 28% of those in the ipilimumab group.\n\n\nCONCLUSIONS\nAmong patients with advanced melanoma, significantly longer overall survival occurred with combination therapy with nivolumab plus ipilimumab or with nivolumab alone than with ipilimumab alone. (Funded by Bristol-Myers Squibb and others; CheckMate 067 ClinicalTrials.gov number, NCT01844505 .).",
"title": ""
},
{
"docid": "3b2ddbef9ee3e5db60e2b315064a02c3",
"text": "It is indispensable to understand and analyze industry structure and company relations from documents, such as news articles, in order to make management decisions concerning supply chains, selection of business partners, etc. Analysis of company relations from news articles requires both a macro-viewpoint, e.g., overviewing competitor groups, and a micro-viewpoint, e.g., grasping the descriptions of the relationship between a specific pair of companies collaborating. Research has typically focused on only the macro-viewpoint, classifying each company pair into a specific relation type. In this paper, to support company relation analysis from both macro-and micro-viewpoints, we propose a method that extracts collaborative/competitive company pairs from individual sentences in Web news articles by applying a Markov logic network and gather extracted relations from each company pair. By this method, we are able not only to perform clustering of company pairs into competitor groups based on the dominant relations of each pair (macro-viewpoint) but also to know how each company pair is described in individual sentences (micro-viewpoint). We empirically confirmed that the proposed method is feasible through analysis of 4,661 Web news articles on the semiconductor and related industries.",
"title": ""
},
{
"docid": "d0bb31d79a7c93f67f7d11d6abee50cb",
"text": "The chapter introduces the book explaining its purposes and significance, framing it within the current literature related to Location-Based Mobile Games. It further clarifies the methodology of the study on the ground of this work and summarizes the content of each chapter.",
"title": ""
},
{
"docid": "b73c1b51f0f74c3b27b8d3d58c14e600",
"text": "Water balance of the terrestrial isopod, Armadillidium vulgare, was investigated during conglobation (rolling-up behavior). Water loss and metabolic rates were measured at 18 +/- 1 degrees C in dry air using flow-through respirometry. Water-loss rates decreased 34.8% when specimens were in their conglobated form, while CO2 release decreased by 37.1%. Water loss was also measured gravimetrically at humidities ranging from 6 to 75 %RH. Conglobation was associated with a decrease in water-loss rates up to 53 %RH, but no significant differences were observed at higher humidities. Our findings suggest that conglobation behavior may help to conserve water, in addition to its demonstrated role in protection from predation.",
"title": ""
},
{
"docid": "1d04def7d22e9f915d825551aa10b077",
"text": "Recent advances in wireless networking technologies and the growing success of mobile computing devices, such as laptop computers, third generation mobile phones, personal digital assistants, watches and the like, are enabling new classes of applications that present challenging problems to designers. Mobile devices face temporary loss of network connectivity when they move; they are likely to have scarce resources, such as low battery power, slow CPU speed and little memory; they are required to react to frequent and unannounced changes in the environment, such as high variability of network bandwidth, and in the remote resources availability, and so on. To support designers building mobile applications, research in the field of middleware systems has proliferated. Middleware aims at facilitating communication and coordination of distributed components, concealing difficulties raised by mobility from application engineers as much as possible. In this survey, we examine characteristics of mobile distributed systems and distinguish them from their fixed counterpart. We introduce a framework and a categorization of the various middleware systems designed to support mobility, and we present a detailed and comparative review of the major results reached in this field. An analysis of current trends inside the mobile middleware community and a discussion of further directions of research conclude the survey.",
"title": ""
},
{
"docid": "27ea4d25d672b04632c53c711afe0ceb",
"text": "Many advancements have been taking place in unmanned aerial vehicle (UAV) technology lately. This is leading towards the design and development of UAVs with various sizes that possess increased on-board processing, memory, storage, and communication capabilities. Consequently, UAVs are increasingly being used in a vast amount of commercial, military, civilian, agricultural, and environmental applications. However, to take full advantages of their services, these UAVs must be able to communicate efficiently with each other using UAV-to-UAV (U2U) communication and with existing networking infrastructures using UAV-to-Infrastructure (U2I) communication. In this paper, we identify the functions, services and requirements of UAV-based communication systems. We also present networking architectures, underlying frameworks, and data traffic requirements in these systems as well as outline the various protocols and technologies that can be used at different UAV communication links and networking layers. In addition, the paper discusses middleware layer services that can be provided in order to provide seamless communication and support heterogeneous network interfaces. Furthermore, we discuss a new important area of research, which involves the use of UAVs in collecting data from wireless sensor networks (WSNs). We discuss and evaluate several approaches that can be used to collect data from different types of WSNs including topologies such as linear sensor networks (LSNs), geometric and clustered WSNs. We outline the benefits of using UAVs for this function, which include significantly decreasing sensor node energy consumption, lower interference, and offers considerably increased flexibility in controlling the density of the deployed nodes since the need for the multihop approach for sensor-tosink communication is either eliminated or significantly reduced. Consequently, UAVs can provide good connectivity to WSN clusters.",
"title": ""
},
{
"docid": "c9398b3dad75ba85becbec379a65a219",
"text": "Passwords are still the predominant mode of authentication in contemporary information systems, despite a long list of problems associated with their insecurity. Their primary advantage is the ease of use and the price of implementation, compared to other systems of authentication (e.g. two-factor, biometry, …). In this paper we present an analysis of passwords used by students of one of universities and their resilience against brute force and dictionary attacks. The passwords were obtained from a university's computing center in plaintext format for a very long period - first passwords were created before 1980. The results show that early passwords are extremely easy to crack: the percentage of cracked passwords is above 95 % for those created before 2006. Surprisingly, more than 40 % of passwords created in 2014 were easily broken within a few hours. The results show that users - in our case students, despite positive trends, still choose easy to break passwords. This work contributes to loud warnings that a shift from traditional password schemes to more elaborate systems is needed.",
"title": ""
},
{
"docid": "ae8f26a5ab75e11f86d295c2beaa2189",
"text": "BACKGROUND\nThe neonatal and pediatric antimicrobial point prevalence survey (PPS) of the Antibiotic Resistance and Prescribing in European Children project (http://www.arpecproject.eu/) aims to standardize a method for surveillance of antimicrobial use in children and neonates admitted to the hospital within Europe. This article describes the audit criteria used and reports overall country-specific proportions of antimicrobial use. An analytical review presents methodologies on antimicrobial use.\n\n\nMETHODS\nA 1-day PPS on antimicrobial use in hospitalized children was organized in September 2011, using a previously validated and standardized method. The survey included all inpatient pediatric and neonatal beds and identified all children receiving an antimicrobial treatment on the day of survey. Mandatory data were age, gender, (birth) weight, underlying diagnosis, antimicrobial agent, dose and indication for treatment. Data were entered through a web-based system for data-entry and reporting, based on the WebPPS program developed for the European Surveillance of Antimicrobial Consumption project.\n\n\nRESULTS\nThere were 2760 and 1565 pediatric versus 1154 and 589 neonatal inpatients reported among 50 European (n = 14 countries) and 23 non-European hospitals (n = 9 countries), respectively. Overall, antibiotic pediatric and neonatal use was significantly higher in non-European (43.8%; 95% confidence interval [CI]: 41.3-46.3% and 39.4%; 95% CI: 35.5-43.4%) compared with that in European hospitals (35.4; 95% CI: 33.6-37.2% and 21.8%; 95% CI: 19.4-24.2%). Proportions of antibiotic use were highest in hematology/oncology wards (61.3%; 95% CI: 56.2-66.4%) and pediatric intensive care units (55.8%; 95% CI: 50.3-61.3%).\n\n\nCONCLUSIONS\nAn Antibiotic Resistance and Prescribing in European Children standardized web-based method for a 1-day PPS was successfully developed and conducted in 73 hospitals worldwide. It offers a simple, feasible and sustainable way of data collection that can be used globally.",
"title": ""
}
] | scidocsrr |
d3b2ea56837b774bdd1ba56a171bd547 | Automating image segmentation verification and validation by learning test oracles | [
{
"docid": "a5fc5e1bf35863d030b20c219732bc2b",
"text": "Measures of overlap of labelled regions of images, such as the Dice and Tanimoto coefficients, have been extensively used to evaluate image registration and segmentation algorithms. Modern studies can include multiple labels defined on multiple images yet most evaluation schemes report one overlap per labelled region, simply averaged over multiple images. In this paper, common overlap measures are generalized to measure the total overlap of ensembles of labels defined on multiple test images and account for fractional labels using fuzzy set theory. This framework allows a single \"figure-of-merit\" to be reported which summarises the results of a complex experiment by image pair, by label or overall. A complementary measure of error, the overlap distance, is defined which captures the spatial extent of the nonoverlapping part and is related to the Hausdorff distance computed on grey level images. The generalized overlap measures are validated on synthetic images for which the overlap can be computed analytically and used as similarity measures in nonrigid registration of three-dimensional magnetic resonance imaging (MRI) brain images. Finally, a pragmatic segmentation ground truth is constructed by registering a magnetic resonance atlas brain to 20 individual scans, and used with the overlap measures to evaluate publicly available brain segmentation algorithms",
"title": ""
},
{
"docid": "892cfde6defce89783f0c290df4822f2",
"text": "Metamorphic testing has been shown to be a simple yet effective technique in addressing the quality assurance of applications that do not have test oracles, i.e., for which it is difficult or impossible to know what the correct output should be for arbitrary input. In metamorphic testing, existing test case input is modified to produce new test cases in such a manner that, when given the new input, the application should produce an output that can easily be computed based on the original output. That is, if input x produces output f(x), then we create input x' such that we can predict f(x') based on f(x); if the application does not produce the expected output, then a defect must exist, and either f(x), or f(x') (or both) is wrong.\n In practice, however, metamorphic testing can be a manually intensive technique for all but the simplest cases. The transformation of input data can be laborious for large data sets, or practically impossible for input that is not in human-readable format. Similarly, comparing the outputs can be error-prone for large result sets, especially when slight variations in the results are not actually indicative of errors (i.e., are false positives), for instance when there is non-determinism in the application and multiple outputs can be considered correct.\n In this paper, we present an approach called Automated Metamorphic System Testing. This involves the automation of metamorphic testing at the system level by checking that the metamorphic properties of the entire application hold after its execution. The tester is able to easily set up and conduct metamorphic tests with little manual intervention, and testing can continue in the field with minimal impact on the user. Additionally, we present an approach called Heuristic Metamorphic Testing which seeks to reduce false positives and address some cases of non-determinism. We also describe an implementation framework called Amsterdam, and present the results of empirical studies in which we demonstrate the effectiveness of the technique on real-world programs without test oracles.",
"title": ""
},
{
"docid": "d4aaea0107cbebd7896f4cb57fa39c05",
"text": "A novel method is proposed for performing multilabel, interactive image segmentation. Given a small number of pixels with user-defined (or predefined) labels, one can analytically and quickly determine the probability that a random walker starting at each unlabeled pixel will first reach one of the prelabeled pixels. By assigning each pixel to the label for which the greatest probability is calculated, a high-quality image segmentation may be obtained. Theoretical properties of this algorithm are developed along with the corresponding connections to discrete potential theory and electrical circuits. This algorithm is formulated in discrete space (i.e., on a graph) using combinatorial analogues of standard operators and principles from continuous potential theory, allowing it to be applied in arbitrary dimension on arbitrary graphs",
"title": ""
}
] | [
{
"docid": "69d42340c09303b69eafb19de7170159",
"text": "Based on an example of translational motion, this report shows how to model and initialize the Kalman Filter. Basic rules about physical motion are introduced to point out, that the well-known laws of physical motion are a mere approximation. Hence, motion of non-constant velocity or acceleration is modelled by additional use of white noise. Special attention is drawn to the matrix initialization for use in the Kalman Filter, as, in general, papers and books do not give any hint on this; thus inducing the impression that initializing is not important and may be arbitrary. For unknown matrices many users of the Kalman Filter choose the unity matrix. Sometimes it works, sometimes it does not. In order to close this gap, initialization is shown on the example of human interactive motion. In contrast to measuring instruments with documented measurement errors in manuals, the errors generated by vision-based sensoring must be estimated carefully. Of course, the described methods may be adapted to other circumstances.",
"title": ""
},
{
"docid": "47501c171c7b3f8e607550c958852be1",
"text": "Fundus images provide an opportunity for early detection of diabetes. Generally, retina fundus images of diabetic patients exhibit exudates, which are lesions indicative of Diabetic Retinopathy (DR). Therefore, computational tools can be considered to be used in assisting ophthalmologists and medical doctor for the early screening of the disease. Hence in this paper, we proposed visualisation of exudates in fundus images using radar chart and Color Auto Correlogram (CAC) technique. The proposed technique requires that the Optic Disc (OD) from the fundus image be removed. Next, image normalisation was performed to standardise the colors in the fundus images. The exudates from the modified image are then extracted using Artificial Neural Network (ANN) and visualised using radar chart and CAC technique. The proposed technique was tested on 149 images of the publicly available MESSIDOR database. Experimental results suggest that the method has potential to be used for early indication of DR, by visualising the overlap between CAC features of the fundus images.",
"title": ""
},
{
"docid": "07e91583f63660a6b4aa4bb2063bd2b7",
"text": "ScanSAR interferometry is an attractive option for efficient topographic mapping of large areas and for monitoring of large-scale motions. Only ScanSAR interferometry made it possible to map almost the entire landmass of the earth in the 11-day Shuttle Radar Topography Mission. Also the operational satellites RADARSAT and ENVISAT offer ScanSAR imaging modes and thus allow for repeat-pass ScanSAR interferometry. This paper gives a complete description of ScanSAR and burst-mode interferometric signal properties and compares different processing algorithms. The problems addressed are azimuth scanning pattern synchronization, spectral shift filtering in the presence of high squint, Doppler centroid estimation, different phase-preserving ScanSAR processing algorithms, ScanSAR interferogram formation, coregistration, and beam alignment. Interferograms and digital elevation models from RADARSAT ScanSAR Narrow modes are presented. The novel “pack-and-go” algorithm for efficient burst-mode range processing and a new time-variant fast interpolator for interferometric coregistration are introduced.",
"title": ""
},
{
"docid": "8cf10c84e6e389c0c10238477c619175",
"text": "Based on self-determination theory, this study proposes and tests a motivational model of intraindividual changes in teacher burnout (emotional exhaustion, depersonalization, and reduced personal accomplishment). Participants were 806 French-Canadian teachers in public elementary and high schools. Results show that changes in teachers’ perceptions of classroom overload and students’ disruptive behavior are negatively related to changes in autonomous motivation, which in turn negatively predict changes in emotional exhaustion. Results also indicate that changes in teachers’ perceptions of students’ disruptive behaviors and school principal’s leadership behaviors are related to changes in self-efficacy, which in turn negatively predict changes in three burnout components. 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "98e9d8fb4a04ad141b3a196fe0a9c08b",
"text": "ÐGraphs are a powerful and universal data structure useful in various subfields of science and engineering. In this paper, we propose a new algorithm for subgraph isomorphism detection from a set of a priori known model graphs to an input graph that is given online. The new approach is based on a compact representation of the model graphs that is computed offline. Subgraphs that appear multiple times within the same or within different model graphs are represented only once, thus reducing the computational effort to detect them in an input graph. In the extreme case where all model graphs are highly similar, the run-time of the new algorithm becomes independent of the number of model graphs. Both a theoretical complexity analysis and practical experiments characterizing the performance of the new approach will be given. Index TermsÐGraph matching, graph isomorphism, subgraph isomorphism, preprocessing.",
"title": ""
},
{
"docid": "5507f3199296478abbc6e106943a53ba",
"text": "Hiding a secret is needed in many situations. One might need to hide a password, an encryption key, a secret recipe, and etc. Information can be secured with encryption, but the need to secure the secret key used for such encryption is important too. Imagine you encrypt your important files with one secret key and if such a key is lost then all the important files will be inaccessible. Thus, secure and efficient key management mechanisms are required. One of them is secret sharing scheme (SSS) that lets you split your secret into several parts and distribute them among selected parties. The secret can be recovered once these parties collaborate in some way. This paper will study these schemes and explain the need for them and their security. Across the years, various schemes have been presented. This paper will survey some of them varying from trivial schemes to threshold based ones. Explanations on these schemes constructions are presented. The paper will also look at some applications of SSS.",
"title": ""
},
{
"docid": "7275ce89ea2f5ab8eb8b6651e2487dcb",
"text": "A major challenge of semantic parsing is the vocabulary mismatch problem between natural language and target ontology. In this paper, we propose a sentence rewriting based semantic parsing method, which can effectively resolve the mismatch problem by rewriting a sentence into a new form which has the same structure with its target logical form. Specifically, we propose two sentence-rewriting methods for two common types of mismatch: a dictionary-based method for 1N mismatch and a template-based method for N-1 mismatch. We evaluate our sentence rewriting based semantic parser on the benchmark semantic parsing dataset – WEBQUESTIONS. Experimental results show that our system outperforms the base system with a 3.4% gain in F1, and generates logical forms more accurately and parses sentences more robustly.",
"title": ""
},
{
"docid": "f0c9db6cab187463162c8bba71ea011a",
"text": "Traditional Network-on-Chips (NoCs) employ simple arbitration strategies, such as round-robin or oldest-first, to decide which packets should be prioritized in the network. This is counter-intuitive since different packets can have very different effects on system performance due to, e.g., different level of memory-level parallelism (MLP) of applications. Certain packets may be performance-critical because they cause the processor to stall, whereas others may be delayed for a number of cycles with no effect on application-level performance as their latencies are hidden by other outstanding packets'latencies. In this paper, we define slack as a key measure that characterizes the relative importance of a packet. Specifically, the slack of a packet is the number of cycles the packet can be delayed in the network with no effect on execution time. This paper proposes new router prioritization policies that exploit the available slack of interfering packets in order to accelerate performance-critical packets and thus improve overall system performance. When two packets interfere with each other in a router, the packet with the lower slack value is prioritized. We describe mechanisms to estimate slack, prevent starvation, and combine slack-based prioritization with other recently proposed application-aware prioritization mechanisms.\n We evaluate slack-based prioritization policies on a 64-core CMP with an 8x8 mesh NoC using a suite of 35 diverse applications. For a representative set of case studies, our proposed policy increases average system throughput by 21.0% over the commonlyused round-robin policy. Averaged over 56 randomly-generated multiprogrammed workload mixes, the proposed policy improves system throughput by 10.3%, while also reducing application-level unfairness by 30.8%.",
"title": ""
},
{
"docid": "1bbd0eca854737c94e62442ee4cedac8",
"text": "Most convolutional neural networks (CNNs) lack midlevel layers that model semantic parts of objects. This limits CNN-based methods from reaching their full potential in detecting and utilizing small semantic parts in recognition. Introducing such mid-level layers can facilitate the extraction of part-specific features which can be utilized for better recognition performance. This is particularly important in the domain of fine-grained recognition. In this paper, we propose a new CNN architecture that integrates semantic part detection and abstraction (SPDACNN) for fine-grained classification. The proposed network has two sub-networks: one for detection and one for recognition. The detection sub-network has a novel top-down proposal method to generate small semantic part candidates for detection. The classification sub-network introduces novel part layers that extract features from parts detected by the detection sub-network, and combine them for recognition. As a result, the proposed architecture provides an end-to-end network that performs detection, localization of multiple semantic parts, and whole object recognition within one framework that shares the computation of convolutional filters. Our method outperforms state-of-theart methods with a large margin for small parts detection (e.g. our precision of 93.40% vs the best previous precision of 74.00% for detecting the head on CUB-2011). It also compares favorably to the existing state-of-the-art on finegrained classification, e.g. it achieves 85.14% accuracy on CUB-2011.",
"title": ""
},
{
"docid": "f9f54cf8c057d2d9f9b559eb62a94e38",
"text": "The proliferation of malware has presented a serious threat to the security of computer systems. Traditional signature-based anti-virus systems fail to detect polymorphic/metamorphic and new, previously unseen malicious executables. Data mining methods such as Naive Bayes and Decision Tree have been studied on small collections of executables. In this paper, resting on the analysis of Windows APIs called by PE files, we develop the Intelligent Malware Detection System (IMDS) using Objective-Oriented Association (OOA) mining based classification. IMDS is an integrated system consisting of three major modules: PE parser, OOA rule generator, and rule based classifier. An OOA_Fast_FP-Growth algorithm is adapted to efficiently generate OOA rules for classification. A comprehensive experimental study on a large collection of PE files obtained from the anti-virus laboratory of KingSoft Corporation is performed to compare various malware detection approaches. Promising experimental results demonstrate that the accuracy and efficiency of our IMDS system outperform popular anti-virus software such as Norton AntiVirus and McAfee VirusScan, as well as previous data mining based detection systems which employed Naive Bayes, Support Vector Machine (SVM) and Decision Tree techniques. Our system has already been incorporated into the scanning tool of KingSoft’s Anti-Virus software.",
"title": ""
},
{
"docid": "b18f5df68581789312d48c65ba7afb9d",
"text": "In this study, an efficient addressing scheme for radix-4 FFT processor is presented. The proposed method uses extra registers to buffer and reorder the data inputs of the butterfly unit. It avoids the modulo-r addition in the address generation; hence, the critical path is significantly shorter than the conventional radix-4 FFT implementations. A significant property of the proposed method is that the critical path of the address generator is independent from the FFT transform length N, making it extremely efficient for large FFT transforms. For performance evaluation, the new FFT architecture has been implemented by FPGA (Altera Stratix) hardware and also synthesized by CMOS 0.18µm technology. The results confirm the speed and area advantages for large FFTs. Although only radix-4 FFT address generation is presented in the paper, it can be used for higher radix FFT.",
"title": ""
},
{
"docid": "b68e09f879e51aad3ed0ce8b696da957",
"text": "The status of current model-driven engineering technologies has matured over the last years whereas the infrastructure supporting model management is still in its infancy. Infrastructural means include version control systems, which are successfully used for the management of textual artifacts like source code. Unfortunately, they are only limited suitable for models. Consequently, dedicated solutions emerge. These approaches are currently hard to compare, because no common quality measure has been established yet and no structured test cases are available. In this paper, we analyze the challenges coming along with merging different versions of one model and derive a first categorization of typical changes and the therefrom resulting conflicts. On this basis we create a set of test cases on which we apply state-of-the-art versioning systems and report our experiences.",
"title": ""
},
{
"docid": "a4197ab8a70142ac331599c506996bc9",
"text": "This paper presents the findings of two studies that replicate previous work by Fred Davis on the subject of perceived usefulness, ease of use, and usage of information technology. The two studies focus on evaluating the psychometric properties of the ease of use and usefulness scales, while examining the relationship between ease of use, usefulness, and system usage. Study 1 provides a strong assessment of the convergent validity of the two scales by examining heterogeneous user groups dealing with heterogeneous implementations of messaging technology. In addition, because one might expect users to share similar perspectives about voice and electronic mail, the study also represents a strong test of discriminant validity. In this study a total of 118 respondents from 10 different organizations were surveyed for their attitudes toward two messaging technologies: voice and electronic mail. Study 2 complements the approach taken in Study 1 by focusing on the ability to demonstrate discriminant validity. Three popular software applications (WordPerfect, Lotus 1-2-3, and Harvard Graphics) were examined based on the expectation that they would all be rated highly on both scales. In this study a total of 73 users rated the three packages in terms of ease of use and usefulness. The results of the studies demonstrate reliable and valid scales for measurement of perceived ease of use and usefulness. In addition, the paper tests the relationships between ease of use, usefulness, and usage using structural equation modelling. The results of this model are consistent with previous research for Study 1, suggesting that usefulness is an important determinant of system use. For Study 2 the results are somewhat mixed, but indicate the importance of both ease of use and usefulness. Differences in conditions of usage are explored to explain these findings.",
"title": ""
},
{
"docid": "d638bf6a0ec3354dd6ba90df0536aa72",
"text": "Selected elements of dynamical system (DS) theory approach to nonlinear time series analysis are introduced. Key role in this concept plays a method of time delay. The method enables us reconstruct phase space trajectory of DS without knowledge of its governing equations. Our variant is tested and compared with wellknown TISEAN package for Lorenz and Hénon systems. Introduction There are number of methods of nonlinear time series analysis (e.g. nonlinear prediction or noise reduction) that work in a phase space (PS) of dynamical systems. We assume that a given time series of some variable is generated by a dynamical system. A specific state of the system can be represented by a point in the phase space and time evolution of the system creates a trajectory in the phase space. From this point of view we consider our time series to be a projection of trajectory of DS to one (or more – when we have more simultaneously measured variables) coordinates of phase space. This view was enabled due to formulation of embedding theorem [1], [2] at the beginning of the 1980s. It says that it is possible to reconstruct the phase space from the time series. One of the most frequently used methods of phase space reconstruction is the method of time delay. The main task while using this method is to determine values of time delay τ and embedding dimension m. We tested individual steps of this method on simulated data generated by Lorenz and Hénon systems. We compared results computed by our own programs with outputs of program package TISEAN created by R. Hegger, H. Kantz, and T. Schreiber [3]. Method of time delay The most frequently used method of PS reconstruction is the method of time delay. If we have a time series of a scalar variable we construct a vector ( ) , ,..., 1 , N i t x i = in phase space in time ti as following: ( ) ( ) ( ) ( ) ( ) ( ) [ ], 1 ,..., 2 , , τ τ τ − + + + = m t x t x t x t x t i i i i i X where i goes from 1 to N – (m – 1)τ, τ is time delay, m is a dimension of reconstructed space (embedding dimension) and M = N – (m – 1)τ is number of points (states) in the phase space. According to embedding theorem, when this is done in a proper way, dynamics reconstructed using this formula is equivalent to the dynamics on an attractor in the origin phase space in the sense that characteristic invariants of the system are conserved. The time delay method and related aspects are described in literature, e.g. [4]. We estimated the two parameters—time delay and embedding dimension—using algorithms below. Choosing a time delay To determine a suitable time delay we used average mutual information (AMI), a certain generalization of autocorrelation function. Average mutual information between sets of measurements A and B is defined [5]:",
"title": ""
},
{
"docid": "57334078030a2b2d393a7c236d6a3a1c",
"text": "Neural Architecture Search (NAS) aims at finding one “single” architecture that achieves the best accuracy for a given task such as image recognition. In this paper, we study the instance-level variation, and demonstrate that instance-awareness is an important yet currently missing component of NAS. Based on this observation, we propose InstaNAS for searching toward instance-level architectures; the controller is trained to search and form a “distribution of architectures” instead of a single final architecture. Then during the inference phase, the controller selects an architecture from the distribution, tailored for each unseen image to achieve both high accuracy and short latency. The experimental results show that InstaNAS reduces the inference latency without compromising classification accuracy. On average, InstaNAS achieves 48.9% latency reduction on CIFAR-10 and 40.2% latency reduction on CIFAR-100 with respect to MobileNetV2 architecture.",
"title": ""
},
{
"docid": "51f47a5e873f7b24cd15aff4ceb8d35c",
"text": "We introduce the Adaptive Skills, Adaptive Partitions (ASAP) framework that (1) learns skills (i.e., temporally extended actions or options) as well as (2) where to apply them. We believe that both (1) and (2) are necessary for a truly general skill learning framework, which is a key building block needed to scale up to lifelong learning agents. The ASAP framework can also solve related new tasks simply by adapting where it applies its existing learned skills. We prove that ASAP converges to a local optimum under natural conditions. Finally, our experimental results, which include a RoboCup domain, demonstrate the ability of ASAP to learn where to reuse skills as well as solve multiple tasks with considerably less experience than solving each task from scratch.",
"title": ""
},
{
"docid": "148b7445ec2cd811d64fd81c61c20e02",
"text": "Using sensors to measure parameters of interest in rotating environments and communicating the measurements in real-time over wireless links, requires a reliable power source. In this paper, we have investigated the possibility to generate electric power locally by evaluating six different energy-harvesting technologies. The applicability of the technology is evaluated by several parameters that are important to the functionality in an industrial environment. All technologies are individually presented and evaluated, a concluding table is also summarizing the technologies strengths and weaknesses. To support the technology evaluation on a more theoretical level, simulations has been performed to strengthen our claims. Among the evaluated and simulated technologies, we found that the variable reluctance-based harvesting technology is the strongest candidate for further technology development for the considered use-case.",
"title": ""
},
{
"docid": "eab514f5951a9e2d3752002c7ba799d8",
"text": "In industrial fabric productions, automated real time systems are needed to find out the minor defects. It will save the cost by not transporting defected products and also would help in making compmay image of quality fabrics by sending out only undefected products. A real time fabric defect detection system (FDDS), implementd on an embedded DSP platform is presented here. Textural features of fabric image are extracted based on gray level co-occurrence matrix (GLCM). A sliding window technique is used for defect detection where window moves over the whole image computing a textural energy from the GLCM of the fabric image. The energy values are compared to a reference and the deviations beyond a threshold are reported as defects and also visually represented by a window. The implementation is carried out on a TI TMS320DM642 platform and programmed using code composer studio software. The real time output of this implementation was shown on a monitor. KeywordsFabric Defects, Texture, Grey Level Co-occurrence Matrix, DSP Kit, Energy Computation, Sliding Window, FDDS",
"title": ""
},
{
"docid": "5ae1191a27958704ab5f33749c6b30b5",
"text": "Much of Bluetooth’s data remains confidential in practice due to the difficulty of eavesdropping it. We present mechanisms for doing so, therefore eliminating the data confidentiality properties of the protocol. As an additional security measure, devices often operate in “undiscoverable mode” in order to hide their identity and provide access control. We show how the full MAC address of such master devices can be obtained, therefore bypassing the access control of this feature. Our work results in the first open-source Bluetooth sniffer.",
"title": ""
},
{
"docid": "657087aaadc0537e9fb19c422c27b485",
"text": "Swarms of embedded devices provide new challenges for privacy and security. We propose Permissioned Blockchains as an effective way to secure and manage these systems of systems. A long view of blockchain technology yields several requirements absent in extant blockchain implementations. Our approach to Permissioned Blockchains meets the fundamental requirements for longevity, agility, and incremental adoption. Distributed Identity Management is an inherent feature of our Permissioned Blockchain and provides for resilient user and device identity and attribute management.",
"title": ""
}
] | scidocsrr |
cfa5df626c7295941eb72c22ff6b61cf | Fast and robust face recognition via coding residual map learning based adaptive masking | [
{
"docid": "da416ce58897f6f86d9cd7b0de422508",
"text": "In linear representation based face recognition (FR), it is expected that a discriminative dictionary can be learned from the training samples so that the query sample can be better represented for classification. On the other hand, dimensionality reduction is also an important issue for FR. It can not only reduce significantly the storage space of face images, but also enhance the discrimination of face feature. Existing methods mostly perform dimensionality reduction and dictionary learning separately, which may not fully exploit the discriminative information in the training samples. In this paper, we propose to learn jointly the projection matrix for dimensionality reduction and the discriminative dictionary for face representation. The joint learning makes the learned projection and dictionary better fit with each other so that a more effective face classification can be obtained. The proposed algorithm is evaluated on benchmark face databases in comparison with existing linear representation based methods, and the results show that the joint learning improves the FR rate, particularly when the number of training samples per class is small.",
"title": ""
},
{
"docid": "7fdb4e14a038b11bb0e92917d1e7ce70",
"text": "Recently the sparse representation (or coding) based classification (SRC) has been successfully used in face recognition. In SRC, the testing image is represented as a sparse linear combination of the training samples, and the representation fidelity is measured by the l2-norm or l1-norm of coding residual. Such a sparse coding model actually assumes that the coding residual follows Gaussian or Laplacian distribution, which may not be accurate enough to describe the coding errors in practice. In this paper, we propose a new scheme, namely the robust sparse coding (RSC), by modeling the sparse coding as a sparsity-constrained robust regression problem. The RSC seeks for the MLE (maximum likelihood estimation) solution of the sparse coding problem, and it is much more robust to outliers (e.g., occlusions, corruptions, etc.) than SRC. An efficient iteratively reweighted sparse coding algorithm is proposed to solve the RSC model. Extensive experiments on representative face databases demonstrate that the RSC scheme is much more effective than state-of-the-art methods in dealing with face occlusion, corruption, lighting and expression changes, etc.",
"title": ""
},
{
"docid": "7655df3f32e6cf7a5545ae2231f71e7c",
"text": "Many problems in information processing involve some form of dimensionality reduction. In this thesis, we introduce Locality Preserving Projections (LPP). These are linear projective maps that arise by solving a variational problem that optimally preserves the neighborhood structure of the data set. LPP should be seen as an alternative to Principal Component Analysis (PCA) – a classical linear technique that projects the data along the directions of maximal variance. When the high dimensional data lies on a low dimensional manifold embedded in the ambient space, the Locality Preserving Projections are obtained by finding the optimal linear approximations to the eigenfunctions of the Laplace Beltrami operator on the manifold. As a result, LPP shares many of the data representation properties of nonlinear techniques such as Laplacian Eigenmaps or Locally Linear Embedding. Yet LPP is linear and more crucially is defined everywhere in ambient space rather than just on the training data points. Theoretical analysis shows that PCA, LPP, and Linear Discriminant Analysis (LDA) can be obtained from different graph models. Central to this is a graph structure that is inferred on the data points. LPP finds a projection that respects this graph structure. We have applied our algorithms to several real world applications, e.g. face analysis and document representation.",
"title": ""
}
] | [
{
"docid": "9cb682049f4a4d1291189b7cfccafb1e",
"text": "The sequencing by hybridization (SBH) of determining the order in which nucleotides should occur on a DNA string is still under discussion for enhancements on computational intelligence although the next generation of DNA sequencing has come into existence. In the last decade, many works related to graph theory-based DNA sequencing have been carried out in the literature. This paper proposes a method for SBH by integrating hypergraph with genetic algorithm (HGGA) for designing a novel analytic technique to obtain DNA sequence from its spectrum. The paper represents elements of the spectrum and its relation as hypergraph and applies the unimodular property to ensure the compatibility of relations between l-mers. The hypergraph representation and unimodular property are bound with the genetic algorithm that has been customized with a novel selection and crossover operator reducing the computational complexity with accelerated convergence. Subsequently, upon determining the primary strand, an anti-homomorphism is invoked to find the reverse complement of the sequence. The proposed algorithm is implemented in the GenBank BioServer datasets, and the results are found to prove the efficiency of the algorithm. The HGGA is a non-classical algorithm with significant advantages and computationally attractive complexity reductions ranging to $$O(n^{2} )$$ O ( n 2 ) with improved accuracy that makes it prominent for applications other than DNA sequencing like image processing, task scheduling and big data processing.",
"title": ""
},
{
"docid": "b3962fd4000fced796f3764d009c929e",
"text": "Low-field extremity magnetic resonance imaging (lfMRI) is currently commercially available and has been used clinically to evaluate rheumatoid arthritis (RA). However, one disadvantage of this new modality is that the field of view (FOV) is too small to assess hand and wrist joints simultaneously. Thus, we have developed a new lfMRI system, compacTscan, with a FOV that is large enough to simultaneously assess the entire wrist to proximal interphalangeal joint area. In this work, we examined its clinical value compared to conventional 1.5 tesla (T) MRI. The comparison involved evaluating three RA patients by both 0.3 T compacTscan and 1.5 T MRI on the same day. Bone erosion, bone edema, and synovitis were estimated by our new compact MRI scoring system (cMRIS) and the kappa coefficient was calculated on a joint-by-joint basis. We evaluated a total of 69 regions. Bone erosion was detected in 49 regions by compacTscan and in 48 regions by 1.5 T MRI, while the total erosion score was 77 for compacTscan and 76.5 for 1.5 T MRI. These findings point to excellent agreement between the two techniques (kappa = 0.833). Bone edema was detected in 14 regions by compacTscan and in 19 by 1.5 T MRI, and the total edema score was 36.25 by compacTscan and 47.5 by 1.5 T MRI. Pseudo-negative findings were noted in 5 regions. However, there was still good agreement between the techniques (kappa = 0.640). Total number of evaluated joints was 33. Synovitis was detected in 13 joints by compacTscan and 14 joints by 1.5 T MRI, while the total synovitis score was 30 by compacTscan and 32 by 1.5 T MRI. Thus, although 1 pseudo-positive and 2 pseudo-negative findings resulted from the joint evaluations, there was again excellent agreement between the techniques (kappa = 0.827). Overall, the data obtained by our compacTscan system showed high agreement with those obtained by conventional 1.5 T MRI with regard to diagnosis and the scoring of bone erosion, edema, and synovitis. We conclude that compacTscan is useful for diagnosis and estimation of disease activity in patients with RA.",
"title": ""
},
{
"docid": "3a52576a2fdaa7f6f9632dc8c4bf0971",
"text": "As known, fractional CO2 resurfacing treatments are more effective than non-ablative ones against aging signs, but post-operative redness and swelling prolong the overall downtime requiring up to steroid administration in order to reduce these local systems. In the last years, an increasing interest has been focused on the possible use of probiotics for treating inflammatory and allergic conditions suggesting that they can exert profound beneficial effects on skin homeostasis. In this work, the Authors report their experience on fractional CO2 laser resurfacing and provide the results of a new post-operative topical treatment with an experimental cream containing probiotic-derived active principles potentially able to modulate the inflammatory reaction associated to laser-treatment. The cream containing DermaACB (CERABEST™) was administered post-operatively to 42 consecutive patients who were treated with fractional CO2 laser. All patients adopted the cream twice a day for 2 weeks. Grades were given according to outcome scale. The efficacy of the cream containing DermaACB was evaluated comparing the rate of post-operative signs vanishing with a control group of 20 patients topically treated with an antibiotic cream and a hyaluronic acid based cream. Results registered with the experimental treatment were good in 22 patients, moderate in 17, and poor in 3 cases. Patients using the study cream took an average time of 14.3 days for erythema resolution and 9.3 days for swelling vanishing. The post-operative administration of the cream containing DermaACB induces a quicker reduction of post-operative erythema and swelling when compared to a standard treatment.",
"title": ""
},
{
"docid": "b90b7b44971cf93ba343b5dcdd060875",
"text": "This paper discusses a general approach to qualitative modeling based on fuzzy logic. The method of qualitative modeling is divided into two parts: fuzzy modeling and linguistic approximation. It proposes to use a fuzzy clustering method (fuzzy c-means method) to identify the structure of a fuzzy model. To clarify the advantages of the proposed method, it also shows some examples of modeling, among them a model of a dynamical process and a model of a human operator’s control action.",
"title": ""
},
{
"docid": "a8a802b8130d2b6a1b2dae84d53fb7c9",
"text": "This paper addresses an open challenge in educational data mining, i.e., the problem of using observed prerequisite relations among courses to learn a directed universal concept graph, and using the induced graph to predict unobserved prerequisite relations among a broader range of courses. This is particularly useful to induce prerequisite relations among courses from different providers (universities, MOOCs, etc.). We propose a new framework for inference within and across two graphs---at the course level and at the induced concept level---which we call Concept Graph Learning (CGL). In the training phase, our system projects the course-level links onto the concept space to induce directed concept links; in the testing phase, the concept links are used to predict (unobserved) prerequisite links for test-set courses within the same institution or across institutions. The dual mappings enable our system to perform an interlingua-style transfer learning, e.g. treating the concept graph as the interlingua, and inducing prerequisite links in a transferable manner across different universities. Experiments on our newly collected data sets of courses from MIT, Caltech, Princeton and CMU show promising results, including the viability of CGL for transfer learning.",
"title": ""
},
{
"docid": "6fb1f05713db4e771d9c610fa9c9925d",
"text": "Objectives: Straddle injury represents a rare and complex injury to the female genito urinary tract (GUT). Overall prevention would be the ultimate goal, but due to persistent inhomogenity and inconsistency in definitions and guidelines, or suboptimal coding, the optimal study design for a prevention programme is still missing. Thus, medical records data were tested for their potential use for an injury surveillance registry and their impact on future prevention programmes. Design: Retrospective record analysis out of a 3 year period. Setting: All patients were treated exclusively by the first author. Patients: Six girls, median age 7 years, range 3.5 to 12 years with classical straddle injury. Interventions: Medical treatment and recording according to National and International Standards. Main Outcome Measures: All records were analyzed for accuracy in diagnosis and coding, surgical procedure, time and location of incident and examination findings. Results: All registration data sets were complete. A specific code for “straddle injury” in International Classification of Diseases (ICD) did not exist. Coding followed mainly reimbursement issues and specific information about the injury was usually expressed in an individual style. Conclusions: As demonstrated in this pilot, population based medical record data collection can play a substantial part in local injury surveillance registry and prevention initiatives planning.",
"title": ""
},
{
"docid": "7968e0f2960a7dce6017699fd1222e36",
"text": "This work investigates the role of contrasting discourse relations signaled by cue phrases, together with phrase positional information, in predicting sentiment at the phrase level. Two domains of online reviews were chosen. The first domain is of nutritional supplement reviews, which are often poorly structured yet also allow certain simplifying assumptions to be made. The second domain is of hotel reviews, which have somewhat different characteristics. A corpus is built from these reviews, and manually tagged for polarity. We propose and evaluate a few new features that are realized through a lightweight method of discourse analysis, and use these features in a hybrid lexicon and machine learning based classifier. Our results show that these features may be used to obtain an improvement in classification accuracy compared to other traditional machine learning approaches.",
"title": ""
},
{
"docid": "bf164afc6315bf29a07e6026a3db4a26",
"text": "iBeacons are a new way to interact with hardware. An iBeacon is a Bluetooth Low Energy device that only sends a signal in a specific format. They are like a lighthouse that sends light signals to boats. This paper explains what an iBeacon is, how it works and how it can simplify your daily life, what restriction comes with iBeacon and how to improve this restriction., as well as, how to use Location-based Services to track items. E.g., every time you touchdown at an airport and wait for your suitcase at the luggage reclaim, you have no information when your luggage will arrive at the conveyor belt. With an iBeacon inside your suitcase, it is possible to track the luggage and to receive a push notification about it even before you can see it. This is just one possible solution to use them. iBeacon can create a completely new shopping experience or make your home smarter. This paper demonstrates the luggage tracking use case and evaluates its possibilities and restrictions.",
"title": ""
},
{
"docid": "3bff3136e5e2823d0cca2f864fe9e512",
"text": "Cloud computing provides variety of services with the growth of their offerings. Due to efficient services, it faces numerous challenges. It is based on virtualization, which provides users a plethora computing resources by internet without managing any infrastructure of Virtual Machine (VM). With network virtualization, Virtual Machine Manager (VMM) gives isolation among different VMs. But, sometimes the levels of abstraction involved in virtualization have been reducing the workload performance which is also a concern when implementing virtualization to the Cloud computing domain. In this paper, it has been explored how the vendors in cloud environment are using Containers for hosting their applications and also the performance of VM deployments. It also compares VM and Linux Containers with respect to the quality of service, network performance and security evaluation.",
"title": ""
},
{
"docid": "8a4b1c87b85418ce934f16003a481f27",
"text": "Current parking space vacancy detection systems use simple trip sensors at the entry and exit points of parking lots. Unfortunately, this type of system fails when a vehicle takes up more than one spot or when a parking lot has different types of parking spaces. Therefore, I propose a camera-based system that would use computer vision algorithms for detecting vacant parking spaces. My algorithm uses a combination of car feature point detection and color histogram classification to detect vacant parking spaces in static overhead images.",
"title": ""
},
{
"docid": "d2f4159b73f6baf188d49c43e6215262",
"text": "In this paper, we compare the performance of descriptors computed for local interest regions, as, for example, extracted by the Harris-Affine detector [Mikolajczyk, K and Schmid, C, 2004]. Many different descriptors have been proposed in the literature. It is unclear which descriptors are more appropriate and how their performance depends on the interest region detector. The descriptors should be distinctive and at the same time robust to changes in viewing conditions as well as to errors of the detector. Our evaluation uses as criterion recall with respect to precision and is carried out for different image transformations. We compare shape context [Belongie, S, et al., April 2002], steerable filters [Freeman, W and Adelson, E, Setp. 1991], PCA-SIFT [Ke, Y and Sukthankar, R, 2004], differential invariants [Koenderink, J and van Doorn, A, 1987], spin images [Lazebnik, S, et al., 2003], SIFT [Lowe, D. G., 1999], complex filters [Schaffalitzky, F and Zisserman, A, 2002], moment invariants [Van Gool, L, et al., 1996], and cross-correlation for different types of interest regions. We also propose an extension of the SIFT descriptor and show that it outperforms the original method. Furthermore, we observe that the ranking of the descriptors is mostly independent of the interest region detector and that the SIFT-based descriptors perform best. Moments and steerable filters show the best performance among the low dimensional descriptors.",
"title": ""
},
{
"docid": "4033a48235fc21987549bdc0ca1a893c",
"text": "A novel algorithm for vehicle safety distance between driving cars for vehicle safety warning system is presented in this paper. The presented system concept includes a distance obstacle detection and safety distance calculation. The system detects the distance between the car and the in front of vehicles (obstacles) and uses the vehicle speed and other parameters to calculate the braking safety distance of the moving car. The system compares the obstacle distance and braking safety distance which are used to determine the moving vehicle's safety distance is enough or not. This paper focuses on the solution algorithm presentation.",
"title": ""
},
{
"docid": "44dbbc80c05cbbd95bacdf2f0a724db2",
"text": "Most of the existing methods for the recognition of faces and expressions consider either the expression-invariant face recognition problem or the identity-independent facial expression recognition problem. In this paper, we propose joint face and facial expression recognition using a dictionary-based component separation algorithm (DCS). In this approach, the given expressive face is viewed as a superposition of a neutral face component with a facial expression component which is sparse with respect to the whole image. This assumption leads to a dictionary-based component separation algorithm which benefits from the idea of sparsity and morphological diversity. This entails building data-driven dictionaries for neutral and expressive components. The DCS algorithm then uses these dictionaries to decompose an expressive test face into its constituent components. The sparse codes we obtain as a result of this decomposition are then used for joint face and expression recognition. Experiments on publicly available expression and face data sets show the effectiveness of our method.",
"title": ""
},
{
"docid": "4d403184b8f482449130bbb0ee1fb2cf",
"text": "Finite element analysis A 2D finite element analysis for the numerical prediction of capacity curve of unreinforced masonry (URM) walls is conducted. The studied model is based on the fiber finite element approach. The emphasis of this paper will be on the errors obtained from fiber finite element analysis of URM structures under pushover analysis. The masonry material is modeled by different constitutive stress-strain model in compression and tension. OpenSees software is employed to analysis the URM walls. Comparison of numerical predictions with experimental data, it is shown that the fiber model employed in OpenSees cannot properly predict the behavior of URM walls with balance between accuracy and low computational efforts. Additionally, the finite element analyses results show appropriate predictions of some experimental data when the real tensile strength of masonry material is changed. Hence, from the viewpoint of this result, it is concluded that obtained results from fiber finite element analyses employed in OpenSees are unreliable because the exact behavior of masonry material is different from the adopted masonry material models used in modeling process.",
"title": ""
},
{
"docid": "3f2081f9c1cf10e9ec27b2541f828320",
"text": "As the heart of an aircraft, the aircraft engine's condition directly affects the safety, reliability, and operation of the aircraft. Prognostics and health management for aircraft engines can provide advance warning of failure and estimate the remaining useful life. However, aircraft engine systems are complex with both intangible and uncertain factors, it is difficult to model the complex degradation process, and no single prognostic approach can effectively solve this critical and complicated problem. Thus, fusion prognostics is conducted to obtain more accurate prognostics results. In this paper, a prognostics and health management-oriented integrated fusion prognostic framework is developed to improve the system state forecasting accuracy. This framework strategically fuses the monitoring sensor data and integrates the strengths of the data-driven prognostics approach and the experience-based approach while reducing their respective limitations. As an application example, this developed fusion prognostics framework is employed to predict the remaining useful life of an aircraft gas turbine engine based on sensor data. The results demonstrate that the proposed fusion prognostics framework is an effective prognostics tool, which can provide a more accurate and robust remaining useful life estimation than any single prognostics method.",
"title": ""
},
{
"docid": "572ae23dd73dfb0a7cbc04d05772528f",
"text": "Machine learning models with very low test error have been shown to be consistently vulnerable to small, adversarially chosen perturbations of the input. We hypothesize that this counterintuitive behavior is a result of the high-dimensional geometry of the data manifold, and explore this hypothesis on a simple highdimensional dataset. For this dataset we show a fundamental bound relating the classification error rate to the average distance to the nearest misclassification, which is independent of the model. We train different neural network architectures on this dataset and show their error sets approach this theoretical bound. As a result of the theory, the vulnerability of machine learning models to small adversarial perturbations is a logical consequence of the amount of test error observed. We hope that our theoretical analysis of this foundational synthetic case will point a way forward to explore how the geometry of complex real-world data sets leads to adversarial examples.",
"title": ""
},
{
"docid": "f9d4b66f395ec6660da8cb22b96c436c",
"text": "The purpose of the study was to measure objectively the home use of the reciprocating gait orthosis (RGO) and the electrically augmented (hybrid) RGO. It was hypothesised that RGO use would increase following provision of functional electrical stimulation (FES). Five adult subjects participated in the study with spinal cord lesions ranging from C2 (incomplete) to T6. Selection criteria included active RGO use and suitability for electrical stimulation. Home RGO use was measured for up to 18 months by determining the mean number of steps taken per week. During this time patients were supplied with the hybrid system. Three alternatives for the measurement of steps taken were investigated: a commercial digital pedometer, a magnetically actuated counter and a heel contact switch linked to an electronic counter. The latter was found to be the most reliable system and was used for all measurements. Additional information on RGO use was acquired using three patient diaries administered throughout the study and before and after the provision of the hybrid system. Testing of the original hypothesis was complicated by problems in finding a reliable measurement tool and difficulties with data collection. However, the results showed that overall use of the RGO, whether with or without stimulation, is low. Statistical analysis of the step counter results was not realistic. No statistically significant change in RGO use was found between the patient diaries. The study suggests that the addition of electrical stimulation does not increase RGO use. The study highlights the problem of objectively measuring orthotic use in the home.",
"title": ""
},
{
"docid": "ec5ebfbe28daebaaac23fbf031b75ab3",
"text": "Theoretical models predict that overcondent investors trade excessively. We test this prediction by partitioning investors on gender. Psychological research demonstrates that, in areas such as nance, men are more overcondent than women. Thus, theory predicts that men will trade more excessively than women. Using account data for over 35,000 households from a large discount brokerage, we analyze the common stock investments of men and women from February 1991 through January 1997. We document that men trade 45 percent more than women. Trading reduces men’s net returns by 2.65 percentage points a year as opposed to 1.72 percentage points for women.",
"title": ""
},
{
"docid": "699c6a7b4f938d6a45d65878f08335e4",
"text": "Fuzzing is a popular dynamic program analysis technique used to find vulnerabilities in complex software. Fuzzing involves presenting a target program with crafted malicious input designed to cause crashes, buffer overflows, memory errors, and exceptions. Crafting malicious inputs in an efficient manner is a difficult open problem and often the best approach to generating such inputs is through applying uniform random mutations to pre-existing valid inputs (seed files). We present a learning technique that uses neural networks to learn patterns in the input files from past fuzzing explorations to guide future fuzzing explorations. In particular, the neural models learn a function to predict good (and bad) locations in input files to perform fuzzing mutations based on the past mutations and corresponding code coverage information. We implement several neural models including LSTMs and sequence-to-sequence models that can encode variable length input files. We incorporate our models in the state-of-the-art AFL (American Fuzzy Lop) fuzzer and show significant improvements in terms of code coverage, unique code paths, and crashes for various input formats including ELF, PNG, PDF, and XML.",
"title": ""
},
{
"docid": "7c2cb105e5fad90c90aea0e59aae5082",
"text": "Life often presents us with situations in which it is important to assess the “true” qualities of a person or object, but in which some factor(s) might have affected (or might yet affect) our initial perceptions in an undesired way. For example, in the Reginald Denny case following the 1993 Los Angeles riots, jurors were asked to determine the guilt or innocence of two African-American defendants who were charged with violently assaulting a Caucasion truck driver. Some of the jurors in this case might have been likely to realize that in their culture many of the popular media portrayals of African-Americans are violent in nature. Yet, these jurors ideally would not want those portrayals to influence their perceptions of the particular defendants in the case. In fact, the justice system is based on the assumption that such portrayals will not influence jury verdicts. In our work on bias correction, we have been struck by the variety of potentially biasing factors that can be identified-including situational influences such as media, social norms, and general culture, and personal influences such as transient mood states, motives (e.g., to manage impressions or agree with liked others), and salient beliefs-and we have been impressed by the apparent ubiquity of correction phenomena (which appear to span many areas of psychological inquiry). Yet, systematic investigations of bias correction are in their early stages. Although various researchers have discussed the notion of effortful cognitive processes overcoming initial (sometimes “automatic”) biases in a variety of settings (e.g., Brewer, 1988; Chaiken, Liberman, & Eagly, 1989; Devine, 1989; Kruglanski & Freund, 1983; Neuberg & Fiske, 1987; Petty & Cacioppo, 1986), little attention has been given, until recently, to the specific processes by which biases are overcome when effort is targeted toward “correction of bias.” That is, when",
"title": ""
}
] | scidocsrr |
8b0fb060f28dee6142e3ee5ff28c5578 | Community Detection in Multi-Dimensional Networks | [
{
"docid": "bb2504b2275a20010c0d5f9050173d40",
"text": "Clustering nodes in a graph is a useful general technique in data mining of large network data sets. In this context, Newman and Girvan [9] recently proposed an objective function for graph clustering called the Q function which allows automatic selection of the number of clusters. Empirically, higher values of the Q function have been shown to correlate well with good graph clusterings. In this paper we show how optimizing the Q function can be reformulated as a spectral relaxation problem and propose two new spectral clustering algorithms that seek to maximize Q. Experimental results indicate that the new algorithms are efficient and effective at finding both good clusterings and the appropriate number of clusters across a variety of real-world graph data sets. In addition, the spectral algorithms are much faster for large sparse graphs, scaling roughly linearly with the number of nodes n in the graph, compared to O(n) for previous clustering algorithms using the Q function.",
"title": ""
},
{
"docid": "31873424960073962d3d8eba151f6a4b",
"text": "Multiple view data, which have multiple representations from different feature spaces or graph spaces, arise in various data mining applications such as information retrieval, bioinformatics and social network analysis. Since different representations could have very different statistical properties, how to learn a consensus pattern from multiple representations is a challenging problem. In this paper, we propose a general model for multiple view unsupervised learning. The proposed model introduces the concept of mapping function to make the different patterns from different pattern spaces comparable and hence an optimal pattern can be learned from the multiple patterns of multiple representations. Under this model, we formulate two specific models for two important cases of unsupervised learning, clustering and spectral dimensionality reduction; we derive an iterating algorithm for multiple view clustering, and a simple algorithm providing a global optimum to multiple spectral dimensionality reduction. We also extend the proposed model and algorithms to evolutionary clustering and unsupervised learning with side information. Empirical evaluations on both synthetic and real data sets demonstrate the effectiveness of the proposed model and algorithms.",
"title": ""
}
] | [
{
"docid": "5441d081eabb4ad3d96775183e603b65",
"text": "We give an introduction to computation and logic tailored for algebraists, and use this as a springboard to discuss geometric models of computation and the role of cut-elimination in these models, following Girard's geometry of interaction program. We discuss how to represent programs in the λ-calculus and proofs in linear logic as linear maps between infinite-dimensional vector spaces. The interesting part of this vector space semantics is based on the cofree cocommutative coalgebra of Sweedler [71] and the recent explicit computations of liftings in [62].",
"title": ""
},
{
"docid": "2c28d01814e0732e59d493f0ea2eafcb",
"text": "Victor Frankenstein sought to create an intelligent being imbued with the r ules of civilized human conduct, who could further learn how to behave and possibly even evolve through successive g nerations into a more perfect form. Modern human composers similarly strive to create intell igent algorithmic music composition systems that can follow prespecified rules, learn appropriate patte rns from a collection of melodies, or evolve to produce output more perfectly matched to some aesthetic criteria . H re we review recent efforts aimed at each of these three types of algorithmic composition. We focus pa rticularly on evolutionary methods, and indicate how monstrous many of the results have been. We present a ne w method that uses coevolution to create linked artificial music critics and music composers , and describe how this method can attach the separate parts of rules, learning, and evolution together in to one coherent body. “Invention, it must be humbly admitted, does not consist in creating out of void, but ou t of chaos; the materials must, in the first place, be afforded...” --Mary Shelley, Frankenstein (1831/1993, p. 299)",
"title": ""
},
{
"docid": "b21ae248eea30b91e41012ab70cb6d81",
"text": "Communication technology plays an increasingly important role in the growing automated metering infrastructure (AMI) market. This paper presents a thorough analysis and comparison of four application layer protocols in the smart metering context. The inspected protocols are DLMS/COSEM, the Smart Message Language (SML), and the MMS and SOAP mappings of IEC 61850. The focus of this paper is on their use over TCP/IP. The protocols are first compared with respect to qualitative criteria such as the ability to transmit clock synchronization information. Afterwards the message size of meter reading requests and responses and the different binary encodings of the protocols are compared.",
"title": ""
},
{
"docid": "ce5c5d0d0cb988c96f0363cfeb9610d4",
"text": "Due to deep automation, the configuration of many Cloud infrastructures is static and homogeneous, which, while easing administration, significantly decreases a potential attacker's uncertainty on a deployed Cloud-based service and hence increases the chance of the service being compromised. Moving-target defense (MTD) is a promising solution to the configuration staticity and homogeneity problem. This paper presents our findings on whether and to what extent MTD is effective in protecting a Cloud-based service with heterogeneous and dynamic attack surfaces - these attributes, which match the reality of current Cloud infrastructures, have not been investigated together in previous works on MTD in general network settings. We 1) formulate a Cloud-based service security model that incorporates Cloud-specific features such as VM migration/snapshotting and the diversity/compatibility of migration, 2) consider the accumulative effect of the attacker's intelligence on the target service's attack surface, 3) model the heterogeneity and dynamics of the service's attack surfaces, as defined by the (dynamic) probability of the service being compromised, as an S-shaped generalized logistic function, and 4) propose a probabilistic MTD service deployment strategy that exploits the dynamics and heterogeneity of attack surfaces for protecting the service against attackers. Through simulation, we identify the conditions and extent of the proposed MTD strategy's effectiveness in protecting Cloud-based services. Namely, 1) MTD is more effective when the service deployment is dense in the replacement pool and/or when the attack is strong, and 2) attack-surface heterogeneity-and-dynamics awareness helps in improving MTD's effectiveness.",
"title": ""
},
{
"docid": "348702d85126ed64ca24bdc62c1146d9",
"text": "Autonomous Vehicles are currently being tested in a variety of scenarios. As we move towards Autonomous Vehicles, how should intersections look? To answer that question, we break down an intersection management into the different conundrums and scenarios involved in the trajectory planning and current approaches to solve them. Then, a brief analysis of current works in autonomous intersection is conducted. With a critical eye, we try to delve into the discrepancies of existing solutions while presenting some critical and important factors that have been addressed. Furthermore, open issues that have to be addressed are also emphasized. We also try to answer the question of how to benchmark intersection management algorithms by providing some factors that impact autonomous navigation at intersection.",
"title": ""
},
{
"docid": "4bddc7bb7088c01dbc48504656b0f8d4",
"text": "The basic knowledge required to do sentiment analysis of Twitter is discussed in this review paper. Sentiment Analysis can be viewed as field of text mining, natural language processing. Thus we can study sentiment analysis in various aspects. This paper presents levels of sentiment analysis, approaches to do sentiment analysis, methodologies for doing it, and features to be extracted from text and the applications. Twitter is a microblogging service to which if sentiment analysis done one has to follow explicit path. Thus this paper puts overview about tweets extraction, their preprocessing and their sentiment analysis.",
"title": ""
},
{
"docid": "d848a684aeddd5447f17282fdd2efaf0",
"text": "..........................................................................................................iii ACKNOWLEDGMENTS.........................................................................................iv TABLE OF CONTENTS .........................................................................................vi LIST OF TABLES................................................................................................viii LIST OF FIGURES ................................................................................................ix",
"title": ""
},
{
"docid": "b4d7a8b6b24c85af9f62105194087535",
"text": "New technologies provide expanded opportunities for interaction design. The growing number of possible ways to interact, in turn, creates a new responsibility for designers: Besides the product's visual aesthetics, one has to make choices about the aesthetics of interaction. This issue recently gained interest in Human-Computer Interaction (HCI) research. Based on a review of 19 approaches, we provide an overview of today's state of the art. We focused on approaches that feature \"qualities\", \"dimensions\" or \"parameters\" to describe interaction. Those fell into two broad categories. One group of approaches dealt with detailed spatio-temporal attributes of interaction sequences (i.e., action-reaction) on a sensomotoric level (i.e., form). The other group addressed the feelings and meanings an interaction is enveloped in rather than the interaction itself (i.e., experience). Surprisingly, only two approaches addressed both levels simultaneously, making the explicit link between form and experience. We discuss these findings and its implications for future theory building.",
"title": ""
},
{
"docid": "33ad325fc91be339c580581107314146",
"text": "Designing technological systems for personalized education is an iterative and interdisciplinary process that demands a deep understanding of the application domain, the limitations of current methods and technologies, and the computational methods and complexities behind user modeling and adaptation. We present our design process and the Socially Assistive Robot (SAR) tutoring system to support the efforts of educators in teaching number concepts to preschool children. We focus on the computational considerations of designing a SAR system for young children that may later be personalized along multiple dimensions. We conducted an initial data collection to validate that the system is at the proper challenge level for our target population, and discovered promising patterns in participants' learning styles, nonverbal behavior, and performance. We discuss our plans to leverage the data collected to learn and validate a computational, multidimensional model of number concepts learning.",
"title": ""
},
{
"docid": "f25b9147e67bd8051852142ebd82cf20",
"text": "Fossil fuels currently supply most of the world's energy needs, and however unacceptable their long-term consequences, the supplies are likely to remain adequate for the next few generations. Scientists and policy makers must make use of this period of grace to assess alternative sources of energy and determine what is scientifically possible, environmentally acceptable and technologically promising.",
"title": ""
},
{
"docid": "a08697b03ca0b8b8ea6e037fdccb8645",
"text": "Most P2P systems that provide a DHT abstraction distribute objects among “peer nodes” by choosing random identifiers for the objects. This could result in an O(log N) imbalance. Besides, P2P systems can be highly heterogeneous, i.e. they may consist of peers that range from old desktops behind modem lines to powerful servers connected to the Internet through high-bandwidth lines. In this paper, we address the problem of load balancing in such P2P systems. We explore the space of designing load-balancing algorithms that uses the notion of “virtual servers”. We present three schemes that differ primarily in the amount of information used to decide how to re-arrange load. Our simulation results show that even the simplest scheme is able to balance the load within 80% of the optimal value, while the most complex scheme is able to balance the load within 95% of the optimal value.",
"title": ""
},
{
"docid": "db83931d7fef8174acdb3a1f4ef0d043",
"text": "Physical fatigue has been identified as a risk factor associated with the onset of occupational injury. Muscular fatigue developed from repetitive hand-gripping tasks is of particular concern. This study examined the use of a maximal, repetitive, static power grip test of strength-endurance in detecting differences in exertions between workers with uninjured and injured hands, and workers who were asked to provide insincere exertions. The main dependent variable of interest was power grip muscular force measured with a force strain gauge. Group data showed that the power grip protocol, used in this study, provided a valid and reliable estimate of wrist-hand strength-endurance. Force fatigue curves showed both linear and curvilinear effects among the study groups. An endurance index based on force decrement during repetitive power grip was shown to differentiate between uninjured, injured, and insincere groups.",
"title": ""
},
{
"docid": "0f969ca56c984eb573a541318884fdaa",
"text": "One of the mechanisms by which the innate immune system senses the invasion of pathogenic microorganisms is through the Toll-like receptors (TLRs), which recognize specific molecular patterns that are present in microbial components. Stimulation of different TLRs induces distinct patterns of gene expression, which not only leads to the activation of innate immunity but also instructs the development of antigen-specific acquired immunity. Here, we review the rapid progress that has recently improved our understanding of the molecular mechanisms that mediate TLR signalling.",
"title": ""
},
{
"docid": "b9261a0d56a6305602ff27da5ec160e8",
"text": "In psychology the Rubber Hand Illusion (RHI) is an experiment where participants get the feeling that a fake hand is becoming their own. Recently, new testing methods using an action based paradigm have induced stronger RHI. However, these experiments are facing limitations because they are difficult to implement and lack of rigorous experimental conditions. This paper proposes a low-cost open source robotic hand which is easy to manufacture and removes these limitations. This device reproduces fingers movement of the participants in real time. A glove containing sensors is worn by the participant and records fingers flexion. Then a microcontroller drives hobby servo-motors on the robotic hand to reproduce the corresponding fingers position. A connection between the robotic device and a computer can be established, enabling the experimenters to tune precisely the desired parameters using Matlab. Since this is the first time a robotic hand is developed for the RHI, a validation study has been conducted. This study confirms previous results found in the literature. This study also illustrates the fact that the robotic hand can be used to conduct innovative experiments in the RHI field. Understanding such RHI is important because it can provide guidelines for prosthetic design.",
"title": ""
},
{
"docid": "60a6c8588c46fa2aa63a3348723f2bb1",
"text": "An early warning system can help to identify at-risk students, or predict student learning performance by analyzing learning portfolios recorded in a learning management system (LMS). Although previous studies have shown the applicability of determining learner behaviors from an LMS, most investigated datasets are not assembled from online learning courses or from whole learning activities undertaken on courses that can be analyzed to evaluate students’ academic achievement. Previous studies generally focus on the construction of predictors for learner performance evaluation after a course has ended, and neglect the practical value of an ‘‘early warning’’ system to predict at-risk students while a course is in progress. We collected the complete learning activities of an online undergraduate course and applied data-mining techniques to develop an early warning system. Our results showed that, timedependent variables extracted from LMS are critical factors for online learning. After students have used an LMS for a period of time, our early warning system effectively characterizes their current learning performance. Data-mining techniques are useful in the construction of early warning systems; based on our experimental results, classification and regression tree (CART), supplemented by AdaBoost is the best classifier for the evaluation of learning performance investigated by this study. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "394c8f7a708d69ca26ab0617ab1530ab",
"text": "Developing wireless sensor networks can enable information gathering, information processing and reliable monitoring of a variety of environments for both civil and military applications. It is however necessary to agree upon a basic architecture for building sensor network applications. This paper presents a general classification of sensor network applications based on their network configurations and discusses some of their architectural requirements. We propose a generic architecture for a specific subclass of sensor applications which we define as self-configurable systems where a large number of sensors coordinate amongst themselves to achieve a large sensing task. Throughout this paper we assume a certain subset of the sensors to be immobile. This paper lists the general architectural and infra-structural components necessary for building this class of sensor applications. Given the various architectural components, we present an algorithm that self-organizes the sensors into a network in a transparent manner. Some of the basic goals of our algorithm include minimizing power utilization, localizing operations and tolerating node and link failures.",
"title": ""
},
{
"docid": "38e95632ff481471ddf38c12044257df",
"text": "Retrieving object instances among cluttered scenes efficiently requires compact yet comprehensive regional image representations. Intuitively, object semantics can help build the index that focuses on the most relevant regions. However, due to the lack of bounding-box datasets for objects of interest among retrieval benchmarks, most recent work on regional representations has focused on either uniform or class-agnostic region selection. In this paper, we first fill the void by providing a new dataset of landmark bounding boxes, based on the Google Landmarks dataset, that includes 94k images with manually curated boxes from 15k unique landmarks. Then, we demonstrate how a trained landmark detector, using our new dataset, can be leveraged to index image regions and improve retrieval accuracy while being much more efficient than existing regional methods. In addition, we further introduce a novel regional aggregated selective match kernel (R-ASMK) to effectively combine information from detected regions into an improved holistic image representation. R-ASMK boosts image retrieval accuracy substantially at no additional memory cost, while even outperforming systems that index image regions independently. Our complete image retrieval system improves upon the previous state-of-the-art by significant margins on the Revisited Oxford and Paris datasets. Code and data will be released.",
"title": ""
},
{
"docid": "eb0e38817ff491fbe274caf5e7126d2d",
"text": "At the forefront of debates on language are new data demonstrating infants' early acquisition of information about their native language. The data show that infants perceptually \"map\" critical aspects of ambient language in the first year of life before they can speak. Statistical properties of speech are picked up through exposure to ambient language. Moreover, linguistic experience alters infants' perception of speech, warping perception in the service of language. Infants' strategies are unexpected and unpredicted by historical views. A new theoretical position has emerged, and six postulates of this position are described.",
"title": ""
},
{
"docid": "1938d1b72bbeec9cb9c2eed3f2c0a19a",
"text": "Domain Name System (DNS) traffic has become a rich source of information from a security perspective. However, the volume of DNS traffic has been skyrocketing, such that security analyzers experience difficulties in collecting, retrieving, and analyzing the DNS traffic in response to modern Internet threats. More precisely, much of the research relating to DNS has been negatively affected by the dramatic increase in the number of queries and domains. This phenomenon has necessitated a scalable approach, which is not dependent on the volume of DNS traffic. In this paper, we introduce a fast and scalable approach, called PsyBoG, for detecting malicious behavior within large volumes of DNS traffic. PsyBoG leverages a signal processing technique, power spectral density (PSD) analysis, to discover the major frequencies resulting from the periodic DNS queries of botnets. The PSD analysis allows us to detect sophisticated botnets regardless of their evasive techniques, sporadic behavior, and even normal users’ traffic. Furthermore, our method allows us to deal with large-scale DNS data by only utilizing the timing information of query generation regardless of the number of queries and domains. Finally, PsyBoG discovers groups of hosts which show similar patterns of malicious behavior. PsyBoG was evaluated by conducting experiments with two different data sets, namely DNS traces generated by real malware in controlled environments and a large number of real-world DNS traces collected from a recursive DNS server, an authoritative DNS server, and Top-Level Domain (TLD) servers. We utilized the malware traces as the ground truth, and, as a result, PsyBoG performed with a detection accuracy of 95%. By using a large number of DNS traces, we were able to demonstrate the scalability and effectiveness of PsyBoG in terms of practical usage. Finally, PsyBoG detected 23 unknown and 26 known botnet groups with 0.1% false positives. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "2a422c6047bca5a997d5c3d0ee080437",
"text": "Connecting mathematical logic and computation, it ensures that some aspects of programming are absolute.",
"title": ""
}
] | scidocsrr |
a2fb0018d07bcf972886b10cc66ce964 | Recurrent Neural Networks for Customer Purchase Prediction on Twitter | [
{
"docid": "e2c6437d257559211d182b5707aca1a4",
"text": "In present times, social forums such as Quora and Yahoo! Answers constitute powerful media through which people discuss on a variety of topics and express their intentions and thoughts. Here they often reveal their potential intent to purchase ‘Purchase Intent’ (PI). A purchase intent is defined as a text expression showing a desire to purchase a product or a service in future. Extracting posts having PI from a user’s social posts gives huge opportunities towards web personalization, targeted marketing and improving community observing systems. In this paper, we explore the novel problem of detecting PIs from social posts and classifying them. We find that using linguistic features along with statistical features of PI expressions achieves a significant improvement in PI classification over ‘bag-ofwords’ based features used in many present day socialmedia classification tasks. Our approach takes into consideration the specifics of social posts like limited contextual information, incorrect grammar, language ambiguities, etc. by extracting features at two different levels of text granularity word and phrase based features and grammatical dependency based features. Apart from these, the patterns observed in PI posts help us to identify some specific features.",
"title": ""
},
{
"docid": "cf2fc7338a0a81e4c56440ec7c3c868e",
"text": "We describe a new dependency parser for English tweets, TWEEBOPARSER. The parser builds on several contributions: new syntactic annotations for a corpus of tweets (TWEEBANK), with conventions informed by the domain; adaptations to a statistical parsing algorithm; and a new approach to exploiting out-of-domain Penn Treebank data. Our experiments show that the parser achieves over 80% unlabeled attachment accuracy on our new, high-quality test set and measure the benefit of our contributions. Our dataset and parser can be found at http://www.ark.cs.cmu.edu/TweetNLP.",
"title": ""
},
{
"docid": "64330f538b3d8914cbfe37565ab0d648",
"text": "The compositionality of meaning extends beyond the single sentence. Just as words combine to form the meaning of sentences, so do sentences combine to form the meaning of paragraphs, dialogues and general discourse. We introduce both a sentence model and a discourse model corresponding to the two levels of compositionality. The sentence model adopts convolution as the central operation for composing semantic vectors and is based on a novel hierarchical convolutional neural network. The discourse model extends the sentence model and is based on a recurrent neural network that is conditioned in a novel way both on the current sentence and on the current speaker. The discourse model is able to capture both the sequentiality of sentences and the interaction between different speakers. Without feature engineering or pretraining and with simple greedy decoding, the discourse model coupled to the sentence model obtains state of the art performance on a dialogue act classification experiment.",
"title": ""
}
] | [
{
"docid": "6d9735b19ab2cb1251bd294045145367",
"text": "Waveguide twists are often necessary to provide polarization rotation between waveguide-based components. At terahertz frequencies, it is desirable to use a twist design that is compact in order to reduce loss; however, these designs are difficult if not impossible to realize using standard machining. This paper presents a micromachined compact waveguide twist for terahertz frequencies. The Rud-Kirilenko twist geometry is ideally suited to the micromachining processes developed at the University of Virginia. Measurements of a WR-1.5 micromachined twist exhibit a return loss near 20 dB and a median insertion loss of 0.5 dB from 600 to 750 GHz.",
"title": ""
},
{
"docid": "b3801b9d9548c49c79eacef4c71e84ad",
"text": "Identifying that a given binary program implements a specific cryptographic algorithm and finding out more information about the cryptographic code is an important problem. Proprietary programs and especially malicious software (so called malware) often use cryptography and we want to learn more about the context, e.g., which algorithms and keys are used by the program. This helps an analyst to quickly understand what a given binary program does and eases analysis. In this paper, we present several methods to identify cryptographic primitives (e.g., entire algorithms or only keys) within a given binary program in an automated way. We perform fine-grained dynamic binary analysis and use the collected information as input for several heuristics that characterize specific, unique aspects of cryptographic code. Our evaluation shows that these methods improve the state-of-the-art approaches in this area and that we can successfully extract cryptographic keys from a given malware binary.",
"title": ""
},
{
"docid": "a7d3d2f52a45cdb378863d4e8d96bc27",
"text": "This paper presents a three-phase single-stage bidirectional isolated matrix based AC-DC converter for energy storage. The matrix (3 × 1) topology directly converts the three-phase line voltages into high-frequency AC voltage which is subsequently, processed using a high-frequency transformer followed by a controlled rectifier. A modified Space Vector Modulation (SVM) based switching scheme is proposed to achieve high input power quality with high power conversion efficiency. Compared to the conventional two stage converter, the proposed converter provides single-stage conversion resulting in higher power conversion efficiency and higher power density. The operating principles of the proposed converter in both AC-DC and DC-AC mode are explained followed by steady state analysis. Simulation results are presented for 230 V, 50 Hz to 48 V isolated bidirectional converter at 2 kW output power to validate the theoretical claims.",
"title": ""
},
{
"docid": "9847936462257d8f0d03473c9a78f27d",
"text": "In this paper, a vision-guided autonomous quadrotor in an air-ground multi-robot system has been proposed. This quadrotor is equipped with a monocular camera, IMUs and a flight computer, which enables autonomous flights. Two complementary pose/motion estimation methods, respectively marker-based and optical-flow-based, are developed by considering different altitudes in a flight. To achieve smooth take-off, stable tracking and safe landing with respect to a moving ground robot and desired trajectories, appropriate controllers are designed. Additionally, data synchronization and time delay compensation are applied to improve the system performance. Real-time experiments are conducted in both indoor and outdoor environments.",
"title": ""
},
{
"docid": "eb3fad94acaf1f36783fdb22f3932ec7",
"text": "This paper presents a new approach to translate between Building Information Modeling (BIM) and Building Energy Modeling (BEM) that uses Modelica, an object-oriented declarative, equation-based simulation environment. The approach (BIM2BEM) has been developed using a data modeling method to enable seamless model translations of building geometry, materials, and topology. Using data modeling, we created a Model View Definition (MVD) consisting of a process model and a class diagram. The process model demonstrates object-mapping between BIM and Modelica-based BEM (ModelicaBEM) and facilitates the definition of required information during model translations. The class diagram represents the information and object relationships to produce a class package intermediate between the BIM and BEM. The implementation of the intermediate class package enables system interface (Revit2Modelica) development for automatic BIM data translation into ModelicaBEM. In order to demonstrate and validate our approach, simulation result comparisons have been conducted via three test cases using (1) the BIM-based Modelica models generated from Revit2Modelica and (2) BEM models manually created using LBNL Modelica Buildings library. Our implementation shows that BIM2BEM (1) enables BIM models to be translated into ModelicaBEM models, (2) enables system interface development based on the MVD for thermal simulation, and (3) facilitates the reuse of original BIM data into building energy simulation without an import/export process.",
"title": ""
},
{
"docid": "bb19e122737f08997585999575d2a394",
"text": "In this paper, shadow detection and compensation are treated as image enhancement tasks. The principal components analysis (PCA) and luminance based multi-scale Retinex (LMSR) algorithm are explored to detect and compensate shadow in high resolution satellite image. PCA provides orthogonally channels, thus allow the color to remain stable despite the modification of luminance. Firstly, the PCA transform is used to obtain the luminance channel, which enables us to detect shadow regions using histogram threshold technique. After detection, the LMSR technique is used to enhance the image only in luminance channel to compensate for shadows. Then the enhanced image is obtained by inverse transform of PCA. The final shadow compensation image is obtained by comparison of the original image, the enhanced image and the shadow detection image. Experiment results show the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "46938d041228481cf3363f2c6dfcc524",
"text": "This paper investigates conditions under which modi cations to the reward function of a Markov decision process preserve the op timal policy It is shown that besides the positive linear transformation familiar from utility theory one can add a reward for tran sitions between states that is expressible as the di erence in value of an arbitrary poten tial function applied to those states Further more this is shown to be a necessary con dition for invariance in the sense that any other transformation may yield suboptimal policies unless further assumptions are made about the underlying MDP These results shed light on the practice of reward shap ing a method used in reinforcement learn ing whereby additional training rewards are used to guide the learning agent In par ticular some well known bugs in reward shaping procedures are shown to arise from non potential based rewards and methods are given for constructing shaping potentials corresponding to distance based and subgoal based heuristics We show that such po tentials can lead to substantial reductions in learning time",
"title": ""
},
{
"docid": "12a214f172562d92c89183379a0c06a3",
"text": "Robots that work with people foster social relationships between people and systems. The home is an interesting place to study the adoption and use of these systems. The home provides challenges from both technical and interaction perspectives. In addition, the home is a seat for many specialized human behaviors and needs, and has a long history of what is collected and used to functionally, aesthetically, and symbolically fit the home. To understand the social impact of robotic technologies, this paper presents an ethnographic study of consumer robots in the home. Six families' experience of floor cleaning after receiving a new vacuum (a Roomba robotic vacuum or the Flair, a handheld upright) was studied. While the Flair had little impact, the Roomba changed people, cleaning activities, and other product use. In addition, people described the Roomba in aesthetic and social terms. The results of this study, while initial, generate implications for how robots should be designed for the home.",
"title": ""
},
{
"docid": "f1977e5f8fbc0df4df0ac6bf1715c254",
"text": "Instabilities in MOS-based devices with various substrates ranging from Si, SiGe, IIIV to 2D channel materials, can be explained by defect levels in the dielectrics and non-radiative multi-phonon (NMP) barriers. However, recent results obtained on single defects have demonstrated that they can show a highly complex behaviour since they can transform between various states. As a consequence, detailed physical models are complicated and computationally expensive. As will be shown here, as long as only lifetime predictions for an ensemble of defects is needed, considerable simplifications are possible. We present and validate an oxide defect model that captures the essence of full physical models while reducing the complexity substantially. We apply this model to investigate the improvement in positive bias temperature instabilities due to a reliability anneal. Furthermore, we corroborate the simulated defect bands with prior defect-centric studies and perform lifetime projections.",
"title": ""
},
{
"docid": "a6a364819f397a8e28ac0b19480253cc",
"text": "News agencies and other news providers or consumers are confronted with the task of extracting events from news articles. This is done i) either to monitor and, hence, to be informed about events of specific kinds over time and/or ii) to react to events immediately. In the past, several promising approaches to extracting events from text have been proposed. Besides purely statistically-based approaches there are methods to represent events in a semantically-structured form, such as graphs containing actions (predicates), participants (entities), etc. However, it turns out to be very difficult to automatically determine whether an event is real or not. In this paper, we give an overview of approaches which proposed solutions for this research problem. We show that there is no gold standard dataset where real events are annotated in text documents in a fine-grained, semantically-enriched way. We present a methodology of creating such a dataset with the help of crowdsourcing and present preliminary results.",
"title": ""
},
{
"docid": "ee141b7fd5c372fb65d355fe75ad47af",
"text": "As 100-Gb/s coherent systems based on polarization- division multiplexed quadrature phase shift keying (PDM-QPSK), with aggregate wavelength-division multiplexed (WDM) capacities close to 10 Tb/s, are getting widely deployed, the use of high-spectral-efficiency quadrature amplitude modulation (QAM) to increase both per-channel interface rates and aggregate WDM capacities is the next evolutionary step. In this paper we review high-spectral-efficiency optical modulation formats for use in digital coherent systems. We look at fundamental as well as at technological scaling trends and highlight important trade-offs pertaining to the design and performance of coherent higher-order QAM transponders.",
"title": ""
},
{
"docid": "7cdc858ad5837132c80ac278f3760e24",
"text": "Gallium Nitride (GaN) based power devices have the potential to achieve higher efficiency and higher switching frequency than those possible with Silicon (Si) power devices. In literature, GaN based converters are claimed to offer higher power density. However, a detailed comparative analysis on the power density of GaN and Si based low power dc-dc flyback converter is not reported. In this paper, comparison of a 100 W, dc-dc flyback converter based on GaN and Si is presented. Both the converters are designed to ensure an efficiency of 80%. Based on this, the switching frequency for both the converters are determined. The analysis shows that the GaN based converter can be operated at approximately ten times the switching frequency of Si-based converter. This leads to a reduction in the area product of the flyback transformer required in GaN based converter. It is found that the volume of the flyback transformer can be reduced by a factor of six for a GaN based converter as compared to a Si based converter. Further, it is observed that the value of output capacitance used in the GaN based converter reduces by a factor of ten as compared to the Si based converter, implying a reduction in the size of the output capacitors. Therefore, a significant improvement in the power density of the GaN based converter as compared to the Si based converter is seen.",
"title": ""
},
{
"docid": "145bbea9b4eb7c484c190aed77e2a8b2",
"text": "The Rey–Osterrieth Complex Figure Test (ROCF), which was developed by Rey in 1941 and standardized by Osterrieth in 1944, is a widely used neuropsychological test for the evaluation of visuospatial constructional ability and visual memory. Recently, the ROCF has been a useful tool for measuring executive function that is mediated by the prefrontal lobe. The ROCF consists of three test conditions: Copy, Immediate Recall and Delayed Recall. At the first step, subjects are given the ROCF stimulus card, and then asked to draw the same figure. Subsequently, they are instructed to draw what they remembered. Then, after a delay of 30 min, they are required to draw the same figure once again. The anticipated results vary according to the scoring system used, but commonly include scores related to location, accuracy and organization. Each condition of the ROCF takes 10 min to complete and the overall time of completion is about 30 min.",
"title": ""
},
{
"docid": "a2a7b5c0b4e95e0c7bcb42e29fa8db57",
"text": "0747-5632/$ see front matter 2012 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.chb.2012.11.017 ⇑ Corresponding author. Address: School of Psychology, Australian Catholic University, 1100 Nudgee Rd., Banyo, QLD 4014, Australia. Tel.: +61 7 3623 7346; fax: +61 7 3623 7277. E-mail address: [email protected] (R. Grieve). Rachel Grieve ⇑, Michaelle Indian, Kate Witteveen, G. Anne Tolan, Jessica Marrington",
"title": ""
},
{
"docid": "38aa324964214620c55eb4edfecf1bd2",
"text": "This paper presents ROC curve, lift chart and calibration plot, three well known graphical techniques that are useful for evaluating the quality of classification models used in data mining and machine learning. Each technique, normally used and studied separately, defines its own measure of classification quality and its visualization. Here, we give a brief survey of the methods and establish a common mathematical framework which adds some new aspects, explanations and interrelations between these techniques. We conclude with an empirical evaluation and a few examples on how to use the presented techniques to boost classification accuracy.",
"title": ""
},
{
"docid": "eeb31177629a38882fa3664ad0ddfb48",
"text": "Autonomous cars will likely hit the market soon, but trust into such a technology is one of the big discussion points in the public debate. Drivers who have always been in complete control of their car are expected to willingly hand over control and blindly trust a technology that could kill them. We argue that trust in autonomous driving can be increased by means of a driver interface that visualizes the car’s interpretation of the current situation and its corresponding actions. To verify this, we compared different visualizations in a user study, overlaid to a driving scene: (1) a chauffeur avatar, (2) a world in miniature, and (3) a display of the car’s indicators as the baseline. The world in miniature visualization increased trust the most. The human-like chauffeur avatar can also increase trust, however, we did not find a significant difference between the chauffeur and the baseline. ACM Classification",
"title": ""
},
{
"docid": "14024a813302548d0bd695077185de1c",
"text": "In this paper, we propose an innovative touch-less palm print recognition system. This project is motivated by the public’s demand for non-invasive and hygienic biometric technology. For various reasons, users are concerned about touching the biometric scanners. Therefore, we propose to use a low-resolution web camera to capture the user’s hand at a distance for recognition. The users do not need to touch any device for their palm print to be acquired. A novel hand tracking and palm print region of interest (ROI) extraction technique are used to track and capture the user’s palm in real-time video stream. The discriminative palm print features are extracted based on a new method that applies local binary pattern (LBP) texture descriptor on the palm print directional gradient responses. Experiments show promising result using the proposed method. Performance can be further improved when a modified probabilistic neural network (PNN) is used for feature matching. Verification can be performed in less than one second in the proposed system. 2008 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "1c2285aef1bcd54fb2203ebb7c992647",
"text": "OBJECTIVES\nExtracting data from publication reports is a standard process in systematic review (SR) development. However, the data extraction process still relies too much on manual effort which is slow, costly, and subject to human error. In this study, we developed a text summarization system aimed at enhancing productivity and reducing errors in the traditional data extraction process.\n\n\nMETHODS\nWe developed a computer system that used machine learning and natural language processing approaches to automatically generate summaries of full-text scientific publications. The summaries at the sentence and fragment levels were evaluated in finding common clinical SR data elements such as sample size, group size, and PICO values. We compared the computer-generated summaries with human written summaries (title and abstract) in terms of the presence of necessary information for the data extraction as presented in the Cochrane review's study characteristics tables.\n\n\nRESULTS\nAt the sentence level, the computer-generated summaries covered more information than humans do for systematic reviews (recall 91.2% vs. 83.8%, p<0.001). They also had a better density of relevant sentences (precision 59% vs. 39%, p<0.001). At the fragment level, the ensemble approach combining rule-based, concept mapping, and dictionary-based methods performed better than individual methods alone, achieving an 84.7% F-measure.\n\n\nCONCLUSION\nComputer-generated summaries are potential alternative information sources for data extraction in systematic review development. Machine learning and natural language processing are promising approaches to the development of such an extractive summarization system.",
"title": ""
},
{
"docid": "7f897e5994685f0b158da91cef99c855",
"text": "Cloud computing and its pay-as-you-go model continue to provide significant cost benefits and a seamless service delivery model for cloud consumers. The evolution of small-scale and large-scale geo-distributed datacenters operated and managed by individual cloud service providers raises new challenges in terms of effective global resource sharing and management of autonomously-controlled individual datacenter resources. Earlier solutions for geo-distributed clouds have focused primarily on achieving global efficiency in resource sharing that results in significant inefficiencies in local resource allocation for individual datacenters leading to unfairness in revenue and profit earned. In this paper, we propose a new contracts-based resource sharing model for federated geo-distributed clouds that allows cloud service providers to establish resource sharing contracts with individual datacenters apriori for defined time intervals during a 24 hour time period. Based on the established contracts, individual cloud service providers employ a cost-aware job scheduling and provisioning algorithm that enables tasks to complete and meet their response time requirements. The proposed techniques are evaluated through extensive experiments using realistic workloads and the results demonstrate the effectiveness, scalability and resource sharing efficiency of the proposed model.",
"title": ""
}
] | scidocsrr |
480c294d9d88c3aace8b12a9c0a1d89b | A typology of crowdfunding sponsors: Birds of a feather flock together? | [
{
"docid": "e267fe4d2d7aa74ded8988fcdbfb3474",
"text": "Consumers have recently begun to play a new role in some markets: that of providing capital and investment support to the offering. This phenomenon, called crowdfunding, is a collective effort by people who network and pool their money together, usually via the Internet, in order to invest in and support efforts initiated by other people or organizations. Successful service businesses that organize crowdfunding and act as intermediaries are emerging, attesting to the viability of this means of attracting investment. Employing a “Grounded Theory” approach, this paper performs an in-depth qualitative analysis of three cases involving crowdfunding initiatives: SellaBand in the music business, Trampoline in financial services, and Kapipal in non-profit services. These cases were selected to represent a diverse set of crowdfunding operations that vary in terms of risk/return for the investorconsumer and the type of consumer involvement. The analysis offers important insights about investor behaviour in crowdfunding service models, the potential determinants of such behaviour, and variations in behaviour and determinants across different service models. The findings have implications for service managers interested in launching and/or managing crowdfunding initiatives, and for service theory in terms of extending the consumer’s role from co-production and co-creation to investment.",
"title": ""
},
{
"docid": "8654b5134dadc076a6298526e60f66fb",
"text": "Ideas competitions appear to be a promising tool for crowdsourcing and open innovation processes, especially for business-to-business software companies. active participation of potential lead users is the key to success. Yet a look at existing ideas competitions in the software field leads to the conclusion that many information technology (It)–based ideas competitions fail to meet requirements upon which active participation is established. the paper describes how activation-enabling functionalities can be systematically designed and implemented in an It-based ideas competition for enterprise resource planning software. We proceeded to evaluate the outcomes of these design measures and found that participation can be supported using a two-step model. the components of the model support incentives and motives of users. Incentives and motives of the users then support the process of activation and consequently participation throughout the ideas competition. this contributes to the successful implementation and maintenance of the ideas competition, thereby providing support for the development of promising innovative ideas. the paper concludes with a discussion of further activation-supporting components yet to be implemented and points to rich possibilities for future research in these areas.",
"title": ""
}
] | [
{
"docid": "41a0b9797c556368f84e2a05b80645f3",
"text": "This paper describes and evaluates log-linear parsing models for Combinatory Categorial Grammar (CCG). A parallel implementation of the L-BFGS optimisation algorithm is described, which runs on a Beowulf cluster allowing the complete Penn Treebank to be used for estimation. We also develop a new efficient parsing algorithm for CCG which maximises expected recall of dependencies. We compare models which use all CCG derivations, including nonstandard derivations, with normal-form models. The performances of the two models are comparable and the results are competitive with existing wide-coverage CCG parsers.",
"title": ""
},
{
"docid": "d2f2137602149b5062f60e7325d3610f",
"text": "Recently a revision of the cell theory has been proposed, which has several implications both for physiology and pathology. This revision is founded on adapting the old Julius von Sach’s proposal (1892) of the Energide as the fundamental universal unit of eukaryotic life. This view maintains that, in most instances, the living unit is the symbiotic assemblage of the cell periphery complex organized around the plasma membrane, some peripheral semi-autonomous cytosol organelles (as mitochondria and plastids, which may be or not be present), and of the Energide (formed by the nucleus, microtubules, and other satellite structures). A fundamental aspect is the proposal that the Energide plays a pivotal and organizing role of the entire symbiotic assemblage (see Appendix 1). The present paper discusses how the Energide paradigm implies a revision of the concept of the internal milieu. As a matter of fact, the Energide interacts with the cytoplasm that, in turn, interacts with the interstitial fluid, and hence with the medium that has been, classically, known as the internal milieu. Some implications of this aspect have been also presented with the help of a computational model in a mathematical Appendix 2 to the paper. Finally, relevances of the Energide concept for the information handling in the central nervous system are discussed especially in relation to the inter-Energide exchange of information.",
"title": ""
},
{
"docid": "d80580490ac7d968ff08c2a9ee159028",
"text": "Statistical relational AI (StarAI) aims at reasoning and learning in noisy domains described in terms of objects and relationships by combining probability with first-order logic. With huge advances in deep learning in the current years, combining deep networks with first-order logic has been the focus of several recent studies. Many of the existing attempts, however, only focus on relations and ignore object properties. The attempts that do consider object properties are limited in terms of modelling power or scalability. In this paper, we develop relational neural networks (RelNNs) by adding hidden layers to relational logistic regression (the relational counterpart of logistic regression). We learn latent properties for objects both directly and through general rules. Back-propagation is used for training these models. A modular, layer-wise architecture facilitates utilizing the techniques developed within deep learning community to our architecture. Initial experiments on eight tasks over three real-world datasets show that RelNNs are promising models for relational learning.",
"title": ""
},
{
"docid": "c9431b5a214dba08ca50706a27b2af7c",
"text": "For artificial general intelligence (AGI) it would be efficient if multiple users trained the same giant neural network, permitting parameter reuse, without catastrophic forgetting. PathNet is a first step in this direction. It is a neural network algorithm that uses agents embedded in the neural network whose task is to discover which parts of the network to re-use for new tasks. Agents are pathways (views) through the network which determine the subset of parameters that are used and updated by the forwards and backwards passes of the backpropogation algorithm. During learning, a tournament selection genetic algorithm is used to select pathways through the neural network for replication and mutation. Pathway fitness is the performance of that pathway measured according to a cost function. We demonstrate successful transfer learning; fixing the parameters along a path learned on task A and re-evolving a new population of paths for task B, allows task B to be learned faster than it could be learned from scratch or after fine-tuning. Paths evolved on task B re-use parts of the optimal path evolved on task A. Positive transfer was demonstrated for binary MNIST, CIFAR, and SVHN supervised learning classification tasks, and a set of Atari and Labyrinth reinforcement learning tasks, suggesting PathNets have general applicability for neural network training. Finally, PathNet also significantly improves the robustness to hyperparameter choices of a parallel asynchronous reinforcement learning algorithm (A3C).",
"title": ""
},
{
"docid": "f83481aef8fc3f61a6ecbe3548c9bde2",
"text": "Establishing unique identities for both humans and end systems has been an active research problem in the security community, giving rise to innovative machine learning-based authentication techniques. Although such techniques offer an automated method to establish identity, they have not been vetted against sophisticated attacks that target their core machine learning technique. This paper demonstrates that mimicking the unique signatures generated by host fingerprinting and biometric authentication systems is possible. We expose the ineffectiveness of underlying machine learning classification models by constructing a blind attack based around the query synthesis framework and utilizing Explainable–AI (XAI) techniques. We launch an attack in under 130 queries on a state-of-the-art face authentication system, and under 100 queries on a host authentication system. We examine how these attacks can be defended against and explore their limitations. XAI provides an effective means for adversaries to infer decision boundaries and provides a new way forward in constructing attacks against systems using machine learning models for authentication.",
"title": ""
},
{
"docid": "e99343a0ab1eb9007df4610ae35dec97",
"text": "Who did what to whom is a major focus in natural language understanding, which is right the aim of semantic role labeling (SRL). Although SRL is naturally essential to text comprehension tasks, it is surprisingly ignored in previous work. This paper thus makes the first attempt to let SRL enhance text comprehension and inference through specifying verbal arguments and their corresponding semantic roles. In terms of deep learning models, our embeddings are enhanced by semantic role labels for more fine-grained semantics. We show that the salient labels can be conveniently added to existing models and significantly improve deep learning models in challenging text comprehension tasks. Extensive experiments on benchmark machine reading comprehension and inference datasets verify that the proposed semantic learning helps our system reach new state-of-the-art.",
"title": ""
},
{
"docid": "8856fa1c0650970da31fae67cd8dcd86",
"text": "In this paper, a new topology for rectangular waveguide bandpass and low-pass filters is presented. A simple, accurate, and robust design technique for these novel meandered waveguide filters is provided. The proposed filters employ a concatenation of ±90° $E$ -plane mitered bends (±90° EMBs) with different heights and lengths, whose dimensions are consecutively and independently calculated. Each ±90° EMB satisfies a local target reflection coefficient along the device so that they can be calculated separately. The novel structures allow drastically reduce the total length of the filters and embed bends if desired, or even to provide routing capabilities. Furthermore, the new meandered topology allows the introduction of transmission zeros above the passband of the low-pass filter, which can be controlled by the free parameters of the ±90° EMBs. A bandpass and a low-pass filter with meandered topology have been designed following the proposed novel technique. Measurements of the manufactured prototypes are also included to validate the novel topology and design technique, achieving excellent agreement with the simulation results.",
"title": ""
},
{
"docid": "f327ed315be7d47b9f63dd9498999ae4",
"text": "In this paper we propose a deep architecture for detecting people attributes (e.g. gender, race, clothing …) in surveillance contexts. Our proposal explicitly deal with poor resolution and occlusion issues that often occur in surveillance footages by enhancing the images by means of Deep Convolutional Generative Adversarial Networks (DCGAN). Experiments show that by combining both our Generative Reconstruction and Deep Attribute Classification Network we can effectively extract attributes even when resolution is poor and in presence of strong occlusions up to 80% of the whole person figure.",
"title": ""
},
{
"docid": "19a1a5d69037f0072f67c785031b0881",
"text": "In recent years, advances in the design of convolutional neural networks have resulted in signicant improvements on the image classication and object detection problems. One of the advances is networks built by stacking complex cells, seen in such networks as InceptionNet and NasNet. ese cells are either constructed by hand, generated by generative networks or discovered by search. Unlike conventional networks (where layers consist of a convolution block, sampling and non linear unit), the new cells feature more complex designs consisting of several lters and other operators connected in series and parallel. Recently, several cells have been proposed or generated that are supersets of previously proposed custom or generated cells. Inuenced by this, we introduce a network construction method based on EnvelopeNets. An EnvelopeNet is a deep convolutional neural network of stacked EnvelopeCells. EnvelopeCells are supersets (or envelopes) of previously proposed handcraed and generated cells. We propose a method to construct improved network architectures by restructuring EnvelopeNets. e algorithm restructures an EnvelopeNet by rearranging blocks in the network. It identies blocks to be restructured using metrics derived from the featuremaps collected during a partial training run of the EnvelopeNet. e method requires less computation resources to generate an architecture than an optimized architecture search over the entire search space of blocks. e restructured networks have higher accuracy on the image classication problem on a representative dataset than both the generating EnvelopeNet and an equivalent arbitrary network.",
"title": ""
},
{
"docid": "c8f3b235811dd64b9b1d35d596ff22f5",
"text": "Open domain response generation has achieved remarkable progress in recent years, but sometimes yields short and uninformative responses. We propose a new paradigm, prototypethen-edit for response generation, that first retrieves a prototype response from a pre-defined index and then edits the prototype response according to the differences between the prototype context and current context. Our motivation is that the retrieved prototype provides a good start-point for generation because it is grammatical and informative, and the post-editing process further improves the relevance and coherence of the prototype. In practice, we design a contextaware editing model that is built upon an encoder-decoder framework augmented with an editing vector. We first generate an edit vector by considering lexical differences between a prototype context and current context. After that, the edit vector and the prototype response representation are fed to a decoder to generate a new response. Experiment results on a large scale dataset demonstrate that our new paradigm significantly increases the relevance, diversity and originality of generation results, compared to traditional generative models. Furthermore, our model outperforms retrieval-based methods in terms of relevance and originality.",
"title": ""
},
{
"docid": "7709df997c72026406d257c85dacb271",
"text": "This paper addresses the task of document retrieval based on the degree of document relatedness to the meanings of a query by presenting a semantic-enabled language model. Our model relies on the use of semantic linking systems for forming a graph representation of documents and queries, where nodes represent concepts extracted from documents and edges represent semantic relatedness between concepts. Based on this graph, our model adopts a probabilistic reasoning model for calculating the conditional probability of a query concept given values assigned to document concepts. We present an integration framework for interpolating other retrieval systems with the presented model in this paper. Our empirical experiments on a number of TREC collections show that the semantic retrieval has a synergetic impact on the results obtained through state of the art keyword-based approaches, and the consideration of semantic information obtained from entity linking on queries and documents can complement and enhance the performance of other retrieval models.",
"title": ""
},
{
"docid": "2a443df82f61b198ceca472a7a080361",
"text": "Despite rapid technological advances in computer hardware and software, insecure behavior by individual computer users continues to be a significant source of direct cost and productivity loss. Why do individuals, many of whom are aware of the possible grave consequences of low-level insecure behaviors such as failure to backup work and disclosing passwords, continue to engage in unsafe computing practices? In this article we propose a conceptual model of this behavior as the outcome of a boundedly-rational choice process. We explore this model in a survey of undergraduate students (N = 167) at two large public universities. We asked about the frequency with which they engaged in five commonplace but unsafe computing practices, and probed their decision processes with regard to these practices. Although our respondents saw themselves as knowledgeable, competent users, and were broadly aware that serious consequences were quite likely to result, they reported frequent unsafe computing behaviors. We discuss the implications of these findings both for further research on risky computing practices and for training and enforcement policies that will be needed in the organizations these students will shortly be entering.",
"title": ""
},
{
"docid": "265b352775956004436b438574ee2d91",
"text": "In the fashion industry, demand forecasting is particularly complex: companies operate with a large variety of short lifecycle products, deeply influenced by seasonal sales, promotional events, weather conditions, advertising and marketing campaigns, on top of festivities and socio-economic factors. At the same time, shelf-out-of-stock phenomena must be avoided at all costs. Given the strong seasonal nature of the products that characterize the fashion sector, this paper aims to highlight how the Fourier method can represent an easy and more effective forecasting method compared to other widespread heuristics normally used. For this purpose, a comparison between the fast Fourier transform algorithm and another two techniques based on moving average and exponential smoothing was carried out on a set of 4year historical sales data of a €60+ million turnover mediumto large-sized Italian fashion company, which operates in the women’s textiles apparel and clothing sectors. The entire analysis was performed on a common spreadsheet, in order to demonstrate that accurate results exploiting advanced numerical computation techniques can be carried out without necessarily using expensive software.",
"title": ""
},
{
"docid": "903dc946b338c178634fcf9f14e1b1eb",
"text": "Detecting system anomalies is an important problem in many fields such as security, fault management, and industrial optimization. Recently, invariant network has shown to be powerful in characterizing complex system behaviours. In the invariant network, a node represents a system component and an edge indicates a stable, significant interaction between two components. Structures and evolutions of the invariance network, in particular the vanishing correlations, can shed important light on locating causal anomalies and performing diagnosis. However, existing approaches to detect causal anomalies with the invariant network often use the percentage of vanishing correlations to rank possible casual components, which have several limitations: (1) fault propagation in the network is ignored, (2) the root casual anomalies may not always be the nodes with a high percentage of vanishing correlations, (3) temporal patterns of vanishing correlations are not exploited for robust detection, and (4) prior knowledge on anomalous nodes are not exploited for (semi-)supervised detection. To address these limitations, in this article we propose a network diffusion based framework to identify significant causal anomalies and rank them. Our approach can effectively model fault propagation over the entire invariant network and can perform joint inference on both the structural and the time-evolving broken invariance patterns. As a result, it can locate high-confidence anomalies that are truly responsible for the vanishing correlations and can compensate for unstructured measurement noise in the system. Moreover, when the prior knowledge on the anomalous status of some nodes are available at certain time points, our approach is able to leverage them to further enhance the anomaly inference accuracy. When the prior knowledge is noisy, our approach also automatically learns reliable information and reduces impacts from noises. By performing extensive experiments on synthetic datasets, bank information system datasets, and coal plant cyber-physical system datasets, we demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "d59e64c1865193db3aaecc202f688690",
"text": "Event-related desynchronization/synchronization patterns during right/left motor imagery (MI) are effective features for an electroencephalogram-based brain-computer interface (BCI). As MI tasks are subject-specific, selection of subject-specific discriminative frequency components play a vital role in distinguishing these patterns. This paper proposes a new discriminative filter bank (FB) common spatial pattern algorithm to extract subject-specific FB for MI classification. The proposed method enhances the classification accuracy in BCI competition III dataset IVa and competition IV dataset IIb. Compared to the performance offered by the existing FB-based method, the proposed algorithm offers error rate reductions of 17.42% and 8.9% for BCI competition datasets III and IV, respectively.",
"title": ""
},
{
"docid": "70574bc8ad9fece3328ca77f17eec90f",
"text": "Five different proposed measures of similarity or semantic distance in WordNet were experimentally compared by examining their performance in a real-word spelling correction system. It was found that Jiang and Conrath’s measure gave the best results overall. That of Hirst and St-Onge seriously over-related, that of Resnik seriously under-related, and those of Lin and of Leacock and Chodorow fell in between.",
"title": ""
},
{
"docid": "7b13637b634b11b3061f7ebe0c64b3a6",
"text": "Analytical calculation methods for all the major components of the synchronous inductance of tooth-coil permanent-magnet synchronous machines are reevaluated in this paper. The inductance estimation is different in the tooth-coil machine compared with the one in the traditional rotating field winding machine. The accuracy of the analytical torque calculation highly depends on the estimated synchronous inductance. Despite powerful finite element method (FEM) tools, an accurate and fast analytical method is required at an early design stage to find an initial machine design structure with the desired performance. The results of the analytical inductance calculation are verified and assessed in terms of accuracy with the FEM simulation results and with the prototype measurement results.",
"title": ""
},
{
"docid": "f1681e1c8eef93f15adb5a4d7313c94c",
"text": "The paper investigates techniques for extracting data from HTML sites through the use of automatically generated wrappers. To automate the wrapper generation and the data extraction process, the paper develops a novel technique to compare HTML pages and generate a wrapper based on their similarities and differences. Experimental results on real-life data-intensive Web sites confirm the feasibility of the approach.",
"title": ""
},
{
"docid": "529ee26c337908488a5912835cc966c3",
"text": "Nucleic acids have emerged as powerful biological and nanotechnological tools. In biological and nanotechnological experiments, methods of extracting and purifying nucleic acids from various types of cells and their storage are critical for obtaining reproducible experimental results. In nanotechnological experiments, methods for regulating the conformational polymorphism of nucleic acids and increasing sequence selectivity for base pairing of nucleic acids are important for developing nucleic acid-based nanomaterials. However, dearth of media that foster favourable behaviour of nucleic acids has been a bottleneck for promoting the biology and nanotechnology using the nucleic acids. Ionic liquids (ILs) are solvents that may be potentially used for controlling the properties of the nucleic acids. Here, we review researches regarding the behaviour of nucleic acids in ILs. The efficiency of extraction and purification of nucleic acids from biological samples is increased by IL addition. Moreover, nucleic acids in ILs show long-term stability, which maintains their structures and enhances nuclease resistance. Nucleic acids in ILs can be used directly in polymerase chain reaction and gene expression analysis with high efficiency. Moreover, the stabilities of the nucleic acids for duplex, triplex, and quadruplex (G-quadruplex and i-motif) structures change drastically with IL cation-nucleic acid interactions. Highly sensitive DNA sensors have been developed based on the unique changes in the stability of nucleic acids in ILs. The behaviours of nucleic acids in ILs detailed here should be useful in the design of nucleic acids to use as biological and nanotechnological tools.",
"title": ""
}
] | scidocsrr |
2b3ef3368782f4c4de17ddecc03f3a18 | Habits in everyday life: thought, emotion, and action. | [
{
"docid": "a25041f4b95b68d2b8b9356d2f383b69",
"text": "The authors review evidence that self-control may consume a limited resource. Exerting self-control may consume self-control strength, reducing the amount of strength available for subsequent self-control efforts. Coping with stress, regulating negative affect, and resisting temptations require self-control, and after such self-control efforts, subsequent attempts at self-control are more likely to fail. Continuous self-control efforts, such as vigilance, also degrade over time. These decrements in self-control are probably not due to negative moods or learned helplessness produced by the initial self-control attempt. These decrements appear to be specific to behaviors that involve self-control; behaviors that do not require self-control neither consume nor require self-control strength. It is concluded that the executive component of the self--in particular, inhibition--relies on a limited, consumable resource.",
"title": ""
}
] | [
{
"docid": "2e6c44dd18f44512528752101f2161be",
"text": "This paper presents a LVDS (low voltage differential signal) driver, which works at 2Gbps, with a pre-emphasis circuit compensating the attenuation of limited bandwidth of channel. To make the output common-mode (CM) voltage stable over process, temperature, and supply voltage variations, a closed-loop negative feedback circuit is added in this work. The LVDS driver is designed in 0.13um CMOS technology using both thick (3.3V) and thin (1.2V) gate oxide device, simulated with transmission line model and package parasitic model. The simulated results show that this driver can operate up to 2Gbps with random data patterns.",
"title": ""
},
{
"docid": "832e1a93428911406759f696eb9cb101",
"text": "Reinforcement learning provides both qualitative and quantitative frameworks for understanding and modeling adaptive decision-making in the face of rewards and punishments. Here we review the latest dispatches from the forefront of this field, and map out some of the territories where lie monsters.",
"title": ""
},
{
"docid": "8994337878d2ac35464cb4af5e32fccc",
"text": "We describe an algorithm for approximate inference in graphical models based on Hölder’s inequality that provides upper and lower bounds on common summation problems such as computing the partition function or probability of evidence in a graphical model. Our algorithm unifies and extends several existing approaches, including variable elimination techniques such as minibucket elimination and variational methods such as tree reweighted belief propagation and conditional entropy decomposition. We show that our method inherits benefits from each approach to provide significantly better bounds on sum-product tasks.",
"title": ""
},
{
"docid": "65500c886a91a58ac95365c1e8539902",
"text": "This introductory overview tutorial on social network analysis (SNA) demonstrates through theory and practical case studies applications to research, particularly on social media, digital interaction and behavior records. NodeXL provides an entry point for non-programmers to access the concepts and core methods of SNA and allows anyone who can make a pie chart to now build, analyze and visualize complex networks.",
"title": ""
},
{
"docid": "9c2debf407dce58d77910ccdfc55a633",
"text": "In cybersecurity competitions, participants either create new or protect preconfigured information systems and then defend these systems against attack in a real-world setting. Institutions should consider important structural and resource-related issues before establishing such a competition. Critical infrastructures increasingly rely on information systems and on the Internet to provide connectivity between systems. Maintaining and protecting these systems requires an education in information warfare that doesn't merely theorize and describe such concepts. A hands-on, active learning experience lets students apply theoretical concepts in a physical environment. Craig Kaucher and John Saunders found that even for management-oriented graduate courses in information assurance, such an experience enhances the students' understanding of theoretical concepts. Cybersecurity exercises aim to provide this experience in a challenging and competitive environment. Many educational institutions use and implement these exercises as part of their computer science curriculum, and some are organizing competitions with commercial partners as capstone exercises, ad hoc hack-a-thons, and scenario-driven, multiday, defense-only competitions. Participants have exhibited much enthusiasm for these exercises, from the DEFCON capture-the-flag exercise to the US Military Academy's Cyber Defense Exercise (CDX). In February 2004, the US National Science Foundation sponsored the Cyber Security Exercise Workshop aimed at harnessing this enthusiasm and interest. The educators, students, and government and industry representatives attending the workshop discussed the feasibility and desirability of establishing regular cybersecurity exercises for postsecondary-level students. This article summarizes the workshop report.",
"title": ""
},
{
"docid": "eb18d3bab3346ede781d11433f1267b4",
"text": "INTRODUCTION\nIn the developing countries, diabetes mellitus as a chronic diseases, have replaced infectious diseases as the main causes of morbidity and mortality. International Diabetes Federation (IDF) recently estimates 382 million people have diabetes globally and more than 34.6 million people in the Middle East Region and this number will increase to 67.9 million by 2035. The aim of this study was to analyze Iran's research performance on diabetes in national and international context.\n\n\nMETHODS\nThis Scientometric analysis is based on the Iranian publication data in diabetes research retrieved from the Scopus citation database till the end of 2014. The string used to retrieve the data was developed using \"diabetes\" keyword in title, abstract and keywords, and finally Iran in the affiliation field was our main string.\n\n\nRESULTS\nIran's cumulative publication output in diabetes research consisted of 4425 papers from 1968 to 2014, with an average number of 96.2 papers per year and an annual average growth rate of 25.5%. Iran ranked 25th place with 4425 papers among top 25 countries with a global share of 0.72 %. Average of Iran's publication output was 6.19 citations per paper. The average citation per paper for Iranian publications in diabetes research increased from 1.63 during 1968-1999 to 10.42 for 2014.\n\n\nCONCLUSIONS\nAlthough diabetic population of Iran is increasing, number of diabetes research is not remarkable. International Diabetes Federation suggested increased funding for research in diabetes in Iran for cost-effective diabetes prevention and treatment. In addition to universal and comprehensive services for diabetes care and treatment provided by Iranian health care system, Iranian policy makers should invest more on diabetes research.",
"title": ""
},
{
"docid": "102bec350390b46415ae07128cb4e77f",
"text": "We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene’s foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation.",
"title": ""
},
{
"docid": "8e18fa3850177d016a85249555621723",
"text": "Obstacle fusion algorithms usually perform obstacle association and gating in order to improve the obstacle position if it was detected by multiple sensors. However, this strategy is not common in multi sensor occupancy grid fusion. Thus, the quality of the fused grid, in terms of obstacle position accuracy, largely depends on the sensor with the lowest accuracy. In this paper an efficient method to associate obstacles across sensor grids is proposed. Imprecise sensors are discounted locally in cells where a more accurate sensor, that detected the same obstacle, derived free space. Furthermore, fixed discount factors to optimize false negative and false positive rates are used. Because of its generic formulation with the covariance of each sensor grid, the method is scalable to any sensor setup. The quantitative evaluation with a highly precise navigation map shows an increased obstacle position accuracy compared to standard evidential occupancy grid fusion.",
"title": ""
},
{
"docid": "f68f82e0d7f165557433580ad1e3e066",
"text": "Four experiments demonstrate effects of prosodic structure on speech production latencies. Experiments 1 to 3 exploit a modified version of the Sternberg et al. (1978, 1980) prepared speech production paradigm to look for evidence of the generation of prosodic structure during the final stages of sentence production. Experiment 1 provides evidence that prepared sentence production latency is a function of the number of phonological words that a sentence comprises when syntactic structure, number of lexical items, and number of syllables are held constant. Experiment 2 demonstrated that production latencies in Experiment 1 were indeed determined by prosodic structure rather than the number of content words that a sentence comprised. The phonological word effect was replicated in Experiment 3 using utterances with a different intonation pattern and phrasal structure. Finally, in Experiment 4, an on-line version of the sentence production task provides evidence for the phonological word as the preferred unit of articulation during the on-line production of continuous speech. Our findings are consistent with the hypothesis that the phonological word is a unit of processing during the phonological encoding of connected speech. q 1997 Academic Press",
"title": ""
},
{
"docid": "82180726cc1aaaada69f3b6cb0e89acc",
"text": "The wheelchair is the major means of transport for physically disabled people. However, it cannot overcome architectural barriers such as curbs and stairs. In this paper, the authors proposed a method to avoid falling down of a wheeled inverted pendulum type robotic wheelchair for climbing stairs. The problem of this system is that the feedback gain of the wheels cannot be set high due to modeling errors and gear backlash, which results in the movement of wheels. Therefore, the wheels slide down the stairs or collide with the side of the stairs, and finally the wheelchair falls down. To avoid falling down, the authors proposed a slider control strategy based on skyhook model in order to decrease the movement of wheels, and a rotary link control strategy based on the staircase dimensions in order to avoid collision or slide down. The effectiveness of the proposed fall avoidance control strategy was validated by ODE simulations and the prototype wheelchair. Keywords—EPW, fall avoidance control, skyhook, wheeled inverted pendulum.",
"title": ""
},
{
"docid": "f84f7ad81967a6704490243b2b1fbbe4",
"text": "A fundamental question in frontal lobe function is how motivational and emotional parameters of behavior apply to executive processes. Recent advances in mood and personality research and the technology and methodology of brain research provide opportunities to address this question empirically. Using event-related-potentials to track error monitoring in real time, the authors demonstrated that variability in the amplitude of the error-related negativity (ERN) is dependent on mood and personality variables. College students who are high on negative affect (NA) and negative emotionality (NEM) displayed larger ERN amplitudes early in the experiment than participants who are low on these dimensions. As the high-NA and -NEM participants disengaged from the task, the amplitude of the ERN decreased. These results reveal that affective distress and associated behavioral patterns are closely related with frontal lobe executive functions.",
"title": ""
},
{
"docid": "b31aaa6805524495f57a2f54d0dd86f1",
"text": "CLINICAL HISTORY A 54-year-old white female was seen with a 10-year history of episodes of a burning sensation of the left ear. The episodes are preceded by nausea and a hot feeling for about 15 seconds and then the left ear becomes visibly red for an average of about 1 hour, with a range from about 30 minutes to 2 hours. About once every 2 years, she would have a flurry of episodes occurring over about a 1-month period during which she would average about five episodes with a range of 1 to 6. There was also an 18-year history of migraine without aura occurring about once a year. At the age of 36 years, she developed left-sided pulsatile tinnitus. A cerebral arteriogram revealed a proximal left internal carotid artery occlusion of uncertain etiology after extensive testing. An MRI scan at the age of 45 years was normal. Neurological examination was normal. A carotid ultrasound study demonstrated complete occlusion of the left internal carotid artery and a normal right. Question.—What is the diagnosis?",
"title": ""
},
{
"docid": "1d29d30089ffd9748c925a20f8a1216e",
"text": "• Users may freely distribute the URL that is used to identify this publication. • Users may download and/or print one copy of the publication from the University of Birmingham research portal for the purpose of private study or non-commercial research. • User may use extracts from the document in line with the concept of ‘fair dealing’ under the Copyright, Designs and Patents Act 1988 (?) • Users may not further distribute the material nor use it for the purposes of commercial gain.",
"title": ""
},
{
"docid": "af0dfe672a8828587e3b27ef473ea98e",
"text": "Machine comprehension of text is the overarching goal of a great deal of research in natural language processing. The Machine Comprehension Test (Richardson et al., 2013) was recently proposed to assess methods on an open-domain, extensible, and easy-to-evaluate task consisting of two datasets. In this paper we develop a lexical matching method that takes into account multiple context windows, question types and coreference resolution. We show that the proposed method outperforms the baseline of Richardson et al. (2013), and despite its relative simplicity, is comparable to recent work using machine learning. We hope that our approach will inform future work on this task. Furthermore, we argue that MC500 is harder than MC160 due to the way question answer pairs were created.",
"title": ""
},
{
"docid": "565f815ef0c1dd5107f053ad39dade20",
"text": "Intensity inhomogeneity often occurs in real-world images, which presents a considerable challenge in image segmentation. The most widely used image segmentation algorithms are region-based and typically rely on the homogeneity of the image intensities in the regions of interest, which often fail to provide accurate segmentation results due to the intensity inhomogeneity. This paper proposes a novel region-based method for image segmentation, which is able to deal with intensity inhomogeneities in the segmentation. First, based on the model of images with intensity inhomogeneities, we derive a local intensity clustering property of the image intensities, and define a local clustering criterion function for the image intensities in a neighborhood of each point. This local clustering criterion function is then integrated with respect to the neighborhood center to give a global criterion of image segmentation. In a level set formulation, this criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, by minimizing this energy, our method is able to simultaneously segment the image and estimate the bias field, and the estimated bias field can be used for intensity inhomogeneity correction (or bias correction). Our method has been validated on synthetic images and real images of various modalities, with desirable performance in the presence of intensity inhomogeneities. Experiments show that our method is more robust to initialization, faster and more accurate than the well-known piecewise smooth model. As an application, our method has been used for segmentation and bias correction of magnetic resonance (MR) images with promising results.",
"title": ""
},
{
"docid": "f6e8bda7c3915fa023f1b0f88f101f46",
"text": "This paper presents a formulation to the obstacle avoidance problem for semi-autonomous ground vehicles. The planning and tracking problems have been divided into a two-level hierarchical controller. The high level solves a nonlinear model predictive control problem to generate a feasible and obstacle free path. It uses a nonlinear vehicle model and utilizes a coordinate transformation which uses vehicle position along a path as the independent variable. The low level uses a higher fidelity model and solves the MPC problem with a sequential quadratic programming approach to track the planned path. Simulations show the method’s ability to safely avoid multiple obstacles while tracking the lane centerline. Experimental tests on a semi-autonomous passenger vehicle driving at high speed on ice show the effectiveness of the approach.",
"title": ""
},
{
"docid": "3f5a6580d3c8d13a8cefaea9fd6f68b2",
"text": "Most theorizing on the relationship between corporate social/environmental performance (CSP) and corporate financial performance (CFP) assumes that the current evidence is too fractured or too variable to draw any generalizable conclusions. With this integrative, quantitative study, we intend to show that the mainstream claim that we have little generalizable knowledge about CSP and CFP is built on shaky grounds. Providing a methodologically more rigorous review than previous efforts, we conduct a meta-analysis of 52 studies (which represent the population of prior quantitative inquiry) yielding a total sample size of 33,878 observations. The metaanalytic findings suggest that corporate virtue in the form of social responsibility and, to a lesser extent, environmental responsibility is likely to pay off, although the operationalizations of CSP and CFP also moderate the positive association. For example, CSP appears to be more highly correlated with accounting-based measures of CFP than with market-based indicators, and CSP reputation indices are more highly correlated with CFP than are other indicators of CSP. This meta-analysis establishes a greater degree of certainty with respect to the CSP–CFP relationship than is currently assumed to exist by many business scholars.",
"title": ""
},
{
"docid": "ccc3cf21c4c97f9c56915b4d1e804966",
"text": "In this paper we present a prototype of a Microwave Imaging (MI) system for breast cancer detection. Our system is based on low-cost off-the-shelf microwave components, custom-made antennas, and a small form-factor processing system with an embedded Field-Programmable Gate Array (FPGA) for accelerating the execution of the imaging algorithm. We show that our system can compete with a vector network analyzer in terms of accuracy, and it is more than 20x faster than a high-performance server at image reconstruction.",
"title": ""
},
{
"docid": "6f72afeb0a2c904e17dca27f53be249e",
"text": "With its three-term functionality offering treatment of both transient and steady-state responses, proportional-integral-derivative (PID) control provides a generic and efficient solution to real-world control problems. The wide application of PID control has stimulated and sustained research and development to \"get the best out of PID\", and \"the search is on to find the next key technology or methodology for PID tuning\". This article presents remedies for problems involving the integral and derivative terms. PID design objectives, methods, and future directions are discussed. Subsequently, a computerized simulation-based approach is presented, together with illustrative design results for first-order, higher order, and nonlinear plants. Finally, we discuss differences between academic research and industrial practice, so as to motivate new research directions in PID control.",
"title": ""
},
{
"docid": "072d187f56635ebc574f2eedb8a91d14",
"text": "With the development of location-based social networks, an increasing amount of individual mobility data accumulate over time. The more mobility data are collected, the better we can understand the mobility patterns of users. At the same time, we know a great deal about online social relationships between users, providing new opportunities for mobility prediction. This paper introduces a noveltyseeking driven predictive framework for mining location-based social networks that embraces not only a bunch of Markov-based predictors but also a series of location recommendation algorithms. The core of this predictive framework is the cooperation mechanism between these two distinct models, determining the propensity of seeking novel and interesting locations.",
"title": ""
}
] | scidocsrr |
1bb30aafa0064f1e7701cab0e6b4d216 | A new approach to wafer sawing: stealth laser dicing technology | [
{
"docid": "ef706ea7a6dcd5b71602ea4c28eb9bd3",
"text": "\"Stealth Dicing (SD) \" was developed to solve such inherent problems of dicing process as debris contaminants and unnecessary thermal damage on work wafer. In SD, laser beam power of transmissible wavelength is absorbed only around focal point in the wafer by utilizing temperature dependence of absorption coefficient of the wafer. And these absorbed power forms modified layer in the wafer, which functions as the origin of separation in followed separation process. Since only the limited interior region of a wafer is processed by laser beam irradiation, damages and debris contaminants can be avoided in SD. Besides characteristics of devices will not be affected. Completely dry process of SD is another big advantage over other dicing methods.",
"title": ""
},
{
"docid": "b7617b5dd2a6f392f282f6a34f5b6751",
"text": "In the semiconductor market, the trend of packaging for die stacking technology moves to high density with thinner chips and higher capacity of memory devices. Moreover, the wafer sawing process is becoming more important for thin wafer, because its process speed tends to affect sawn quality and yield. ULK (Ultra low-k) device could require laser grooving application to reduce the stress during wafer sawing. Furthermore under 75um-thick thin low-k wafer is not easy to use the laser grooving application. So, UV laser dicing technology that is very useful tool for Si wafer was selected as full cut application, which has been being used on low-k wafer as laser grooving method.",
"title": ""
}
] | [
{
"docid": "c23a86bc6d8011dab71ac5e1e2051c3b",
"text": "The most widely used machine learning frameworks require users to carefully tune their memory usage so that the deep neural network (DNN) fits into the DRAM capacity of a GPU. This restriction hampers a researcher’s flexibility to study different machine learning algorithms, forcing them to either use a less desirable network architecture or parallelize the processing across multiple GPUs. We propose a runtime memory manager that virtualizes the memory usage of DNNs such that both GPU and CPU memory can simultaneously be utilized for training larger DNNs. Our virtualized DNN (vDNN) reduces the average memory usage of AlexNet by 61% and OverFeat by 83%, a significant reduction in memory requirements of DNNs. Similar experiments on VGG-16, one of the deepest and memory hungry DNNs to date, demonstrate the memory-efficiency of our proposal. vDNN enables VGG-16 with batch size 256 (requiring 28 GB of memory) to be trained on a single NVIDIA K40 GPU card containing 12 GB of memory, with 22% performance loss compared to a hypothetical GPU with enough memory to hold the entire DNN.",
"title": ""
},
{
"docid": "a0ca7d86ae79c263644c8cd5ae4c0aed",
"text": "Research in texture recognition often concentrates on the problem of material recognition in uncluttered conditions, an assumption rarely met by applications. In this work we conduct a first study of material and describable texture attributes recognition in clutter, using a new dataset derived from the OpenSurface texture repository. Motivated by the challenge posed by this problem, we propose a new texture descriptor, FV-CNN, obtained by Fisher Vector pooling of a Convolutional Neural Network (CNN) filter bank. FV-CNN substantially improves the state-of-the-art in texture, material and scene recognition. Our approach achieves 79.8% accuracy on Flickr material dataset and 81% accuracy on MIT indoor scenes, providing absolute gains of more than 10% over existing approaches. FV-CNN easily transfers across domains without requiring feature adaptation as for methods that build on the fully-connected layers of CNNs. Furthermore, FV-CNN can seamlessly incorporate multi-scale information and describe regions of arbitrary shapes and sizes. Our approach is particularly suited at localizing “stuff” categories and obtains state-of-the-art results on MSRC segmentation dataset, as well as promising results on recognizing materials and surface attributes in clutter on the OpenSurfaces dataset.",
"title": ""
},
{
"docid": "cb1bfa58eb89539663be0f2b4ea8e64d",
"text": "Hierarchical clustering is a recursive partitioning of a dataset into clusters at an increasingly finer granularity. Motivated by the fact that most work on hierarchical clustering was based on providing algorithms, rather than optimizing a specific objective, Dasgupta framed similarity-based hierarchical clustering as a combinatorial optimization problem, where a ‘good’ hierarchical clustering is one that minimizes a particular cost function [21]. He showed that this cost function has certain desirable properties: in order to achieve optimal cost, disconnected components (namely, dissimilar elements) must be separated at higher levels of the hierarchy and when the similarity between data elements is identical, all clusterings achieve the same cost. We take an axiomatic approach to defining ‘good’ objective functions for both similarity and dissimilarity-based hierarchical clustering. We characterize a set of admissible objective functions having the property that when the input admits a ‘natural’ ground-truth hierarchical clustering, the ground-truth clustering has an optimal value. We show that this set includes the objective function introduced by Dasgupta. Equipped with a suitable objective function, we analyze the performance of practical algorithms, as well as develop better and faster algorithms for hierarchical clustering. We also initiate a beyond worst-case analysis of the complexity of the problem, and design algorithms for this scenario.",
"title": ""
},
{
"docid": "662ec285031306816814378e6e192782",
"text": "One task of heterogeneous face recognition is to match a near infrared (NIR) face image to a visible light (VIS) image. In practice, there are often a few pairwise NIR-VIS face images but it is easy to collect lots of VIS face images. Therefore, how to use these unpaired VIS images to improve the NIR-VIS recognition accuracy is an ongoing issue. This paper presents a deep TransfeR NIR-VIS heterogeneous facE recognition neTwork (TRIVET) for NIR-VIS face recognition. First, to utilize large numbers of unpaired VIS face images, we employ the deep convolutional neural network (CNN) with ordinal measures to learn discriminative models. The ordinal activation function (Max-Feature-Map) is used to select discriminative features and make the models robust and lighten. Second, we transfer these models to NIR-VIS domain by fine-tuning with two types of NIR-VIS triplet loss. The triplet loss not only reduces intra-class NIR-VIS variations but also augments the number of positive training sample pairs. It makes fine-tuning deep models on a small dataset possible. The proposed method achieves state-of-the-art recognition performance on the most challenging CASIA NIR-VIS 2.0 Face Database. It achieves a new record on rank-1 accuracy of 95.74% and verification rate of 91.03% at FAR=0.001. It cuts the error rate in comparison with the best accuracy [27] by 69%.",
"title": ""
},
{
"docid": "4bc1a78a3c9749460da218fd9d314e56",
"text": "Fast and accurate side-chain conformation prediction is important for homology modeling, ab initio protein structure prediction, and protein design applications. Many methods have been presented, although only a few computer programs are publicly available. The SCWRL program is one such method and is widely used because of its speed, accuracy, and ease of use. A new algorithm for SCWRL is presented that uses results from graph theory to solve the combinatorial problem encountered in the side-chain prediction problem. In this method, side chains are represented as vertices in an undirected graph. Any two residues that have rotamers with nonzero interaction energies are considered to have an edge in the graph. The resulting graph can be partitioned into connected subgraphs with no edges between them. These subgraphs can in turn be broken into biconnected components, which are graphs that cannot be disconnected by removal of a single vertex. The combinatorial problem is reduced to finding the minimum energy of these small biconnected components and combining the results to identify the global minimum energy conformation. This algorithm is able to complete predictions on a set of 180 proteins with 34342 side chains in <7 min of computer time. The total chi(1) and chi(1 + 2) dihedral angle accuracies are 82.6% and 73.7% using a simple energy function based on the backbone-dependent rotamer library and a linear repulsive steric energy. The new algorithm will allow for use of SCWRL in more demanding applications such as sequence design and ab initio structure prediction, as well addition of a more complex energy function and conformational flexibility, leading to increased accuracy.",
"title": ""
},
{
"docid": "98788b45932c8564d29615f49407d179",
"text": "BACKGROUND\nAbnormal forms of grief, currently referred to as complicated grief or prolonged grief disorder, have been discussed extensively in recent years. While the diagnostic criteria are still debated, there is no doubt that prolonged grief is disabling and may require treatment. To date, few interventions have demonstrated efficacy.\n\n\nMETHODS\nWe investigated whether outpatients suffering from prolonged grief disorder (PGD) benefit from a newly developed integrative cognitive behavioural therapy for prolonged grief (PG-CBT). A total of 51 patients were randomized into two groups, stratified by the type of death and their relationship to the deceased; 24 patients composed the treatment group and 27 patients composed the wait list control group (WG). Treatment consisted of 20-25 sessions. Main outcome was change in grief severity; secondary outcomes were reductions in general psychological distress and in comorbidity.\n\n\nRESULTS\nPatients on average had 2.5 comorbid diagnoses in addition to PGD. Between group effect sizes were large for the improvement of grief symptoms in treatment completers (Cohen׳s d=1.61) and in the intent-to-treat analysis (d=1.32). Comorbid depressive symptoms also improved in PG-CBT compared to WG. The completion rate was 79% in PG-CBT and 89% in WG.\n\n\nLIMITATIONS\nThe major limitations of this study were a small sample size and that PG-CBT took longer than the waiting time.\n\n\nCONCLUSIONS\nPG-CBT was found to be effective with an acceptable dropout rate. Given the number of bereaved people who suffer from PGD, the results are of high clinical relevance.",
"title": ""
},
{
"docid": "58d8e3bd39fa470d1dfa321aeba53106",
"text": "There are over 1.2 million Australians registered as having vision impairment. In most cases, vision impairment severely affects the mobility and orientation of the person, resulting in loss of independence and feelings of isolation. GPS technology and its applications have now become omnipresent and are used daily to improve and facilitate the lives of many. Although a number of products specifically designed for the Blind and Vision Impaired (BVI) and relying on GPS technology have been launched, this domain is still a niche and ongoing R&D is needed to bring all the benefits of GPS in terms of information and mobility to the BVI. The limitations of GPS indoors and in urban canyons have led to the development of new systems and signals that bridge the gap and provide positioning in those environments. Although still in their infancy, there is no doubt indoor positioning technologies will one day become as pervasive as GPS. It is therefore important to design those technologies with the BVI in mind, to make them accessible from scratch. This paper will present an indoor positioning system that has been designed in that way, examining the requirements of the BVI in terms of accuracy, reliability and interface design. The system runs locally on a mid-range smartphone and relies at its core on a Kalman filter that fuses the information of all the sensors available on the phone (Wi-Fi chipset, accelerometers and magnetic field sensor). Each part of the system is tested separately as well as the final solution quality.",
"title": ""
},
{
"docid": "9eaf39d4b612c3bd272498eb8a91effc",
"text": "The relationship between the different approaches to quality in ISO standards is reviewed, contrasting the manufacturing approach to quality in ISO 9000 (quality is conformance to requirements) with the product orientation of ISO 8402 (quality is the presence of specified features) and the goal orientation of quality in use in ISO 14598-1 (quality is meeting user needs). It is shown how ISO 9241-11 enables quality in use to be measured, and ISO 13407 defines the activities necessary in the development lifecycle for achieving quality in use. APPROACHES TO QUALITY Although the term quality seems self-explanatory in everyday usage, in practice there are many different views of what it means and how it should be achieved as part of a software production process. ISO DEFINITIONS OF QUALITY ISO 9000 is concerned with quality assurance to provide confidence that a product will satisfy given requirements. Interpreted literally, this puts quality in the hands of the person producing the requirements specification a product may be deemed to have quality even if the requirements specification is inappropriate. This is one of the interpretations of quality reviewed by Garvin (1984). He describes it as Manufacturing quality: a product which conforms to specified requirements. A different emphasis is given in ISO 8402 which defines quality as the totality of characteristics of an entity that bear on its ability to satisfy stated and implied needs. This is an example of what Garvin calls Product quality: an inherent characteristic of the product determined by the presence or absence of measurable product attributes. Many organisations would like to be able to identify those attributes which can be designed into a product or evaluated to ensure quality. ISO 9126 (1992) takes this approach, and categorises the attributes of software quality as: functionality, efficiency, usability, reliability, maintainability and portability. To the extent that user needs are well-defined and common to the intended users this implies that quality is an inherent attribute of the product. However, if different groups of users have different needs, then they may require different characteristics for a product to have quality for their purposes. Assessment of quality thus becomes dependent on the perception of the user. USER PERCEIVED QUALITY AND QUALITY IN USE Garvin defines User perceived quality as the combination of product attributes which provide the greatest satisfaction to a specified user. Most approaches to quality do not deal explicitly with userperceived quality. User-perceived quality is regarded as an intrinsically inaccurate judgement of product quality. For instance Garvin, 1984, observes that \"Perceptions of quality can be as subjective as assessments of aesthetics\". However, there is a more fundamental reason for being concerned with user-perceived quality. Products can only have quality in relation to their intended purpose. For instance, the quality attributes required of an office carpet may be very different from those required of a bedroom carpet. For conventional products this is assumed to be selfevident. For general-purpose products it creates a problem. A text editor could be used by programmers for producing code, or by secretaries for producing letters. Some of the quality attributes required will be the same, but others will be different. 
Even for a word processor, the functionality, usability and efficiency attributes required by a trained user may be very different from those required by an occasional user. Reconciling work on usability with traditional approaches to software quality has led to another broader and potentially important view of quality which has been outside the scope of most existing quality systems. This embraces user-perceived quality by relating quality to the needs of the user of an interactive product. ISO 14598-1 defines External quality as the extent to which a product satisfies stated and implied needs when used under specified conditions. This moves the focus of quality from the product in isolation to the satisfaction of the needs of particular users in particular situations. The purpose of a product is to help users achieve particular goals, which leads to the definition of Quality in use in ISO DIS 14598-1 as the effectiveness, efficiency and satisfaction with which specified users can achieve specified goals in specified environments. A product meets the requirements of the user if it is effective (accurate and complete), efficient in use of time and resources, and satisfying, regardless of the specific attributes it possesses. Specifying requirements in terms of performance has many benefits. This is recognised in the rules for drafting ISO standards (ISO, 1992) which suggest that to provide design flexibility, standards should specify the performance required of a product rather than the technical attributes needed to achieve the performance. Quality in use is a means of applying this principle to the performance which a product enables a human to achieve. An example is the ISO standard for VDT display screens (ISO 9241-3). The purpose of the standard is to ensure that the screen has the technical attributes required to achieve quality in use. The current version of the standard is specified in terms of the technical attributes of a traditional CRT. It is intended to extend the standard to permit alternative new technology screens to conform if it can be demonstrated that users are as effective, efficient and satisfied with the new screen as with an existing screen which meets the technical specifications. SOFTWARE QUALITY IN USE: ISO 14598-1 The purpose of designing an interactive system is to meet the needs of users: to provide quality in use (see Figure 1, from ISO/IEC 14598-1). The internal software attributes will determine the quality of a software product in use in a particular context. Software quality attributes are the cause, quality in use the effect. Quality in use is (or at least should be) the objective, software product quality is the means of achieving it. [Figure 1 labels: needs, quality in use, external quality requirements, external quality (system behaviour), internal quality requirements, internal quality (software attributes); lifecycle stages: specification, design and development, operation.]",
"title": ""
},
{
"docid": "b76f10452e4a4b0d7408e6350b263022",
"text": "In this paper, a Y-Δ hybrid connection for a high-voltage induction motor is described. Low winding harmonic content is achieved by careful consideration of the interaction between the Y- and Δ-connected three-phase winding sets so that the magnetomotive force (MMF) in the air gap is close to sinusoid. Essentially, the two winding sets operate in a six-phase mode. This paper goes on to verify that the fundamental distribution coefficient for the stator MMF is enhanced compared to a standard three-phase winding set. The design method for converting a conventional double-layer lap winding in a high-voltage induction motor into a Y-Δ hybrid lap winding is described using standard winding theory as often applied to small- and medium-sized motors. The main parameters addressed when designing the winding are the conductor wire gauge, coil turns, and parallel winding branches in the Y and Δ connections. A winding design scheme for a 1250-kW 6-kV induction motor is put forward and experimentally validated; the results show that the efficiency can be raised effectively without increasing the cost.",
"title": ""
},
{
"docid": "8387c06436e850b4fb00c6b5e0dcf19f",
"text": "Since the beginning of the epidemic, human immunodeficiency virus (HIV) has infected around 70 million people worldwide, most of whom reside is sub-Saharan Africa. There have been very promising developments in the treatment of HIV with anti-retroviral drug cocktails. However, drug resistance to anti-HIV drugs is emerging, and many people infected with HIV have adverse reactions or do not have ready access to currently available HIV chemotherapies. Thus, there is a need to discover new anti-HIV agents to supplement our current arsenal of anti-HIV drugs and to provide therapeutic options for populations with limited resources or access to currently efficacious chemotherapies. Plant-derived natural products continue to serve as a reservoir for the discovery of new medicines, including anti-HIV agents. This review presents a survey of plants that have shown anti-HIV activity, both in vitro and in vivo.",
"title": ""
},
{
"docid": "f8d01364ff29ad18480dfe5d164bbebf",
"text": "With companies such as Netflix and YouTube accounting for more than 50% of the peak download traffic on North American fixed networks in 2015, video streaming represents a significant source of Internet traffic. Multimedia delivery over the Internet has evolved rapidly over the past few years. The last decade has seen video streaming transitioning from User Datagram Protocol to Transmission Control Protocol-based technologies. Dynamic adaptive streaming over HTTP (DASH) has recently emerged as a standard for Internet video streaming. A range of rate adaptation mechanisms are proposed for DASH systems in order to deliver video quality that matches the throughput of dynamic network conditions for a richer user experience. This survey paper looks at emerging research into the application of client-side, server-side, and in-network rate adaptation techniques to support DASH-based content delivery. We provide context and motivation for the application of these techniques and review significant works in the literature from the past decade. These works are categorized according to the feedback signals used and the end-node that performs or assists with the adaptation. We also provide a review of several notable video traffic measurement and characterization studies and outline open research questions in the field.",
"title": ""
},
{
"docid": "85bc241c03d417099aa155766e6a1421",
"text": "Passwords continue to prevail on the web as the primary method for user authentication despite their well-known security and usability drawbacks. Password managers offer some improvement without requiring server-side changes. In this paper, we evaluate the security of dual-possession authentication, an authentication approach offering encrypted storage of passwords and theft-resistance without the use of a master password. We further introduce Tapas, a concrete implementation of dual-possession authentication leveraging a desktop computer and a smartphone. Tapas requires no server-side changes to websites, no master password, and protects all the stored passwords in the event either the primary or secondary device (e.g., computer or phone) is stolen. To evaluate the viability of Tapas as an alternative to traditional password managers, we perform a 30 participant user study comparing Tapas to two configurations of Firefox's built-in password manager. We found users significantly preferred Tapas. We then improve Tapas by incorporating feedback from this study, and reevaluate it with an additional 10 participants.",
"title": ""
},
{
"docid": "c7f0856c282d1039e44ba6ef50948d32",
"text": "This paper presents the analysis and operation of a three-phase pulsewidth modulation rectifier system formed by the star-connection of three single-phase boost rectifier modules (Y-rectifier) without a mains neutral point connection. The current forming operation of the Y-rectifier is analyzed and it is shown that the phase current has the same high quality and low ripple as the Vienna rectifier. The isolated star point of Y-rectifier results in a mutual coupling of the individual phase module outputs and has to be considered for control of the module dc link voltages. An analytical expression for the coupling coefficients of the Y-rectifier phase modules is derived. Based on this expression, a control concept with reduced calculation effort is designed and it provides symmetric loading of the phase modules and solves the balancing problem of the dc link voltages. The analysis also provides insight that enables the derivation of a control concept for two phase operation, such as in the case of a mains phase failure. The theoretical and simulated results are proved by experimental analysis on a fully digitally controlled, 5.4-kW prototype.",
"title": ""
},
{
"docid": "1053653b3584180dd6f97866c13ce40a",
"text": "• • The order of authorship on this paper is random and contributions were equal. We would like to thank Ron Burt, Jim March and Mike Tushman for many helpful suggestions. Olav Sorenson provided particularly extensive comments on this paper. We would like to acknowledge the financial support of the University of Chicago, Graduate School of Business and a grant from the Kauffman Center for Entrepreneurial Leadership. Clarifying the relationship between organizational aging and innovation processes is an important step in understanding the dynamics of high-technology industries, as well as for resolving debates in organizational theory about the effects of aging on organizational functioning. We argue that aging has two seemingly contradictory consequences for organizational innovation. First, we believe that aging is associated with increases in firms' rates of innovation. Simultaneously, however, we argue that the difficulties of keeping pace with incessant external developments causes firms' innovative outputs to become obsolete relative to the most current environmental demands. These seemingly contradictory outcomes are intimately related and reflect inherent trade-offs in organizational learning and innovation processes. Multiple longitudinal analyses of the relationship between firm age and patenting behavior in the semiconductor and biotechnology industries lend support to these arguments. Introduction In an increasingly knowledge-based economy, pinpointing the factors that shape the ability of organizations to produce influential ideas and innovations is a central issue for organizational studies. Among all organizational outputs, innovation is fundamental not only because of its direct impact on the viability of firms, but also because of its profound effects on the paths of social and economic change. In this paper, we focus on an ubiquitous organizational process-aging-and examine its multifaceted influence on organizational innovation. In so doing, we address an important unresolved issue in organizational theory, namely the nature of the relationship between aging and organizational behavior (Hannan 1998). Evidence clarifying the relationship between organizational aging and innovation promises to improve our understanding of the organizational dynamics of high-technology markets, and in particular the dynamics of technological leadership. For instance, consider the possibility that aging has uniformly positive consequences for innovative activity: on the foundation of accumulated experience, older firms innovate more frequently, and their innovations have greater significance than those of younger enterprises. In this scenario, technological change paradoxically may be associated with organizational stability, as incumbent organizations come to dominate the technological frontier and their preeminence only increases with their tenure. 1 Now consider the …",
"title": ""
},
{
"docid": "e786d22cd1c30014d1a1dcdc655a56fb",
"text": "Chemical fingerprints are used to represent chemical molecules by recording the presence or absence, or by counting the number of occurrences, of particular features or substructures, such as labeled paths in the 2D graph of bonds, of the corresponding molecule. These fingerprint vectors are used to search large databases of small molecules, currently containing millions of entries, using various similarity measures, such as the Tanimoto or Tversky's measures and their variants. Here, we derive simple bounds on these similarity measures and show how these bounds can be used to considerably reduce the subset of molecules that need to be searched. We consider both the case of single-molecule and multiple-molecule queries, as well as queries based on fixed similarity thresholds or aimed at retrieving the top K hits. We study the speedup as a function of query size and distribution, fingerprint length, similarity threshold, and database size |D| and derive analytical formulas that are in excellent agreement with empirical values. The theoretical considerations and experiments show that this approach can provide linear speedups of one or more orders of magnitude in the case of searches with a fixed threshold, and achieve sublinear speedups in the range of O(|D|0.6) for the top K hits in current large databases. This pruning approach yields subsecond search times across the 5 million compounds in the ChemDB database, without any loss of accuracy.",
"title": ""
},
{
"docid": "dd271275654da4bae73ee41d76fe165c",
"text": "BACKGROUND\nThe recovery period for patients who have been in an intensive care unitis often prolonged and suboptimal. Anxiety, depression and post-traumatic stress disorder are common psychological problems. Intensive care staff offer various types of intensive aftercare. Intensive care follow-up aftercare services are not standard clinical practice in Norway.\n\n\nOBJECTIVE\nThe overall aim of this study is to investigate how adult patients experience theirintensive care stay their recovery period, and the usefulness of an information pamphlet.\n\n\nMETHOD\nA qualitative, exploratory research with semi-structured interviews of 29 survivors after discharge from intensive care and three months after discharge from the hospital.\n\n\nRESULTS\nTwo main themes emerged: \"Being on an unreal, strange journey\" and \"Balancing between who I was and who I am\" Patients' recollection of their intensive care stay differed greatly. Continuity of care and the nurse's ability to see and value individual differences was highlighted. The information pamphlet helped intensive care survivors understand that what they went through was normal.\n\n\nCONCLUSIONS\nContinuity of care and an individual approach is crucial to meet patients' uniqueness and different coping mechanisms. Intensive care survivors and their families must be included when information material and rehabilitation programs are designed and evaluated.",
"title": ""
},
{
"docid": "ec0733962301d6024da773ad9d0f636d",
"text": "This paper focuses on the design, fabrication and characterization of unimorph actuators for a microaerial flapping mechanism. PZT-5H and PZN-PT are investigated as piezoelectric layers in the unimorph actuators. Design issues for microaerial flapping actuators are discussed, and criteria for the optimal dimensions of actuators are determined. For low power consumption actuation, a square wave based electronic driving circuit is proposed. Fabricated piezoelectric unimorphs are characterized by an optical measurement system in quasi-static and dynamic mode. Experimental performance of PZT5H and PZN-PT based unimorphs is compared with desired design specifications. A 1 d.o.f. flapping mechanism with a PZT-5H unimorph is constructed, and 180◦ stroke motion at 95 Hz is achieved. Thus, it is shown that unimorphs could be promising flapping mechanism actuators.",
"title": ""
},
{
"docid": "7239b0f0a1b894c6383c538450c90e8a",
"text": "To address the problem of underexposure, underrepresentation, and underproduction of diverse professionals in the field of computing, we target middle school education using an idea that combines computational thinking with dance and movement choreography. This lightning talk delves into a virtual reality education and entertainment application named Virtual Environment Interactions (VEnvI). Our in vivo study examines how VEnvI can be used to teach fundamental computer science concepts such as sequences, loops, variables, conditionals, functions, and parallel programming. We aim to reach younger students through a fun and intuitive interface for choreographing dance movements with a virtual character. Our study contrasts the highly immersive and embodied virtual reality metaphor of using VEnvI with a non-immersive desktop metaphor. Additionally, we examine the effects of user attachment by comparing the learning results gained with customizable virtual characters in contrast with character presets. By analyzing qualitative and quantitative user responses measuring cognition, presence, usability, and satisfaction, we hope to find how virtual reality can enhance interest in the field of computer science among middle school students.",
"title": ""
},
{
"docid": "e4761bfc7c9b41881441928883660156",
"text": "This paper presents a digital low-dropout regulator (D-LDO) with a proposed transient-response boost technique, which enables the reduction of transient response time, as well as overshoot/undershoot, when the load current is abruptly drawn. The proposed D-LDO detects the deviation of the output voltage by overshoot/undershoot, and increases its loop gain, for the time that the deviation is beyond a limit. Once the output voltage is settled again, the loop gain is returned. With the D-LDO fabricated on an 110-nm CMOS technology, we measured its settling time and peak of undershoot, which were reduced by 60% and 72%, respectively, compared with and without the transient-response boost mode. Using the digital logic gates, the chip occupies a small area of 0.04 mm2, and it achieves a maximum current efficiency of 99.98%, by consuming the quiescent current of 15 μA at 0.7-V input voltage.",
"title": ""
}
] | scidocsrr |
fbbf8a4fae9225bb651a3199beed5417 | Computation offloading and resource allocation for low-power IoT edge devices | [
{
"docid": "16fbebf500be1bf69027d3a35d85362b",
"text": "Mobile Edge Computing is an emerging technology that provides cloud and IT services within the close proximity of mobile subscribers. Traditional telecom network operators perform traffic control flow (forwarding and filtering of packets), but in Mobile Edge Computing, cloud servers are also deployed in each base station. Therefore, network operator has a great responsibility in serving mobile subscribers. Mobile Edge Computing platform reduces network latency by enabling computation and storage capacity at the edge network. It also enables application developers and content providers to serve context-aware services (such as collaborative computing) by using real time radio access network information. Mobile and Internet of Things devices perform computation offloading for compute intensive applications, such as image processing, mobile gaming, to leverage the Mobile Edge Computing services. In this paper, some of the promising real time Mobile Edge Computing application scenarios are discussed. Later on, a state-of-the-art research efforts on Mobile Edge Computing domain is presented. The paper also presents taxonomy of Mobile Edge Computing, describing key attributes. Finally, open research challenges in successful deployment of Mobile Edge Computing are identified and discussed.",
"title": ""
},
{
"docid": "2c4babb483ddd52c9f1333cbe71a3c78",
"text": "The proliferation of Internet of Things (IoT) and the success of rich cloud services have pushed the horizon of a new computing paradigm, edge computing, which calls for processing the data at the edge of the network. Edge computing has the potential to address the concerns of response time requirement, battery life constraint, bandwidth cost saving, as well as data safety and privacy. In this paper, we introduce the definition of edge computing, followed by several case studies, ranging from cloud offloading to smart home and city, as well as collaborative edge to materialize the concept of edge computing. Finally, we present several challenges and opportunities in the field of edge computing, and hope this paper will gain attention from the community and inspire more research in this direction.",
"title": ""
},
{
"docid": "503ddcf57b4e7c1ddc4f4646fb6ca3db",
"text": "Merging the virtual World Wide Web with nearby physical devices that are part of the Internet of Things gives anyone with a mobile device and the appropriate authorization the power to monitor or control anything.",
"title": ""
},
{
"docid": "956799f28356850fda78a223a55169bf",
"text": "Despite increasing usage of mobile computing, exploiting its full potential is difficult due to its inherent problems such as resource scarcity, frequent disconnections, and mobility. Mobile cloud computing can address these problems by executing mobile applications on resource providers external to the mobile device. In this paper, we provide an extensive survey of mobile cloud computing research, while highlighting the specific concerns in mobile cloud computing. We present a taxonomy based on the key issues in this area, and discuss the different approaches taken to tackle these issues. We conclude the paper with a critical analysis of challenges that have not yet been fully met, and highlight directions for",
"title": ""
},
{
"docid": "bd820eea00766190675cd3e8b89477f2",
"text": "Mobile Edge Computing (MEC), a new concept that emerged about a year ago, integrating the IT and the Telecom worlds will have a great impact on the openness of the Telecom market. Furthermore, the virtualization revolution that has enabled the Cloud computing success will benefit the Telecom domain, which in turn will be able to support the IaaS (Infrastructure as a Service). The main objective of MEC solution is the export of some Cloud capabilities to the user's proximity decreasing the latency, augmenting the available bandwidth and decreasing the load on the core network. On the other hand, the Internet of Things (IoT), the Internet of the future, has benefited from the proliferation in the mobile phones' usage. Many mobile applications have been developed to connect a world of things (wearables, home automation systems, sensors, RFID tags etc.) to the Internet. Even if it is not a complete solution for a scalable IoT architecture but the time sensitive IoT applications (e-healthcare, real time monitoring, etc.) will profit from the MEC architecture. Furthermore, IoT can extend this paradigm to other areas (e.g. Vehicular Ad-hoc NETworks) with the use of Software Defined Network (SDN) orchestration to cope with the challenges hindering the IoT real deployment, as we will illustrate in this paper.",
"title": ""
}
] | [
{
"docid": "e602ab2a2d93a8912869ae8af0925299",
"text": "Software-based MMU emulation lies at the heart of outof-VM live memory introspection, an important technique in the cloud setting that applications such as live forensics and intrusion detection depend on. Due to the emulation, the software-based approach is much slower compared to native memory access by the guest VM. The slowness not only results in undetected transient malicious behavior, but also inconsistent memory view with the guest; both undermine the effectiveness of introspection. We propose the immersive execution environment (ImEE) with which the guest memory is accessed at native speed without any emulation. Meanwhile, the address mappings used within the ImEE are ensured to be consistent with the guest throughout the introspection session. We have implemented a prototype of the ImEE on Linux KVM. The experiment results show that ImEE-based introspection enjoys a remarkable speed up, performing several hundred times faster than the legacy method. Hence, this design is especially useful for realtime monitoring, incident response and high-intensity introspection.",
"title": ""
},
{
"docid": "34883c8cef40a0e587295b6ece1b796b",
"text": "Instance weighting has been widely applied to phrase-based machine translation domain adaptation. However, it is challenging to be applied to Neural Machine Translation (NMT) directly, because NMT is not a linear model. In this paper, two instance weighting technologies, i.e., sentence weighting and domain weighting with a dynamic weight learning strategy, are proposed for NMT domain adaptation. Empirical results on the IWSLT EnglishGerman/French tasks show that the proposed methods can substantially improve NMT performance by up to 2.7-6.7 BLEU points, outperforming the existing baselines by up to 1.6-3.6 BLEU points.",
"title": ""
},
{
"docid": "fec8129b24f30d4dbb93df4dce7885e8",
"text": "We propose a method to improve the translation of pronouns by resolving their coreference to prior mentions. We report results using two different co-reference resolution methods and point to remaining challenges.",
"title": ""
},
{
"docid": "13b9fd37b1cf4f15def39175157e12c5",
"text": "Although motorcycle safety helmets are known for preventing head injuries, in many countries, the use of motorcycle helmets is low due to the lack of police power to enforcing helmet laws. This paper presents a system which automatically detect motorcycle riders and determine that they are wearing safety helmets or not. The system extracts moving objects and classifies them as a motorcycle or other moving objects based on features extracted from their region properties using K-Nearest Neighbor (KNN) classifier. The heads of the riders on the recognized motorcycle are then counted and segmented based on projection profiling. The system classifies the head as wearing a helmet or not using KNN based on features derived from 4 sections of segmented head region. Experiment results show an average correct detection rate for near lane, far lane, and both lanes as 84%, 68%, and 74%, respectively.",
"title": ""
},
{
"docid": "f5fd1d6f15c9ef06c343378a6f7038a0",
"text": "Wayfinding is part of everyday life. This study concentrates on the development of a conceptual model of human navigation in the U.S. Interstate Highway Network. It proposes three different levels of conceptual understanding that constitute the cognitive map: the Planning Level, the Instructional Level, and the Driver Level. This paper formally defines these three levels and examines the conceptual objects that comprise them. The problem treated here is a simpler version of the open problem of planning and navigating a multi-mode trip. We expect the methods and preliminary results found here for the Interstate system to apply to other systems such as river transportation networks and railroad networks.",
"title": ""
},
{
"docid": "b22136f00469589c984081742c4605d3",
"text": "Convolutional neural network (CNN), which comprises one or more convolutional and pooling layers followed by one or more fully-connected layers, has gained popularity due to its ability to learn fruitful representations from images or speeches, capturing local dependency and slight-distortion invariance. CNN has recently been applied to the problem of activity recognition, where 1D kernels are applied to capture local dependency over time in a series of observations measured at inertial sensors (3-axis accelerometers and gyroscopes). In this paper we present a multi-modal CNN where we use 2D kernels in both convolutional and pooling layers, to capture local dependency over time as well as spatial dependency over sensors. Experiments on benchmark datasets demonstrate the high performance of our multi-modal CNN, compared to several state of the art methods.",
"title": ""
},
{
"docid": "7b5b9990bfef9d2baf28030123359923",
"text": "a r t i c l e i n f o a b s t r a c t This review takes an evolutionary and chronological perspective on the development of strategic human resource management (SHRM) literature. We divide this body of work into seven themes that reflect the directions and trends researchers have taken over approximately thirty years of research. During this time the field took shape, developed rich conceptual foundations, and matured into a domain that has substantial influence on research activities in HR and related management disciplines. We trace how the field has evolved to its current state, articulate many of the major findings and contributions, and discuss how we believe it will evolve in the future. This approach contributes to the field of SHRM by synthesizing work in this domain and by highlighting areas of research focus that have received perhaps enough attention, as well as areas of research focus that, while promising, have remained largely unexamined. 1. Introduction Boxall, Purcell, and Wright (2007) distinguish among three major subfields of human resource management (HRM): micro HRM (MHRM), strategic HRM (SHRM), and international HRM (IHRM). Micro HRM covers the subfunctions of HR policy and practice and consists of two main categories: one with managing individuals and small groups (e.g., recruitment, selection, induction, training and development, performance management, and remuneration) and the other with managing work organization and employee voice systems (including union-management relations). Strategic HRM covers the overall HR strategies adopted by business units and companies and tries to measure their impacts on performance. Within this domain both design and execution issues are examined. International HRM covers HRM in companies operating across national boundaries. Since strategic HRM often covers the international context, we will include those international HRM articles that have a strategic focus. While most of the academic literature on SHRM has been published in the last 30 years, the intellectual roots of the field can be traced back to the 1920s in the U.S. (Kaufman, 2001). The concept of labor as a human resource and the strategic view of HRM policy and practice were described and discussed by labor economists and industrial relations scholars of that period, such as John Commons. Progressive companies in the 1920s intentionally formulated and adopted innovative HR practices that represented a strategic approach to the management of labor. A small, but visibly elite group of employers in this time period …",
"title": ""
},
{
"docid": "4f537c9e63bbd967e52f22124afa4480",
"text": "Computer role playing games engage players through interleaved story and open-ended game play. We present an approach to procedurally generating, rendering, and making playable novel games based on a priori unknown story structures. These stories may be authored by humans or by computational story generation systems. Our approach couples player, designer, and algorithm to generate a novel game using preferences for game play style, general design aesthetics, and a novel story structure. Our approach is implemented in Game Forge, a system that uses search-based optimization to find and render a novel game world configuration that supports a sequence of plot points plus play style preferences. Additionally, Game Forge supports execution of the game through reactive control of game world logic and non-player character behavior.",
"title": ""
},
{
"docid": "0846f7d40f5cbbd4c199dfb58c4a4e7d",
"text": "While active learning has drawn broad attention in recent years, there are relatively few studies on stopping criterion for active learning. We here propose a novel model stability based stopping criterion, which considers the potential of each unlabeled examples to change the model once added to the training set. The underlying motivation is that active learning should terminate when the model does not change much by adding remaining examples. Inspired by the widely used stochastic gradient update rule, we use the gradient of the loss at each candidate example to measure its capability to change the classifier. Under the model change rule, we stop active learning when the changing ability of all remaining unlabeled examples is less than a given threshold. We apply the stability-based stopping criterion to two popular classifiers: logistic regression and support vector machines (SVMs). It can be generalized to a wide spectrum of learning models. Substantial experimental results on various UCI benchmark data sets have demonstrated that the proposed approach outperforms state-of-art methods in most cases.",
"title": ""
},
{
"docid": "d6136f26c7b387693a5f017e6e2e679a",
"text": "Automated seizure detection using clinical electroencephalograms is a challenging machine learning problem because the multichannel signal often has an extremely low signal to noise ratio. Events of interest such as seizures are easily confused with signal artifacts (e.g, eye movements) or benign variants (e.g., slowing). Commercially available systems suffer from unacceptably high false alarm rates. Deep learning algorithms that employ high dimensional models have not previously been effective due to the lack of big data resources. In this paper, we use the TUH EEG Seizure Corpus to evaluate a variety of hybrid deep structures including Convolutional Neural Networks and Long Short-Term Memory Networks. We introduce a novel recurrent convolutional architecture that delivers 30% sensitivity at 7 false alarms per 24 hours. We have also evaluated our system on a held-out evaluation set based on the Duke University Seizure Corpus and demonstrate that performance trends are similar to the TUH EEG Seizure Corpus. This is a significant finding because the Duke corpus was collected with different instrumentation and at different hospitals. Our work shows that deep learning architectures that integrate spatial and temporal contexts are critical to achieving state of the art performance and will enable a new generation of clinically-acceptable technology.",
"title": ""
},
{
"docid": "d39ada44eb3c1c9b5dfa1abd0f1fbc22",
"text": "The ability to computationally predict whether a compound treats a disease would improve the economy and success rate of drug approval. This study describes Project Rephetio to systematically model drug efficacy based on 755 existing treatments. First, we constructed Hetionet (neo4j.het.io), an integrative network encoding knowledge from millions of biomedical studies. Hetionet v1.0 consists of 47,031 nodes of 11 types and 2,250,197 relationships of 24 types. Data were integrated from 29 public resources to connect compounds, diseases, genes, anatomies, pathways, biological processes, molecular functions, cellular components, pharmacologic classes, side effects, and symptoms. Next, we identified network patterns that distinguish treatments from non-treatments. Then, we predicted the probability of treatment for 209,168 compound-disease pairs (het.io/repurpose). Our predictions validated on two external sets of treatment and provided pharmacological insights on epilepsy, suggesting they will help prioritize drug repurposing candidates. This study was entirely open and received realtime feedback from 40 community members.",
"title": ""
},
{
"docid": "160ab7f4c7be89ae2d56a7094e19d1a3",
"text": "These days, microarray gene expression data are playing an essential role in cancer classifications. However, due to the availability of small number of effective samples compared to the large number of genes in microarray data, many computational methods have failed to identify a small subset of important genes. Therefore, it is a challenging task to identify small number of disease-specific significant genes related for precise diagnosis of cancer sub classes. In this paper, particle swarm optimization (PSO) method along with adaptive K-nearest neighborhood (KNN) based gene selection technique are proposed to distinguish a small subset of useful genes that are sufficient for the desired classification purpose. A proper value of K would help to form the appropriate numbers of neighborhood to be explored and hence to classify the dataset accurately. Thus, a heuristic for selecting the optimal values of K efficiently, guided by the classification accuracy is also proposed. This proposed technique of finding minimum possible meaningful set of genes is applied on three benchmark microarray datasets, namely the small round blue cell tumor (SRBCT) data, the acute lymphoblastic leukemia (ALL) and acute myeloid leukemia (AML) data and the mixed-lineage leukemia (MLL) data. Results demonstrate the usefulness of the proposed method in terms of classification accuracy on blind test samples, number of informative genes and computing time. Further, the usefulness and universal characteristics of the identified genes are reconfirmed by using different classifiers, such as support vector machine (SVM). 2014 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "dda8427a6630411fc11e6d95dbff08b9",
"text": "Text representations using neural word embeddings have proven effective in many NLP applications. Recent researches adapt the traditional word embedding models to learn vectors of multiword expressions (concepts/entities). However, these methods are limited to textual knowledge bases (e.g., Wikipedia). In this paper, we propose a novel and simple technique for integrating the knowledge about concepts from two large scale knowledge bases of different structure (Wikipedia and Probase) in order to learn concept representations. We adapt the efficient skip-gram model to seamlessly learn from the knowledge in Wikipedia text and Probase concept graph. We evaluate our concept embedding models on two tasks: (1) analogical reasoning, where we achieve a state-of-the-art performance of 91% on semantic analogies, (2) concept categorization, where we achieve a state-of-the-art performance on two benchmark datasets achieving categorization accuracy of 100% on one and 98% on the other. Additionally, we present a case study to evaluate our model on unsupervised argument type identification for neural semantic parsing. We demonstrate the competitive accuracy of our unsupervised method and its ability to better generalize to out of vocabulary entity mentions compared to the tedious and error prone methods which depend on gazetteers and regular expressions.",
"title": ""
},
{
"docid": "ec7f5b4596ae6e2c24856d16e4fdc193",
"text": "This prospective, randomized study evaluated continuous-flow cold therapy for postoperative pain in outpatient arthroscopic anterior cruciate ligament (ACL) reconstructions. In group 1, cold therapy was constant for 3 days then as needed in days 4 through 7. Group 2 had no cold therapy. Evaluations and diaries were kept at 1, 2, and 8 hours after surgery, and then daily. Pain was assessed using the VAS and Likert scales. There were 51 cold and 49 noncold patients included. Continuous passive movement (CPM) use averaged 54 hours for cold and 41 hours for noncold groups (P=.003). Prone hangs were done for 192 minutes in the cold group and 151 minutes in the noncold group. Motion at 1 week averaged 5/88 for the cold group and 5/79 the noncold group. The noncold group average visual analog scale (VAS) pain and Likert pain scores were always greater than the cold group. The noncold group average Vicodin use (Knoll, Mt. Olive, NJ) was always greater than the cold group use (P=.001). Continuous-flow cold therapy lowered VAS and Likert scores, reduced Vicodin use, increased prone hangs, CPM, and knee flexion. Continuous-flow cold therapy is safe and effective for outpatient ACL reconstruction reducing pain medication requirements.",
"title": ""
},
{
"docid": "496e57bd6a6d06123ae886e0d6753783",
"text": "With the enormous growth of digital content in internet, various types of online reviews such as product and movie reviews present a wealth of subjective information that can be very helpful for potential users. Sentiment analysis aims to use automated tools to detect subjective information from reviews. Up to now as there are few researches conducted on feature selection in sentiment analysis, there are very rare works for Persian sentiment analysis. This paper considers the problem of sentiment classification using different feature selection methods for online customer reviews in Persian language. Three of the challenges of Persian text are using of a wide variety of declensional suffixes, different word spacing and many informal or colloquial words. In this paper we study these challenges by proposing a model for sentiment classification of Persian review documents. The proposed model is based on stemming and feature selection and is employed Naive Bayes algorithm for classification. We evaluate the performance of the model on a collection of cellphone reviews, where the results show the effectiveness of the proposed approaches.",
"title": ""
},
{
"docid": "7bac448a5754c168c897125a4f080548",
"text": "BACKGROUND\nOne of the main methods for evaluation of fetal well-being is analysis of Doppler flow velocity waveform of fetal vessels. Evaluation of Doppler wave of the middle cerebral artery can predict most of the at-risk fetuses in high-risk pregnancies. In this study, we tried to determine the normal ranges and their trends during pregnancy of Doppler flow velocity indices (resistive index, pulsatility index, systolic-to-diastolic ratio, and peak systolic velocity) of middle cerebral artery in 20 - 40 weeks normal pregnancies in Iranians.\n\n\nMETHODS\nIn this cross-sectional study, 1037 women with normal pregnancy and gestational age of 20 to 40 weeks were investigated for fetal middle cerebral artery Doppler examination.\n\n\nRESULTS\nResistive index, pulsatility index, and systolic-to-diastolic ratio values of middle cerebral artery decreased in a parabolic pattern while the peak systolic velocity value increased linearly with progression of the gestational age. These changes were statistically significant (P<0.001 for all four variables) and were more characteristic during late weeks of pregnancy. The mean fetal heart rate was also significantly (P<0.001) reduced in correlation with the gestational age.\n\n\nCONCLUSION\nDoppler waveform indices of fetal middle cerebral artery are useful means for determining fetal well-being. Herewith, the normal ranges of Doppler waveform indices for an Iranian population are presented.",
"title": ""
},
{
"docid": "944d467bb6da4991127b76310fec585b",
"text": "One of the challenges in evaluating multi-object video detection, tracking and classification systems is having publically available data sets with which to compare different systems. However, the measures of performance for tracking and classification are different. Data sets that are suitable for evaluating tracking systems may not be appropriate for classification. Tracking video data sets typically only have ground truth track IDs, while classification video data sets only have ground truth class-label IDs. The former identifies the same object over multiple frames, while the latter identifies the type of object in individual frames. This paper describes an advancement of the ground truth meta-data for the DARPA Neovision2 Tower data set to allow both the evaluation of tracking and classification. The ground truth data sets presented in this paper contain unique object IDs across 5 different classes of object (Car, Bus, Truck, Person, Cyclist) for 24 videos of 871 image frames each. In addition to the object IDs and class labels, the ground truth data also contains the original bounding box coordinates together with new bounding boxes in instances where un-annotated objects were present. The unique IDs are maintained during occlusions between multiple objects or when objects re-enter the field of view. This will provide: a solid foundation for evaluating the performance of multi-object tracking of different types of objects, a straightforward comparison of tracking system performance using the standard Multi Object Tracking (MOT) framework, and classification performance using the Neovision2 metrics. These data have been hosted publically.",
"title": ""
},
{
"docid": "3f220d8863302719d3cf69b7d99f8c4e",
"text": "The numerical representation precision required by the computations performed by Deep Neural Networks (DNNs) varies across networks and between layers of a same network. This observation motivates a precision-based approach to acceleration which takes into account both the computational structure and the required numerical precision representation. This work presents <italic>Stripes</italic> (<italic>STR</italic>), a hardware accelerator that uses bit-serial computations to improve energy efficiency and performance. Experimental measurements over a set of state-of-the-art DNNs for image classification show that <italic>STR</italic> improves performance over a state-of-the-art accelerator from 1.35<inline-formula><tex-math notation=\"LaTeX\">$\\times$</tex-math><alternatives> <inline-graphic xlink:href=\"judd-ieq1-2597140.gif\"/></alternatives></inline-formula> to 5.33<inline-formula> <tex-math notation=\"LaTeX\">$\\times$</tex-math><alternatives><inline-graphic xlink:href=\"judd-ieq2-2597140.gif\"/> </alternatives></inline-formula> and by 2.24<inline-formula><tex-math notation=\"LaTeX\">$\\times$</tex-math> <alternatives><inline-graphic xlink:href=\"judd-ieq3-2597140.gif\"/></alternatives></inline-formula> on average. <italic>STR</italic>’s area and power overhead are estimated at 5 percent and 12 percent respectively. <italic> STR</italic> is 2.00<inline-formula><tex-math notation=\"LaTeX\">$\\times$</tex-math><alternatives> <inline-graphic xlink:href=\"judd-ieq4-2597140.gif\"/></alternatives></inline-formula> more energy efficient than the baseline.",
"title": ""
},
{
"docid": "c613a7c8bca5b0c198d2a1885ecb0efb",
"text": "Botnets have traditionally been seen as a threat to personal computers; however, the recent shift to mobile platforms resulted in a wave of new botnets. Due to its popularity, Android mobile Operating System became the most targeted platform. In spite of rising numbers, there is a significant gap in understanding the nature of mobile botnets and their communication characteristics. In this paper, we address this gap and provide a deep analysis of Command and Control (C&C) and built-in URLs of Android botnets detected since the first appearance of the Android platform. By combining both static and dynamic analyses with visualization, we uncover the relationships between the majority of the analyzed botnet families and offer an insight into each malicious infrastructure. As a part of this study we compile and offer to the research community a dataset containing 1929 samples representing 14 Android botnet families.",
"title": ""
},
{
"docid": "a091e8885bd30e58f6de7d14e8170199",
"text": "This paper represents the design and implementation of an indoor based navigation system for visually impaired people using a path finding algorithm and a wearable cap. This development of the navigation system consists of two modules: a Wearable part and a schematic of the area where the navigation system works by guiding the user. The wearable segment consists of a cap designed with IR receivers, an Arduino Nano processor, a headphone and an ultrasonic sensor. The schematic segment plans for the movement directions inside a room by dividing the room area into cells with a predefined matrix containing location information. For navigating the user, sixteen IR transmitters which continuously monitor the user position are placed at equal interval in the XY (8 in X-plane and 8 in Y-plane) directions of the indoor environment. A Braille keypad is used by the user where he gave the cell number for determining destination position. A path finding algorithm has been developed for determining the position of the blind person and guide him/her to his/her destination. The developed algorithm detects the position of the user by receiving continuous data from transmitter and guide the user to his/her destination by voice command. The ultrasonic sensor mounted on the cap detects the obstacles along the pathway of the visually impaired person. This proposed navigation system does not require any complex infrastructure design or the necessity of holding any extra assistive device by the user (i.e. augmented cane, smartphone, cameras). In the proposed design, prerecorded voice command will provide movement guideline to every edge of the indoor environment according to the user's destination choice. This makes this navigation system relatively simple and user friendly for those who are not much familiar with the most advanced technology and people with physical disabilities. Moreover, this proposed navigation system does not need GPS or any telecommunication networks which makes it suitable for use in rural areas where there is no telecommunication network coverage. In conclusion, the proposed system is relatively cheaper to implement in comparison to other existing navigation system, which will contribute to the betterment of the visually impaired people's lifestyle of developing and under developed countries.",
"title": ""
}
] | scidocsrr |
c3a67924b943b0a1671f266cf8d42406 | Hybrid CPU-GPU Framework for Network Motifs | [
{
"docid": "777d4e55f3f0bbb0544130931006b237",
"text": "Spatial pyramid matching is a standard architecture for categorical image retrieval. However, its performance is largely limited by the prespecified rectangular spatial regions when pooling local descriptors. In this paper, we propose to learn object-shaped and directional receptive fields for image categorization. In particular, different objects in an image are seamlessly constructed by superpixels, while the direction captures human gaze shifting path. By generating a number of superpixels in each image, we construct graphlets to describe different objects. They function as the object-shaped receptive fields for image comparison. Due to the huge number of graphlets in an image, a saliency-guided graphlet selection algorithm is proposed. A manifold embedding algorithm encodes graphlets with the semantics of training image tags. Then, we derive a manifold propagation to calculate the postembedding graphlets by leveraging visual saliency maps. The sequentially propagated graphlets constitute a path that mimics human gaze shifting. Finally, we use the learned graphlet path as receptive fields for local image descriptor pooling. The local descriptors from similar receptive fields of pairwise images more significantly contribute to the final image kernel. Thorough experiments demonstrate the advantage of our approach.",
"title": ""
}
] | [
{
"docid": "b9b6fc972d887f64401ec77e3ca1e49b",
"text": "We select a menu of seven popular decision theories and embed each theory in five models of stochastic choice, including tremble, Fechner and random utility model. We find that the estimated parameters of decision theories differ significantly when theories are combined with different models. Depending on the selected model of stochastic choice we obtain different rankings of decision theories with regard to their goodness of fit to the data. The fit of all analyzed decision theories improves significantly when they are embedded in a Fechner model of heteroscedastic truncated errors or a random utility model. Copyright 2009 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "cf751df3c52306a106fcd00eef28b1a4",
"text": "Mul-T is a parallel Lisp system, based on Multilisp's future construct, that has been developed to run on an Encore Multimax multiprocessor. Mul-T is an extended version of the Yale T system and uses the T system's ORBIT compiler to achieve “production quality” performance on stock hardware — about 100 times faster than Multilisp. Mul-T shows that futures can be implemented cheaply enough to be useful in a production-quality system. Mul-T is fully operational, including a user interface that supports managing groups of parallel tasks.",
"title": ""
},
{
"docid": "141c28bfbeb5e71dc68d20b6220794c3",
"text": "The development of topical cosmetic anti-aging products is becoming increasingly sophisticated. This is demonstrated by the benefit agents selected and the scientific approaches used to identify them, treatment protocols that increasingly incorporate multi-product regimens, and the level of rigor in the clinical testing used to demonstrate efficacy. Consistent with these principles, a new cosmetic anti-aging regimen was recently developed. The key product ingredients were identified based on an understanding of the key mechanistic themes associated with aging at the genomic level coupled with appropriate in vitro testing. The products were designed to provide optimum benefits when used in combination in a regimen format. This cosmetic regimen was then tested for efficacy against the appearance of facial wrinkles in a 24-week clinical trial compared with 0.02% tretinoin, a recognized benchmark prescription treatment for facial wrinkling. The cosmetic regimen significantly improved wrinkle appearance after 8 weeks relative to tretinoin and was better tolerated. Wrinkle appearance benefits from the two treatments in cohorts of subjects who continued treatment through 24 weeks were also comparable.",
"title": ""
},
{
"docid": "083d5b88cc1bf5490a0783a4a94e9fb2",
"text": "Taking care and maintenance of a healthy population is the Strategy of each country. Information and communication technologies in the health care system have led to many changes in order to improve the quality of health care services to patients, rational spending time and reduce costs. In the booming field of IT research, the reach of drug delivery, information on grouping of similar drugs has been lacking. The wealth distribution and drug affordability at a certain demographic has been interlinked and proposed in this paper. Looking at the demographic we analyze and group the drugs based on target action and link this to the wealth and the people to medicine ratio, which can be accomplished via data mining and web mining. The data thus mined will be analysed and made available to public and commercial purpose for their further knowledge and benefit.",
"title": ""
},
{
"docid": "f3a7e0f63d85c069e3f2ab75dcedc671",
"text": "The commit processing in a Distributed Real Time Database (DRTDBS) can significantly increase execution time of a transaction. Therefore, designing a good commit protocol is important for the DRTDBS; the main challenge is the adaptation of standard commit protocol into the real time database system and so, decreasing the number of missed transaction in the systems. In these papers we review the basic commit protocols and the other protocols depend on it, for enhancing the transaction performance in DRTDBS. We propose a new commit protocol for reducing the number of transaction that missing their deadline. Keywords— DRTDBS, Commit protocols, Commit processing, 2PC protocol, 3PC protocol, Missed Transaction, Abort Transaction.",
"title": ""
},
{
"docid": "711ad6f6641b916f25f08a32d4a78016",
"text": "Information technology (IT) such as Electronic Data Interchange (EDI), Radio Frequency Identification Technology (RFID), wireless, the Internet and World Wide Web (WWW), and Information Systems (IS) such as Electronic Commerce (E-Commerce) systems and Enterprise Resource Planning (ERP) systems have had tremendous impact in education, healthcare, manufacturing, transportation, retailing, pure services, and even war. Many organizations turned to IT/IS to help them achieve their goals; however, many failed to achieve the full potential of IT/IS. These failures can be attributed at least in part to a weak link in the planning process. That weak link is the IT/IS justification process. The decision-making process has only grown more difficult in recent years with the increased complexity of business brought about by the rapid growth of supply chain management, the virtual enterprise and E-business. These are but three of the many changes in the business environment over the past 10–12 years. The complexities of this dynamic new business environment should be taken into account in IT/IS justification. We conducted a review of the current literature on IT/IS justification. The purpose of the literature review was to assemble meaningful information for the development of a framework for IT/IS evaluation that better reflects the new business environment. A suitable classification scheme has been proposed for organizing the literature reviewed. Directions for future research are indicated. 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "74a9612c1ca90a9d7b6152d19af53d29",
"text": "Collective entity disambiguation, or collective entity linking aims to jointly resolve multiple mentions by linking them to their associated entities in a knowledge base. Previous works are primarily based on the underlying assumption that entities within the same document are highly related. However, the extent to which these entities are actually connected in reality is rarely studied and therefore raises interesting research questions. For the first time, this paper shows that the semantic relationships between mentioned entities within a document are in fact less dense than expected. This could be attributed to several reasons such as noise, data sparsity, and knowledge base incompleteness. As a remedy, we introduce MINTREE, a new tree-based objective for the problem of entity disambiguation. The key intuition behind MINTREE is the concept of coherence relaxation which utilizes the weight of a minimum spanning tree to measure the coherence between entities. Based on this new objective, we design Pair-Linking, a novel iterative solution for the MINTREE optimization problem. The idea of Pair-Linking is simple: instead of considering all the given mentions, Pair-Linking iteratively selects a pair with the highest confidence at each step for decision making. Via extensive experiments on 8 benchmark datasets, we show that our approach is not only more accurate but also surprisingly faster than many state-of-the-art collective linking algorithms.",
"title": ""
},
{
"docid": "5398b76e55bce3c8e2c1cd89403b8bad",
"text": "To cite: He A, Kwatra SG, Kazi N, et al. BMJ Case Rep Published online: [please include Day Month Year] doi:10.1136/bcr-2016215335 DESCRIPTION A woman aged 45 years presented for evaluation of skin lesions. She reported an 8–9-year history of occasionally tender, waxing-and-waning skin nodules refractory to dapsone, prednisone and methotrexate. Examination revealed multiple indurated subcutaneous nodules distributed on the upper extremities, with scattered patches of lipoatrophy in areas of nodule regression (figure 1). Her medical history was unremarkable; CBC and CMP were within normal limits, with no history of radiotherapy or evidence of internal organ involvement. She had a positive ANA titre (1:160, speckled), but negative anti-dsDNA, anti-Smith, anti-Ro and anti-La antibodies. Differential diagnosis included erythema nodosum (EN), erythema induratum of Bazin (EIB), lupus profundus (LP) and cutaneous lymphoma. Initial wedge biopsy in 2008 disclosed a predominantly lobular panniculitic process with some septal involvement (figure 2A). Broad zones of necrosis were present (figure 2B). The infiltrate consisted of a pleomorphic population of lymphocytes with occasional larger atypical lymphocytes (figure 2C). There were foci of adipocyte rimming by the atypical lymphocytes (figure 2C). Immunophenotyping revealed predominance of CD3+ T cells with some CD20+ B-cell aggregates. The atypical cells stained CD4 and CD8 in approximately equal ratios. TIA-1 was positive in many of the atypical cells but not prominently enough to render a diagnosis of cytotoxic T-cell lymphoma. T-cell receptor PCR studies showed polyclonality. Subsequent biopsies performed annually after treatment with prednisone in 2008 and 2010, dapsone in 2009 and methotrexate in 2012 showed very similar pathological and molecular features. Adipocyte rimming and TCR polyclonality persisted. EN is characterised by subcutaneous nodules on the lower extremities in association with elevated erythrocyte sedimentation rate (ESR) and C reactive protein (CRP), influenza-like prodrome preceding nodule formation and self-limiting course. Histologically, EN shows a mostly septal panniculitis with radial granulomas. EN was ruled out on the basis of normal ESR (6) and CRP (<0.1), chronic relapsing course and predominantly lobular panniculitis process histologically. EIB typically presents with violaceous nodules located on the posterior lower extremities, with arms rarely affected, of patients with a history of tuberculosis (TB). Histologically, EIB shows granulomatous inflammation with focal necrosis, vasculitis and septal fibrosis. Our patient had no evidence or history of TB infection and presented with nodules of a different clinical morphology. Ultimately, this constellation of histological and immunophenotypic findings showed an atypical panniculitic T-lymphocytic infiltrate. Although the lesion showed a lobular panniculitis with features that could be seen in subcutaneous panniculitis-like T-cell lymphoma (SPTCL), the presence of plasma cells, absence of CD8 and TIA restriction and T-cell polyclonality did not definitively support that",
"title": ""
},
{
"docid": "cb3d1448269b29807dc62aa96ff6ad1a",
"text": "OBJECTIVES\nInformation overload in electronic medical records can impede providers' ability to identify important clinical data and may contribute to medical error. An understanding of the information requirements of ICU providers will facilitate the development of information systems that prioritize the presentation of high-value data and reduce information overload. Our objective was to determine the clinical information needs of ICU physicians, compared to the data available within an electronic medical record.\n\n\nDESIGN\nProspective observational study and retrospective chart review.\n\n\nSETTING\nThree ICUs (surgical, medical, and mixed) at an academic referral center.\n\n\nSUBJECTS\nNewly admitted ICU patients and physicians (residents, fellows, and attending staff).\n\n\nMEASUREMENTS AND MAIN RESULTS\nThe clinical information used by physicians during the initial diagnosis and treatment of admitted patients was captured using a questionnaire. Clinical information concepts were ranked according to the frequency of reported use (primary outcome) and were compared to information availability in the electronic medical record (secondary outcome). Nine hundred twenty-five of 1,277 study questionnaires (408 patients) were completed. Fifty-one clinical information concepts were identified as being useful during ICU admission. A median (interquartile range) of 11 concepts (6-16) was used by physicians per patient admission encounter with four used greater than 50% of the time. Over 25% of the clinical data available in the electronic medical record was never used, and only 33% was used greater than 50% of the time by admitting physicians.\n\n\nCONCLUSIONS\nPhysicians use a limited number of clinical information concepts at the time of patient admission to the ICU. The electronic medical record contains an abundance of unused data. Better electronic data management strategies are needed, including the priority display of frequently used clinical concepts within the electronic medical record, to improve the efficiency of ICU care.",
"title": ""
},
{
"docid": "b83e537a2c8dcd24b096005ef0cb3897",
"text": "We present Deep Speaker, a neural speaker embedding system that maps utterances to a hypersphere where speaker similarity is measured by cosine similarity. The embeddings generated by Deep Speaker can be used for many tasks, including speaker identification, verification, and clustering. We experiment with ResCNN and GRU architectures to extract the acoustic features, then mean pool to produce utterance-level speaker embeddings, and train using triplet loss based on cosine similarity. Experiments on three distinct datasets suggest that Deep Speaker outperforms a DNN-based i-vector baseline. For example, Deep Speaker reduces the verification equal error rate by 50% (relatively) and improves the identification accuracy by 60% (relatively) on a text-independent dataset. We also present results that suggest adapting from a model trained with Mandarin can improve accuracy for English speaker recognition.",
"title": ""
},
{
"docid": "f38709ee76dd9988b36812a7801f7336",
"text": "BACKGROUND\nMost individuals with mood disorders experience psychiatric and/or medical comorbidity. Available treatment guidelines for major depressive disorder (MDD) and bipolar disorder (BD) have focused on treating mood disorders in the absence of comorbidity. Treating comorbid conditions in patients with mood disorders requires sufficient decision support to inform appropriate treatment.\n\n\nMETHODS\nThe Canadian Network for Mood and Anxiety Treatments (CANMAT) task force sought to prepare evidence- and consensus-based recommendations on treating comorbid conditions in patients with MDD and BD by conducting a systematic and qualitative review of extant data. The relative paucity of studies in this area often required a consensus-based approach to selecting and sequencing treatments.\n\n\nRESULTS\nSeveral principles emerge when managing comorbidity. They include, but are not limited to: establishing the diagnosis, risk assessment, establishing the appropriate setting for treatment, chronic disease management, concurrent or sequential treatment, and measurement-based care.\n\n\nCONCLUSIONS\nEfficacy, effectiveness, and comparative effectiveness research should emphasize treatment and management of conditions comorbid with mood disorders. Clinicians are encouraged to screen and systematically monitor for comorbid conditions in all individuals with mood disorders. The common comorbidity in mood disorders raises fundamental questions about overlapping and discrete pathoetiology.",
"title": ""
},
{
"docid": "af12993c21eb626a7ab8715da1f608c9",
"text": "Today, both the military and commercial sectors are placing an increased emphasis on global communications. This has prompted the development of several low earth orbit satellite systems that promise worldwide connectivity and real-time voice communications. This article provides a tutorial overview of the IRIDIUM low earth orbit satellite system and performance results obtained via simulation. First, it presents an overview of key IRIDIUM design parameters and features. Then, it examines the issues associated with routing in a dynamic network topology, focusing on network management and routing algorithm selection. Finally, it presents the results of the simulation and demonstrates that the IRIDIUM system is a robust system capable of meeting published specifications.",
"title": ""
},
{
"docid": "f614df1c1775cd4e2a6927fce95ffa46",
"text": "In this paper we have designed and implemented (15, k) a BCH Encoder and decoder using VHDL for reliable data transfer in AWGN channel with multiple error correction control. The digital logic implementation of binary encoding of multiple error correcting BCH code (15, k) of length n=15 over GF (2 4 ) with irreducible primitive polynomial x 4 +x+1 is organized into shift register circuits. Using the cyclic codes, the reminder b(x) can be obtained in a linear (15-k) stage shift register with feedback connections corresponding to the coefficients of the generated polynomial. Three encoders are designed using VHDL to encode the single, double and triple error correcting BCH code (15, k) corresponding to the coefficient of generated polynomial. Information bit is transmitted in unchanged form up to K clock cycles and during this period parity bits are calculated in the LFSR then the parity bits are transmitted from k+1 to 15 clock cycles. Total 15-k numbers of parity bits with k information bits are transmitted in 15 code word. In multiple error correction method, we have implemented (15, 5 ,3 ) ,(15,7, 2) and (15, 11, 1) BCH encoder and decoder using VHDL and the simulation is done using Xilinx ISE 14.2. KeywordsBCH, BER, SNR, BCH Encoder, Decoder VHDL, Error Correction, AWGN, LFSR",
"title": ""
},
{
"docid": "81291c707a102fac24a9d5ab0665238d",
"text": "CAN bus is ISO international standard serial communication protocol. It is one of the most widely used fieldbus in the world. It has become the standard bus of embedded industrial control LAN. Ethernet is the most common communication protocol standard that is applied in the existing LAN. Networked industrial control usually adopts fieldbus and Ethernet network, thus the protocol conversion problems of the heterogeneous network composed of Ethernet and CAN bus has become one of the research hotspots in the technology of the industrial control network. STM32F103RC ARM microprocessor was used in the design of the Ethernet-CAN protocol conversion module, the simplified TCP/IP communication protocol uIP protocol was adopted to improve the efficiency of the protocol conversion and guarantee the stability of the system communication. The results of the experiments show that the designed module can realize high-speed and transparent protocol conversion.",
"title": ""
},
{
"docid": "32744d62b45f742cdab55ab462670a39",
"text": "The kinematics of manipulators is a central problem in the automatic control of robot manipulators. Theoretical background for the analysis of the 5 Dof Lynx-6 educational Robot Arm kinematics is presented in this paper. The kinematics problem is defined as the transformation from the Cartesian space to the joint space and vice versa. The Denavit-Harbenterg (D-H) model of representation is used to model robot links and joints in this study. Both forward and inverse kinematics solutions for this educational manipulator are presented, An effective method is suggested to decrease multiple solutions in inverse kinematics. A visual software package, named MSG, is also developed for testing Motional Characteristics of the Lynx-6 Robot arm. The kinematics solutions of the software package were found to be identical with the robot arm’s physical motional behaviors. Keywords—Lynx 6, robot arm, forward kinematics, inverse kinematics, software, DH parameters, 5 DOF ,SSC-32 , simulator.",
"title": ""
},
{
"docid": "189d0b173f8a9e0b3deb21398955dc3c",
"text": "Do investments in customer satisfaction lead to excess returns? If so, are these returns associated with higher stock market risk? The empirical evidence presented in this article suggests that the answer to the first question is yes, but equally remarkable, the answer to the second question is no, suggesting that satisfied customers are economic assets with high returns/low risk. Although these results demonstrate stock market imperfections with respect to the time it takes for share prices to adjust, they are consistent with previous studies in marketing in that a firm’s satisfied customers are likely to improve both the level and the stability of net cash flows. The implication, implausible as it may seem in other contexts, is high return/low risk. Specifically, the authors find that customer satisfaction, as measured by the American Customer Satisfaction Index (ACSI), is significantly related to market value of equity. Yet news about ACSI results does not move share prices. This apparent inconsistency is the catalyst for examining whether excess stock returns might be generated as a result. The authors present two stock portfolios: The first is a paper portfolio that is back tested, and the second is an actual case. At low systematic risk, both outperform the market by considerable margins. In other words, it is possible to beat the market consistently by investing in firms that do well on the ACSI.",
"title": ""
},
{
"docid": "361dc8037ebc30cd2f37f4460cf43569",
"text": "OVERVIEW: Next-generation semiconductor factories need to support miniaturization below 100 nm and have higher production efficiency, mainly of 300-mm-diameter wafers. Particularly to reduce the price of semiconductor devices, shorten development time [thereby reducing the TAT (turn-around time)], and support frequent product changeovers, semiconductor manufacturers must enhance the productivity of their systems. To meet these requirements, Hitachi proposes solutions that will support e-manufacturing on the next-generation semiconductor production line (see Fig. 1). Yasutsugu Usami Isao Kawata Hideyuki Yamamoto Hiroyoshi Mori Motoya Taniguchi, Dr. Eng.",
"title": ""
},
{
"docid": "822e6c57ea2bbb53d43e44cf1bda8833",
"text": "The investigators proposed that transgression-related interpersonal motivations result from 3 psychological parameters: forbearance (abstinence from avoidance and revenge motivations, and maintenance of benevolence), trend forgiveness (reductions in avoidance and revenge, and increases in benevolence), and temporary forgiveness (transient reductions in avoidance and revenge, and transient increases in benevolence). In 2 studies, the investigators examined this 3-parameter model. Initial ratings of transgression severity and empathy were directly related to forbearance but not trend forgiveness. Initial responsibility attributions were inversely related to forbearance but directly related to trend forgiveness. When people experienced high empathy and low responsibility attributions, they also tended to experience temporary forgiveness. The distinctiveness of each of these 3 parameters underscores the importance of studying forgiveness temporally.",
"title": ""
},
{
"docid": "0eff5b8ec08329b4a5d177baab1be512",
"text": "In the era of the Internet of Things (IoT), an enormous amount of sensing devices collect and/or generate various sensory data over time for a wide range of fields and applications. Based on the nature of the application, these devices will result in big or fast/real-time data streams. Applying analytics over such data streams to discover new information, predict future insights, and make control decisions is a crucial process that makes IoT a worthy paradigm for businesses and a quality-of-life improving technology. In this paper, we provide a thorough overview on using a class of advanced machine learning techniques, namely deep learning (DL), to facilitate the analytics and learning in the IoT domain. We start by articulating IoT data characteristics and identifying two major treatments for IoT data from a machine learning perspective, namely IoT big data analytics and IoT streaming data analytics. We also discuss why DL is a promising approach to achieve the desired analytics in these types of data and applications. The potential of using emerging DL techniques for IoT data analytics are then discussed, and its promises and challenges are introduced. We present a comprehensive background on different DL architectures and algorithms. We also analyze and summarize major reported research attempts that leveraged DL in the IoT domain. The smart IoT devices that have incorporated DL in their intelligence background are also discussed. DL implementation approaches on the fog and cloud centers in support of IoT applications are also surveyed. Finally, we shed light on some challenges and potential directions for future research. At the end of each section, we highlight the lessons learned based on our experiments and review of the recent literature.",
"title": ""
}
] | scidocsrr |
a878e2419a221c2d3ea14f442da19ba2 | Effects of Website Interactivity on Online Retail Shopping Behavior | [
{
"docid": "57b945df75d8cd446caa82ae02074c3a",
"text": "A key issue facing information systems researchers and practitioners has been the difficulty in creating favorable user reactions to new technologies. Insufficient or ineffective training has been identified as one of the key factors underlying this disappointing reality. Among the various enhancements to training being examined in research, the role of intrinsic motivation as a lever to create favorable user perceptions has not been sufficiently exploited. In this research, two studies were conducted to compare a traditional training method with a training method that included a component aimed at enhancing intrinsic motivation. The results strongly favored the use of an intrinsic motivator during training. Key implications for theory and practice are discussed. 1Allen Lee was the accepting senior editor for this paper. Sometimes when I am at my computer, I say to my wife, \"1'11 be done in just a minute\" and the next thing I know she's standing over me saying, \"It's been an hour!\" (Collins 1989, p. 11). Investment in emerging information technology applications can lead to productivity gains only if they are accepted and used. Several theoretical perspectives have emphasized the importance of user perceptions of ease of use as a key factor affecting acceptance of information technology. Favorable ease of use perceptions are necessary for initial acceptance (Davis et al. 1989), which of course is essential for adoption and continued use. During the early stages of learning and use, ease of use perceptions are significantly affected by training (e.g., Venkatesh and Davis 1996). Investments in training by organizations have been very high and have continued to grow rapidly. Kelly (1982) reported a figure of $100B, which doubled in about a decade (McKenna 1990). In spite of such large investments in training , only 10% of training leads to a change in behavior On trainees' jobs (Georgenson 1982). Therefore, it is important to understand the most effective training methods (e.g., Facteau et al. 1995) and to improve existing training methods in order to foster favorable perceptions among users about the ease of use of a technology, which in turn should lead to acceptance and usage. Prior research in psychology (e.g., Deci 1975) suggests that intrinsic motivation during training leads to beneficial outcomes. However, traditional training methods in information systems research have tended to emphasize imparting knowledge to potential users (e.g., Nelson and Cheney 1987) while not paying Sufficient attention to intrinsic motivation during training. The two field …",
"title": ""
},
{
"docid": "205ef76e947feb4bddbe86b0835e20b3",
"text": "Received: 12 July 2000 Revised: 20 August 2001 : 30 July 2002 Accepted: 15 October 2002 Abstract This paper explores factors that influence consumer’s intentions to purchase online at an electronic commerce website. Specifically, we investigate online purchase intention using two different perspectives: a technology-oriented perspective and a trust-oriented perspective. We summarise and review the antecedents of online purchase intention that have been developed within these two perspectives. An empirical study in which the contributions of both perspectives are investigated is reported. We study the perceptions of 228 potential online shoppers regarding trust and technology and their attitudes and intentions to shop online at particular websites. In terms of relative contributions, we found that the trust-antecedent ‘perceived risk’ and the technology-antecedent ‘perceived ease-of-use’ directly influenced the attitude towards purchasing online. European Journal of Information Systems (2003) 12, 41–48. doi:10.1057/ palgrave.ejis.3000445",
"title": ""
}
] | [
{
"docid": "617bb88fdb8b76a860c58fc887ab2bc4",
"text": "Although space syntax has been successfully applied to many urban GIS studies, there is still a need to develop robust algorithms that support the automated derivation of graph representations. These graph structures are needed to apply the computational principles of space syntax and derive the morphological view of an urban structure. So far the application of space syntax principles to the study of urban structures has been a partially empirical and non-deterministic task, mainly due to the fact that an urban structure is modeled as a set of axial lines whose derivation is a non-computable process. This paper proposes an alternative model of space for the application of space syntax principles, based on the concepts of characteristic points defined as the nodes of an urban structure schematised as a graph. This method has several advantages over the axial line representation: it is computable and cognitively meaningful. Our proposal is illustrated by a case study applied to the city of GaÈ vle in Sweden. We will also show that this method has several nice properties that surpass the axial line technique.",
"title": ""
},
{
"docid": "8666fe5a01f032d744a3e798241a30f6",
"text": "Emojis have gone viral on the Internet across platforms and devices. Interwoven into our daily communications, they have become a ubiquitous new language. However, little has been done to analyze the usage of emojis at scale and in depth. Why do some emojis become especially popular while others don’t? How are people using them among the words? In this work, we take the initiative to study the collective usage and behavior of emojis, and specifically, how emojis interact with their context. We base our analysis on a very large corpus collected from a popular emoji keyboard, which contains a full month of inputs from millions of users. Our analysis is empowered by a state-of-the-art machine learning tool that computes the embeddings of emojis and words in a semantic space. We find that emojis with clear semantic meanings are more likely to be adopted. While entity-related emojis are more likely to be used as alternatives to words, sentimentrelated emojis often play a complementary role in a message. Overall, emojis are significantly more prevalent in a senti-",
"title": ""
},
{
"docid": "5a5ae4ab9b802fe6d5481f90a4aa07b7",
"text": "High-dimensional pattern classification was applied to baseline and multiple follow-up MRI scans of the Alzheimer's Disease Neuroimaging Initiative (ADNI) participants with mild cognitive impairment (MCI), in order to investigate the potential of predicting short-term conversion to Alzheimer's Disease (AD) on an individual basis. MCI participants that converted to AD (average follow-up 15 months) displayed significantly lower volumes in a number of grey matter (GM) regions, as well as in the white matter (WM). They also displayed more pronounced periventricular small-vessel pathology, as well as an increased rate of increase of such pathology. Individual person analysis was performed using a pattern classifier previously constructed from AD patients and cognitively normal (CN) individuals to yield an abnormality score that is positive for AD-like brains and negative otherwise. The abnormality scores measured from MCI non-converters (MCI-NC) followed a bimodal distribution, reflecting the heterogeneity of this group, whereas they were positive in almost all MCI converters (MCI-C), indicating extensive patterns of AD-like brain atrophy in almost all MCI-C. Both MCI subgroups had similar MMSE scores at baseline. A more specialized classifier constructed to differentiate converters from non-converters based on their baseline scans provided good classification accuracy reaching 81.5%, evaluated via cross-validation. These pattern classification schemes, which distill spatial patterns of atrophy to a single abnormality score, offer promise as biomarkers of AD and as predictors of subsequent clinical progression, on an individual patient basis.",
"title": ""
},
{
"docid": "20cfcfde25db033db8d54fe7ae6fcca1",
"text": "We present the first study that evaluates both speaker and listener identification for direct speech in literary texts. Our approach consists of two steps: identification of speakers and listeners near the quotes, and dialogue chain segmentation. Evaluation results show that this approach outperforms a rule-based approach that is stateof-the-art on a corpus of literary texts.",
"title": ""
},
{
"docid": "313c68843b2521d553772dd024eec202",
"text": "In this work we perform an analysis of probabilistic approaches to recommendation upon a different validation perspective, which focuses on accuracy metrics such as recall and precision of the recommendation list. Traditionally, state-of-art approches to recommendations consider the recommendation process from a “missing value prediction” perspective. This approach simplifies the model validation phase that is based on the minimization of standard error metrics such as RMSE. However, recent studies have pointed several limitations of this approach, showing that a lower RMSE does not necessarily imply improvements in terms of specific recommendations. We demonstrate that the underlying probabilistic framework offers several advantages over traditional methods, in terms of flexibility in the generation of the recommendation list and consequently in the accuracy of recommendation.",
"title": ""
},
{
"docid": "972abdbc8667c24ae080eb2ffb7835e9",
"text": "Two important cues to female physical attractiveness are body mass index (BMI) and shape. In front view, it seems that BMI may be more important than shape; however, is it true in profile where shape cues may be stronger? There is also the question of whether men and women have the same perception of female physical attractiveness. Some studies have suggested that they do not, but this runs contrary to mate selection theory. This predicts that women will have the same perception of female attractiveness as men do. This allows them to judge their own relative value, with respect to their peer group, and match this value with the value of a prospective mate. To clarify these issues we asked 40 male and 40 female undergraduates to rate a set of pictures of real women (50 in front-view and 50 in profile) for attractiveness. BMI was the primary predictor of attractiveness in both front and profile, and the putative visual cues to BMI showed a higher degree of view-invariance than shape cues such as the waist-hip ratio (WHR). Consistent with mate selection theory, there were no significant differences in the rating of attractiveness by male and female raters.",
"title": ""
},
{
"docid": "b262ea4a0a8880d044c77acc84b0c859",
"text": "Online social networks may be important avenues for building and maintaining social capital as adult’s age. However, few studies have explicitly examined the role online communities play in the lives of seniors. In this exploratory study, U.S. seniors were interviewed to assess the impact of Facebook on social capital. Interpretive thematic analysis reveals Facebook facilitates connections to loved ones and may indirectly facilitate bonding social capital. Awareness generated via Facebook often lead to the sharing and receipt of emotional support via other channels. As such, Facebook acted as a catalyst for increasing social capital. The implication of “awareness” as a new dimension of social capital theory is discussed. Additionally, Facebook was found to have potential negative impacts on seniors’ current relationships due to open access to personal information. Finally, common concerns related to privacy, comfort with technology, and inappropriate content were revealed.",
"title": ""
},
{
"docid": "7ea3d3002506e0ea6f91f4bdab09c2d5",
"text": "We propose a novel and robust computational framework for automatic detection of deformed 2D wallpaper patterns in real-world images. The theory of 2D crystallographic groups provides a sound and natural correspondence between the underlying lattice of a deformed wallpaper pattern and a degree-4 graphical model. We start the discovery process with unsupervised clustering of interest points and voting for consistent lattice unit proposals. The proposed lattice basis vectors and pattern element contribute to the pairwise compatibility and joint compatibility (observation model) functions in a Markov random field (MRF). Thus, we formulate the 2D lattice detection as a spatial, multitarget tracking problem, solved within an MRF framework using a novel and efficient mean-shift belief propagation (MSBP) method. Iterative detection and growth of the deformed lattice are interleaved with regularized thin-plate spline (TPS) warping, which rectifies the current deformed lattice into a regular one to ensure stability of the MRF model in the next round of lattice recovery. We provide quantitative comparisons of our proposed method with existing algorithms on a diverse set of 261 real-world photos to demonstrate significant advances in accuracy and speed over the state of the art in automatic discovery of regularity in real images.",
"title": ""
},
{
"docid": "1527c70d0b78a3d2aa6886282425c744",
"text": "Spatial and temporal contextual information plays a key role for analyzing user behaviors, and is helpful for predicting where he or she will go next. With the growing ability of collecting information, more and more temporal and spatial contextual information is collected in systems, and the location prediction problem becomes crucial and feasible. Some works have been proposed to address this problem, but they all have their limitations. Factorizing Personalized Markov Chain (FPMC) is constructed based on a strong independence assumption among different factors, which limits its performance. Tensor Factorization (TF) faces the cold start problem in predicting future actions. Recurrent Neural Networks (RNN) model shows promising performance comparing with PFMC and TF, but all these methods have problem in modeling continuous time interval and geographical distance. In this paper, we extend RNN and propose a novel method called Spatial Temporal Recurrent Neural Networks (ST-RNN). ST-RNN can model local temporal and spatial contexts in each layer with time-specific transition matrices for different time intervals and distance-specific transition matrices for different geographical distances. Experimental results show that the proposed ST-RNN model yields significant improvements over the competitive compared methods on two typical datasets, i.e., Global Terrorism Database (GTD) and Gowalla dataset.",
"title": ""
},
{
"docid": "5601a0da8cfaf42d30b139c535ae37db",
"text": "This article presents some key achievements and recommendations from the IoT6 European research project on IPv6 exploitation for the Internet of Things (IoT). It highlights the potential of IPv6 to support the integration of a global IoT deployment including legacy systems by overcoming horizontal fragmentation as well as more direct vertical integration between communicating devices and the cloud.",
"title": ""
},
{
"docid": "8758425824753fea372eeeeb18ee5856",
"text": "By adopting the distributed problem-solving strategy, swarm intelligence algorithms have been successfully applied to many optimization problems that are difficult to deal with using traditional methods. At present, there are many well-implemented algorithms, such as particle swarm optimization, genetic algorithm, artificial bee colony algorithm, and ant colony optimization. These algorithms have already shown favorable performances. However, with the objects becoming increasingly complex, it is becoming gradually more difficult for these algorithms to meet human’s demand in terms of accuracy and time. Designing a new algorithm to seek better solutions for optimization problems is becoming increasingly essential. Dolphins have many noteworthy biological characteristics and living habits such as echolocation, information exchanges, cooperation, and division of labor. Combining these biological characteristics and living habits with swarm intelligence and bringing them into optimization problems, we propose a brand new algorithm named the ‘dolphin swarm algorithm’ in this paper. We also provide the definitions of the algorithm and specific descriptions of the four pivotal phases in the algorithm, which are the search phase, call phase, reception phase, and predation phase. Ten benchmark functions with different properties are tested using the dolphin swarm algorithm, particle swarm optimization, genetic algorithm, and artificial bee colony algorithm. The convergence rates and benchmark function results of these four algorithms are compared to testify the effect of the dolphin swarm algorithm. The results show that in most cases, the dolphin swarm algorithm performs better. The dolphin swarm algorithm possesses some great features, such as first-slow-then-fast convergence, periodic convergence, local-optimum-free, and no specific demand on benchmark functions. Moreover, the dolphin swarm algorithm is particularly appropriate to optimization problems, with more calls of fitness functions and fewer individuals.",
"title": ""
},
{
"docid": "e7e9d6054a61a1f4a3ab7387be28538a",
"text": "Next generation deep neural networks for classification hosted on embedded platforms will rely on fast, efficient, and accurate learning algorithms. Initialization of weights in learning networks has a great impact on the classification accuracy. In this paper we focus on deriving good initial weights by modeling the error function of a deep neural network as a high-dimensional landscape. We observe that due to the inherent complexity in its algebraic structure, such an error function may conform to general results of the statistics of large systems. To this end we apply some results from Random Matrix Theory to analyse these functions. We model the error function in terms of a Hamiltonian in N-dimensions and derive some theoretical results about its general behavior. These results are further used to make better initial guesses of weights for the learning algorithm.",
"title": ""
},
{
"docid": "2c7fe5484b2184564d71a03f19188251",
"text": "This paper focuses on running scans in a main memory data processing system at \"bare metal\" speed. Essentially, this means that the system must aim to process data at or near the speed of the processor (the fastest component in most system configurations). Scans are common in main memory data processing environments, and with the state-of-the-art techniques it still takes many cycles per input tuple to apply simple predicates on a single column of a table. In this paper, we propose a technique called BitWeaving that exploits the parallelism available at the bit level in modern processors. BitWeaving operates on multiple bits of data in a single cycle, processing bits from different columns in each cycle. Thus, bits from a batch of tuples are processed in each cycle, allowing BitWeaving to drop the cycles per column to below one in some case. BitWeaving comes in two flavors: BitWeaving/V which looks like a columnar organization but at the bit level, and BitWeaving/H which packs bits horizontally. In this paper we also develop the arithmetic framework that is needed to evaluate predicates using these BitWeaving organizations. Our experimental results show that both these methods produce significant performance benefits over the existing state-of-the-art methods, and in some cases produce over an order of magnitude in performance improvement.",
"title": ""
},
{
"docid": "3aca00d6a5038876340b1fbe08e5ddb6",
"text": "People who design, use, and are affected by autonomous artificially intelligent agents want to be able to trust such agents—that is, to know that these agents will perform correctly, to understand the reasoning behind their actions, and to know how to use them appropriately. Many techniques have been devised to assess and influence human trust in artificially intelligent agents. However, these approaches are typically ad hoc and have not been formally related to each other or to formal trust models. This article presents a survey of algorithmic assurances, i.e., programmed components of agent operation that are expressly designed to calibrate user trust in artificially intelligent agents. Algorithmic assurances are first formally defined and classified from the perspective of formally modeled human-artificially intelligent agent trust relationships. Building on these definitions, a synthesis of research across communities such as machine learning, human-computer interaction, robotics, e-commerce, and others reveals that assurance algorithms naturally fall along a spectrum in terms of their impact on an agent’s core functionality, with seven notable classes ranging from integral assurances (which impact an agent’s core functionality) to supplemental assurances (which have no direct effect on agent performance). Common approaches within each of these classes are identified and discussed; benefits and drawbacks of different approaches are also investigated.",
"title": ""
},
{
"docid": "08d1a9f3edc449ff08b45caaaf56f6ad",
"text": "Despite the theoretical and demonstrated empirical significance of parental coping strategies for the wellbeing of families of children with disabilities, relatively little research has focused explicitly on coping in mothers and fathers of children with autism. In the present study, 89 parents of preschool children and 46 parents of school-age children completed a measure of the strategies they used to cope with the stresses of raising their child with autism. Factor analysis revealed four reliable coping dimensions: active avoidance coping, problem-focused coping, positive coping, and religious/denial coping. Further data analysis suggested gender differences on the first two of these dimensions but no reliable evidence that parental coping varied with the age of the child with autism. Associations were also found between coping strategies and parental stress and mental health. Practical implications are considered including reducing reliance on avoidance coping and increasing the use of positive coping strategies.",
"title": ""
},
{
"docid": "c19bc89db255ecf88bc1514d8bd7d018",
"text": "Fulfilling the requirements of point-of-care testing (POCT) training regarding proper execution of measurements and compliance with internal and external quality control specifications is a great challenge. Our aim was to compare the values of the highly critical parameter hemoglobin (Hb) determined with POCT devices and central laboratory analyzer in the highly vulnerable setting of an emergency department in a supra maximal care hospital to assess the quality of POCT performance. In 2548 patients, Hb measurements using POCT devices (POCT-Hb) were compared with Hb measurements performed at the central laboratory (Hb-ZL). Additionally, sub collectives (WHO anemia classification, patients with Hb <8 g/dl and suprageriatric patients (age >85y.) were analyzed. Overall, the correlation between POCT-Hb and Hb-ZL was highly significant (r = 0.96, p<0.001). Mean difference was -0.44g/dl. POCT-Hb values tended to be higher than Hb-ZL values (t(2547) = 36.1, p<0.001). Standard deviation of the differences was 0.62 g/dl. Only in 26 patients (1%), absolute differences >2.5g/dl occurred. McNemar´s test revealed significant differences regarding anemia diagnosis according to WHO definition for male, female and total patients (♂ p<0.001; ♀ p<0.001, total p<0.001). Hb-ZL resulted significantly more often in anemia diagnosis. In samples with Hb<8g/dl, McNemar´s test yielded no significant difference (p = 0.169). In suprageriatric patients, McNemar´s test revealed significant differences regarding anemia diagnosis according to WHO definition in male, female and total patients (♂ p<0.01; ♀ p = 0.002, total p<0.001). The difference between Hb-ZL and POCT-Hb with Hb<8g/dl was not statistically significant (<8g/dl, p = 1.000). Overall, we found a highly significant correlation between the analyzed hemoglobin concentration measurement methods, i.e. POCT devices and at the central laboratory. The results confirm the successful implementation of the presented POCT concept. Nevertheless some limitations could be identified in anemic patients stressing the importance of carefully examining clinically implausible results.",
"title": ""
},
{
"docid": "7f067f869481f06e865880e1d529adc8",
"text": "Distributed Denial of Service (DDoS) is defined as an attack in which mutiple compromised systems are made to attack a single target to make the services unavailable foe legitimate users.It is an attack designed to render a computer or network incapable of providing normal services. DDoS attack uses many compromised intermediate systems, known as botnets which are remotely controlled by an attacker to launch these attacks. DDOS attack basically results in the situation where an entity cannot perform an action for which it is authenticated. This usually means that a legitimate node on the network is unable to reach another node or their performance is degraded. The high interruption and severance caused by DDoS is really posing an immense threat to entire internet world today. Any compromiseto computing, communication and server resources such as sockets, CPU, memory, disk/database bandwidth, I/O bandwidth, router processing etc. for collaborative environment would surely endanger the entire application. It becomes necessary for researchers and developers to understand behaviour of DDoSattack because it affects the target network with little or no advance warning. Hence developing advanced intrusion detection and prevention systems for preventing, detecting, and responding to DDOS attack is a critical need for cyber space. Our rigorous survey study presented in this paper describes a platform for the study of evolution of DDoS attacks and their defense mechanisms.",
"title": ""
},
{
"docid": "fc9babe40365e5dc943fccf088f7a44f",
"text": "The network performance of virtual machines plays a critical role in Network Functions Virtualization (NFV), and several technologies have been developed to address hardware-level virtualization shortcomings. Recent advances in operating system level virtualization and deployment platforms such as Docker have made containers an ideal candidate for high performance application encapsulation and deployment. However, Docker and other solutions typically use lower-performing networking mechanisms. In this paper, we explore the feasibility of using technologies designed to accelerate virtual machine networking with containers, in addition to quantifying the network performance of container-based VNFs compared to the state-of-the-art virtual machine solutions. Our results show that containerized applications can provide lower latency and delay variation, and can take advantage of high performance networking technologies previously only used for hardware virtualization.",
"title": ""
},
{
"docid": "a53f26ef068d11ea21b9ba8609db6ddf",
"text": "This paper presents a novel approach based on enhanced local directional patterns (ELDP) to face recognition, which adopts local edge gradient information to represent face images. Specially, each pixel of every facial image sub-block gains eight edge response values by convolving the local 3 3 neighborhood with eight Kirsch masks, respectively. ELDP just utilizes the directions of the most encoded into a double-digit octal number to produce the ELDP codes. The ELDP dominant patterns (ELDP) are generated by statistical analysis according to the occurrence rates of the ELDP codes in a mass of facial images. Finally, the face descriptor is represented by using the global concatenated histogram based on ELDP or ELDP extracted from the face image which is divided into several sub-regions. The performances of several single face descriptors not integrated schemes are evaluated in face recognition under different challenges via several experiments. The experimental results demonstrate that the proposed method is more robust to non-monotonic illumination changes and slight noise without any filter. & 2013 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "fd59754c40f05710496d3b9738f97e47",
"text": "The extent to which mental health consumers encounter stigma in their daily lives is a matter of substantial importance for their recovery and quality of life. This article summarizes the results of a nationwide survey of 1,301 mental health consumers concerning their experience of stigma and discrimination. Survey results and followup interviews with 100 respondents revealed experience of stigma from a variety of sources, including communities, families, churches, coworkers, and mental health caregivers. The majority of respondents tended to try to conceal their disorders and worried a great deal that others would find out about their psychiatric status and treat them unfavorably. They reported discouragement, hurt, anger, and lowered self-esteem as results of their experiences, and they urged public education as a means for reducing stigma. Some reported that involvement in advocacy and speaking out when stigma and discrimination were encountered helped them to cope with stigma. Limitations to generalization of results include the self-selection, relatively high functioning of participants, and respondent connections to a specific advocacy organization-the National Alliance for the Mentally Ill.",
"title": ""
}
] | scidocsrr |
a0ca633b598eb5e9d27c8b8087043df4 | End-to-End Training of Hybrid CNN-CRF Models for Stereo | [
{
"docid": "c29349c32074392e83f51b1cd214ec8a",
"text": "Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.",
"title": ""
},
{
"docid": "4421a42fc5589a9b91215b68e1575a3f",
"text": "We present a method for extracting depth information from a rectified image pair. Our approach focuses on the first stage of many stereo algorithms: the matching cost computation. We approach the problem by learning a similarity measure on small image patches using a convolutional neural network. Training is carried out in a supervised manner by constructing a binary classification data set with examples of similar and dissimilar pairs of patches. We examine two network architectures for this task: one tuned for speed, the other for accuracy. The output of the convolutional neural network is used to initialize the stereo matching cost. A series of post-processing steps follow: cross-based cost aggregation, semiglobal matching, a left-right consistency check, subpixel enhancement, a median filter, and a bilateral filter. We evaluate our method on the KITTI 2012, KITTI 2015, and Middlebury stereo data sets and show that it outperforms other approaches on all three data sets.",
"title": ""
},
{
"docid": "9dbf1ae31558c80aff4edf94c446b69e",
"text": "This paper presents a data-driven matching cost for stereo matching. A novel deep visual correspondence embedding model is trained via Convolutional Neural Network on a large set of stereo images with ground truth disparities. This deep embedding model leverages appearance data to learn visual similarity relationships between corresponding image patches, and explicitly maps intensity values into an embedding feature space to measure pixel dissimilarities. Experimental results on KITTI and Middlebury data sets demonstrate the effectiveness of our model. First, we prove that the new measure of pixel dissimilarity outperforms traditional matching costs. Furthermore, when integrated with a global stereo framework, our method ranks top 3 among all two-frame algorithms on the KITTI benchmark. Finally, cross-validation results show that our model is able to make correct predictions for unseen data which are outside of its labeled training set.",
"title": ""
}
] | [
{
"docid": "2ce9d2923b6b8be5027e23fb905e8b4d",
"text": "A number of recent advances have been achieved in the study of midbrain dopaminergic neurons. Understanding these advances and how they relate to one another requires a deep understanding of the computational models that serve as an explanatory framework and guide ongoing experimental inquiry. This intertwining of theory and experiment now suggests very clearly that the phasic activity of the midbrain dopamine neurons provides a global mechanism for synaptic modification. These synaptic modifications, in turn, provide the mechanistic underpinning for a specific class of reinforcement learning mechanisms that now seem to underlie much of human and animal behavior. This review describes both the critical empirical findings that are at the root of this conclusion and the fantastic theoretical advances from which this conclusion is drawn.",
"title": ""
},
{
"docid": "41c718697d19ee3ca0914255426a38ab",
"text": "Migraine is a debilitating neurological disorder that affects about 12% of the population. In the past decade, the role of the neuropeptide calcitonin gene-related peptide (CGRP) in migraine has been firmly established by clinical studies. CGRP administration can trigger migraines, and CGRP receptor antagonists ameliorate migraine. In this review, we will describe multifunctional activities of CGRP that could potentially contribute to migraine. These include roles in light aversion, neurogenic inflammation, peripheral and central sensitization of nociceptive pathways, cortical spreading depression, and regulation of nitric oxide production. Yet clearly there will be many other contributing genes that could act in concert with CGRP. One candidate is pituitary adenylate cyclase-activating peptide (PACAP), which shares some of the same actions as CGRP, including the ability to induce migraine in migraineurs and light aversive behavior in rodents. Interestingly, both CGRP and PACAP act on receptors that share an accessory subunit called receptor activity modifying protein-1 (RAMP1). Thus, comparisons between the actions of these two migraine-inducing neuropeptides, CGRP and PACAP, may provide new insights into migraine pathophysiology.",
"title": ""
},
{
"docid": "223b74ccdafcd3fafa372cd6a4fbb6cb",
"text": "Android OS experiences a blazing popularity since the last few years. This predominant platform has established itself not only in the mobile world but also in the Internet of Things (IoT) devices. This popularity, however, comes at the expense of security, as it has become a tempting target of malicious apps. Hence, there is an increasing need for sophisticated, automatic, and portable malware detection solutions. In this paper, we propose MalDozer, an automatic Android malware detection and family attribution framework that relies on sequences classification using deep learning techniques. Starting from the raw sequence of the app's API method calls, MalDozer automatically extracts and learns the malicious and the benign patterns from the actual samples to detect Android malware. MalDozer can serve as a ubiquitous malware detection system that is not only deployed on servers, but also on mobile and even IoT devices. We evaluate MalDozer on multiple Android malware datasets ranging from 1 K to 33 K malware apps, and 38 K benign apps. The results show that MalDozer can correctly detect malware and attribute them to their actual families with an F1-Score of 96%e99% and a false positive rate of 0.06% e2%, under all tested datasets and settings. © 2018 The Author(s). Published by Elsevier Ltd on behalf of DFRWS. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).",
"title": ""
},
{
"docid": "589a96c8932c9657b2a2854de6390b1f",
"text": "In this paper, proactive resource allocation based on user location for point-to-point communication over fading channels is introduced, whereby the source must transmit a packet when the user requests it within a deadline of a single time slot. We introduce a prediction model in which the source predicts the request arrival $T_p$ slots ahead, where $T_p$ denotes the prediction window (PW) size. The source allocates energy to transmit some bits proactively for each time slot of the PW with the objective of reducing the transmission energy over the non-predictive case. The requests are predicted based on the user location utilizing the prior statistics about the user requests at each location. We also assume that the prediction is not perfect. We propose proactive scheduling policies to minimize the expected energy consumption required to transmit the requested packets under two different assumptions on the channel state information at the source. In the first scenario, offline scheduling, we assume the channel states are known a-priori at the source at the beginning of the PW. In the second scenario, online scheduling, it is assumed that the source has causal knowledge of the channel state. Numerical results are presented showing the gains achieved by using proactive scheduling policies compared with classical (reactive) networks. Simulation results also show that increasing the PW size leads to a significant reduction in the consumed transmission energy even with imperfect prediction.",
"title": ""
},
{
"docid": "2aaafa2da0ff13d91c37c5fd3c1c9ccc",
"text": "The development of pharmacotherapies for cocaine addiction has been disappointingly slow. However, new neurobiological knowledge of how the brain is changed by chronic pharmacological insult with cocaine is revealing novel targets for drug development. Certain drugs currently being tested in clinical trials tap into the underlying cocaine-induced neuroplasticity, including drugs promoting GABA or inhibiting glutamate transmission. Armed with rationales derived from a neurobiological perspective that cocaine addiction is a pharmacologically induced disease of neuroplasticity in brain circuits mediating normal reward learning, one can expect novel pharmacotherapies to emerge that directly target the biological pathology of addiction.",
"title": ""
},
{
"docid": "961cc1dc7063706f8f66fc136da41661",
"text": "From a theoretical perspective, most discussions of statistical learning (SL) have focused on the possible \"statistical\" properties that are the object of learning. Much less attention has been given to defining what \"learning\" is in the context of \"statistical learning.\" One major difficulty is that SL research has been monitoring participants' performance in laboratory settings with a strikingly narrow set of tasks, where learning is typically assessed offline, through a set of two-alternative-forced-choice questions, which follow a brief visual or auditory familiarization stream. Is that all there is to characterizing SL abilities? Here we adopt a novel perspective for investigating the processing of regularities in the visual modality. By tracking online performance in a self-paced SL paradigm, we focus on the trajectory of learning. In a set of three experiments we show that this paradigm provides a reliable and valid signature of SL performance, and it offers important insights for understanding how statistical regularities are perceived and assimilated in the visual modality. This demonstrates the promise of integrating different operational measures to our theory of SL.",
"title": ""
},
{
"docid": "2c92d42311f9708b7cb40f34551315e0",
"text": "This work characterizes electromagnetic excitation forces in interior permanent-magnet (IPM) brushless direct current (BLDC) motors and investigates their effects on noise and vibration. First, the electromagnetic excitations are classified into three sources: 1) so-called cogging torque, for which we propose an efficient technique of computation that takes into account saturation effects as a function of rotor position; 2) ripples of mutual and reluctance torque, for which we develop an equation to characterize the combination of space harmonics of inductances and flux linkages related to permanent magnets and time harmonics of current; and 3) fluctuation of attractive forces in the radial direction between the stator and rotor, for which we analyze contributions of electric currents as well as permanent magnets by the finite-element method. Then, the paper reports on an experimental investigation of influences of structural dynamic characteristics such as natural frequencies and mode shapes, as well as electromagnetic excitation forces, on noise and vibration in an IPM motor used in washing machines.",
"title": ""
},
{
"docid": "bd516d0b64e483d2210b20e4905ecd52",
"text": "With the rapid growth of the internet and the spread of the information contained therein, the volume of information available on the web is more than the ability of users to manage, capture and keep the information up to date. One solution to this problem are personalization and recommender systems. Recommender systems use the comments of the group of users so that, to help people in that group more effectively to identify their favorite items from a huge set of choices. In recent years, the web has seen very strong growth in the use of blogs. Considering the high volume of information in blogs, bloggers are in trouble to find the desired information and find blogs with similar thoughts and desires. Therefore, considering the mass of information for the blogs, a blog recommender system seems to be necessary. In this paper, by combining different methods of clustering and collaborative filtering, personalized recommender system for Persian blogs is suggested.",
"title": ""
},
{
"docid": "890a3fede570ee6777c0af7332aa0d8d",
"text": "As mobile instant messaging has become a major means of communication with the widespread use of smartphones, emoticons, symbols that are meant to indicate particular emotions in instant messages, have also developed into various forms. The primary purpose of this study is to classify the usage patterns of emoticons focusing on a particular variant known as \"stickers\" to observe individual and social characteristics of emoticon use and reinterpret the meaning of emoticons in instant messages. A qualitative approach with an in-depth semi-structured interview was used to uncover the motive in using emoticon stickers. The study suggests that besides using emoticon stickers for expressing emotions, users may have other motives: strategic and functional purposes.",
"title": ""
},
{
"docid": "d4aca467d0014b2c2359f5609a1a199b",
"text": "MATLAB is specifically designed for simulating dynamic systems. This paper describes a method of modelling impulse voltage generator using Simulink, an extension of MATLAB. The equations for modelling have been developed and a corresponding Simulink model has been constructed. It shows that Simulink program becomes very useful in studying the effect of parameter changes in the design to obtain the desired impulse voltages and waveshapes from an impulse generator.",
"title": ""
},
{
"docid": "d7e53788cbe072bdf26ea71c0a91c2b3",
"text": "3D mesh segmentation has become a crucial part of many applications in 3D shape analysis. In this paper, a comprehensive survey on 3D mesh segmentation methods is presented. Analysis of the existing methodologies is addressed taking into account a new categorization along with the performance evaluation frameworks which aim to support meaningful benchmarks not only qualitatively but also in a quantitative manner. This survey aims to capture the essence of current trends in 3D mesh segmentation.",
"title": ""
},
{
"docid": "f9ebbf082da4d72c32705b74d32e864c",
"text": "One of the most common tasks in medical imaging is semantic segmentation. Achieving this segmentation automatically has been an active area of research, but the task has been proven very challenging due to the large variation of anatomy across different patients. However, recent advances in deep learning have made it possible to significantly improve the performance of image recognition and semantic segmentation methods in the field of computer vision. Due to the data driven approaches of hierarchical feature learning in deep learning frameworks, these advances can be translated to medical images without much difficulty. Several variations of deep convolutional neural networks have been successfully applied to medical images. Especially fully convolutional architectures have been proven efficient for segmentation of 3D medical images. In this article, we describe how to build a 3D fully convolutional network (FCN) that can process 3D images in order to produce automatic semantic segmentations. The model is trained and evaluated on a clinical computed tomography (CT) dataset and shows stateof-the-art performance in multi-organ segmentation.",
"title": ""
},
{
"docid": "66d6f514c6bce09110780a1130b64dfe",
"text": "Today, with more competiveness of industries, markets, and working atmosphere in productive and service organizations what is very important for maintaining clients present, for attracting new clients and as a result increasing growth of success in organizations is having a suitable relation with clients. Bank is among organizations which are not an exception. Especially, at the moment according to increasing rate of banks` privatization, it can be argued that significance of attracting clients for banks is more than every time. The article tries to investigate effect of CRM on marketing performance in banking industry. The research method is applied and survey and descriptive. Statistical community of the research is 5 branches from Mellat Banks across Khoramabad Province and their clients. There are 45 personnel in this branch and according to Morgan Table the sample size was 40 people. Clients example was considered according to collected information, one questionnaire was designed for bank organization and another one was prepared for banks` clients in which reliability and validity are approved. The research result indicates that CRM is ineffective on marketing performance.",
"title": ""
},
{
"docid": "bd7f4a27628506eb707918c990704405",
"text": "A multi database model of distributed information retrieval is presented in which people are assumed to have access to many searchable text databases In such an environment full text information retrieval consists of discovering database contents ranking databases by their expected ability to satisfy the query searching a small number of databases and merging results returned by di erent databases This paper presents algorithms for each task It also discusses how to reorganize conventional test collections into multi database testbeds and evaluation methodologies for multi database experiments A broad and diverse group of experimental results is presented to demonstrate that the algorithms are e ective e cient robust and scalable",
"title": ""
},
{
"docid": "a25e2540e97918b954acbb6fdee57eb7",
"text": "Tweet streams provide a variety of real-life and real-time information on social events that dynamically change over time. Although social event detection has been actively studied, how to efficiently monitor evolving events from continuous tweet streams remains open and challenging. One common approach for event detection from text streams is to use single-pass incremental clustering. However, this approach does not track the evolution of events, nor does it address the issue of efficient monitoring in the presence of a large number of events. In this paper, we capture the dynamics of events using four event operations (create, absorb, split, and merge), which can be effectively used to monitor evolving events. Moreover, we propose a novel event indexing structure, called Multi-layer Inverted List (MIL), to manage dynamic event databases for the acceleration of large-scale event search and update. We thoroughly study the problem of nearest neighbour search using MIL based on upper bound pruning, along with incremental index maintenance. Extensive experiments have been conducted on a large-scale real-life tweet dataset. The results demonstrate the promising performance of our event indexing and monitoring methods on both efficiency and effectiveness.",
"title": ""
},
{
"docid": "096772152c72d8c8fb1650a825a47d2b",
"text": "The analysis of the topology and organization of brain networks is known to greatly benefit from network measures in graph theory. However, to evaluate dynamic changes of brain functional connectivity, more sophisticated quantitative metrics characterizing temporal evolution of brain topological features are required. To simplify conversion of time-varying brain connectivity to a static graph representation is straightforward but the procedure loses temporal information that could be critical in understanding the brain functions. To extend the understandings of functional segregation and integration to a dynamic fashion, we recommend dynamic graph metrics to characterise temporal changes of topological features of brain networks. This study investigated functional segregation and integration of brain networks over time by dynamic graph metrics derived from EEG signals during an experimental protocol: performance of complex flight simulation tasks with multiple levels of difficulty. We modelled time-varying brain functional connectivity as multi-layer networks, in which each layer models brain connectivity at time window $t+\\Delta t$ . Dynamic graph metrics were calculated to quantify temporal and topological properties of the network. Results show that brain networks under the performance of complex tasks reveal a dynamic small-world architecture with a number of frequently connected nodes or hubs, which supports the balance of information segregation and integration in brain over time. The results also show that greater cognitive workloads caused by more difficult tasks induced a more globally efficient but less clustered dynamic small-world functional network. Our study illustrates that task-related changes of functional brain network segregation and integration can be characterized by dynamic graph metrics.",
"title": ""
},
{
"docid": "a0285beac2a4e94f295df24033c61c7a",
"text": "EUCAST expert rules have been developed to assist clinical microbiologists and describe actions to be taken in response to specific antimicrobial susceptibility test results. They include recommendations on reporting, such as inferring susceptibility to other agents from results with one, suppression of results that may be inappropriate, and editing of results from susceptible to intermediate or resistant or from intermediate to resistant on the basis of an inferred resistance mechanism. They are based on current clinical and/or microbiological evidence. EUCAST expert rules also include intrinsic resistance phenotypes and exceptional resistance phenotypes, which have not yet been reported or are very rare. The applicability of EUCAST expert rules depends on the MIC breakpoints used to define the rules. Setting appropriate clinical breakpoints, based on treating patients and not on the detection of resistance mechanisms, may lead to modification of some expert rules in the future.",
"title": ""
},
{
"docid": "6dc9ebf5dea1c78e1688a560f241f804",
"text": "This paper reports finding from a study carried out in a remote rural area of Bangladesh during December 2000. Nineteen key informants were interviewed for collecting data on domestic violence against women. Each key informant provided information about 10 closest neighbouring ever-married women covering a total of 190 women. The questionnaire included information about frequency of physical violence, verbal abuse, and other relevant information, including background characteristics of the women and their husbands. 50.5% of the women were reported to be battered by their husbands and 2.1% by other family members. Beating by the husband was negatively related with age of husband: the odds of beating among women with husbands aged less than 30 years were six times of those with husbands aged 50 years or more. Members of micro-credit societies also had higher odds of being beaten than non-members. The paper discusses the possibility of community-centred interventions by raising awareness about the violation of human rights issues and other legal and psychological consequences to prevent domestic violence against women.",
"title": ""
},
{
"docid": "bf232413f2c1ba11bfa0ccbba3ed4010",
"text": "Software Defined Networking (SDN) is an emerging promising paradigm for network management because of its centralized network intelligence. However, the centralized control architecture of the software-defined networks (SDNs) brings novel challenges of reliability, scalability, fault tolerance and interoperability. In this paper, we proposed a novel clustered distributed controller architecture in the real setting of SDNs. The distributed cluster implementation comprises of multiple popular SDN controllers. The proposed mechanism is evaluated using a real world network topology running on top of an emulated SDN environment. The result shows that the proposed distributed controller clustering mechanism is able to significantly reduce the average latency from 8.1% to 1.6%, the packet loss from 5.22% to 4.15%, compared to distributed controller without clustering running on HP Virtual Application Network (VAN) SDN and Open Network Operating System (ONOS) controllers respectively. Moreover, proposed method also shows reasonable CPU utilization results. Furthermore, the proposed mechanism makes possible to handle unexpected load fluctuations while maintaining a continuous network operation, even when there is a controller failure. The paper is a potential contribution stepping towards addressing the issues of reliability, scalability, fault tolerance, and inter-operability.",
"title": ""
},
{
"docid": "da416ce58897f6f86d9cd7b0de422508",
"text": "In linear representation based face recognition (FR), it is expected that a discriminative dictionary can be learned from the training samples so that the query sample can be better represented for classification. On the other hand, dimensionality reduction is also an important issue for FR. It can not only reduce significantly the storage space of face images, but also enhance the discrimination of face feature. Existing methods mostly perform dimensionality reduction and dictionary learning separately, which may not fully exploit the discriminative information in the training samples. In this paper, we propose to learn jointly the projection matrix for dimensionality reduction and the discriminative dictionary for face representation. The joint learning makes the learned projection and dictionary better fit with each other so that a more effective face classification can be obtained. The proposed algorithm is evaluated on benchmark face databases in comparison with existing linear representation based methods, and the results show that the joint learning improves the FR rate, particularly when the number of training samples per class is small.",
"title": ""
}
] | scidocsrr |
d234c5f58bdf816d4e53862e5714cf5c | How Random Walks Can Help Tourism | [
{
"docid": "ae9469b80390e5e2e8062222423fc2cd",
"text": "Social media such as those residing in the popular photo sharing websites is attracting increasing attention in recent years. As a type of user-generated data, wisdom of the crowd is embedded inside such social media. In particular, millions of users upload to Flickr their photos, many associated with temporal and geographical information. In this paper, we investigate how to rank the trajectory patterns mined from the uploaded photos with geotags and timestamps. The main objective is to reveal the collective wisdom recorded in the seemingly isolated photos and the individual travel sequences reflected by the geo-tagged photos. Instead of focusing on mining frequent trajectory patterns from geo-tagged social media, we put more effort into ranking the mined trajectory patterns and diversifying the ranking results. Through leveraging the relationships among users, locations and trajectories, we rank the trajectory patterns. We then use an exemplar-based algorithm to diversify the results in order to discover the representative trajectory patterns. We have evaluated the proposed framework on 12 different cities using a Flickr dataset and demonstrated its effectiveness.",
"title": ""
},
{
"docid": "51d950dfb9f71b9c8948198c147b9884",
"text": "Collaborative filtering is the most popular approach to build recommender systems and has been successfully employed in many applications. However, it cannot make recommendations for so-called cold start users that have rated only a very small number of items. In addition, these methods do not know how confident they are in their recommendations. Trust-based recommendation methods assume the additional knowledge of a trust network among users and can better deal with cold start users, since users only need to be simply connected to the trust network. On the other hand, the sparsity of the user item ratings forces the trust-based approach to consider ratings of indirect neighbors that are only weakly trusted, which may decrease its precision. In order to find a good trade-off, we propose a random walk model combining the trust-based and the collaborative filtering approach for recommendation. The random walk model allows us to define and to measure the confidence of a recommendation. We performed an evaluation on the Epinions dataset and compared our model with existing trust-based and collaborative filtering methods.",
"title": ""
}
] | [
{
"docid": "573fd558864a9c05fef5935a6074c3bc",
"text": "Recurrent Neural Networks (RNNs) play a major role in the field of sequential learning, and have outperformed traditional algorithms on many benchmarks. Training deep RNNs still remains a challenge, and most of the state-of-the-art models are structured with a transition depth of 2-4 layers. Recurrent Highway Networks (RHNs) were introduced in order to tackle this issue. These have achieved state-of-the-art performance on a few benchmarks using a depth of 10 layers. However, the performance of this architecture suffers from a bottleneck, and ceases to improve when an attempt is made to add more layers. In this work, we analyze the causes for this, and postulate that the main source is the way that the information flows through time. We introduce a novel and simple variation for the RHN cell, called Highway State Gating (HSG), which allows adding more layers, while continuing to improve performance. By using a gating mechanism for the state, we allow the net to ”choose” whether to pass information directly through time, or to gate it. This mechanism also allows the gradient to back-propagate directly through time and, therefore, results in a slightly faster convergence. We use the Penn Treebank (PTB) dataset as a platform for empirical proof of concept. Empirical results show that the improvement due to Highway State Gating is for all depths, and as the depth increases, the improvement also increases.",
"title": ""
},
{
"docid": "0b29e6813c08637d8df1a472e0e323b6",
"text": "A significant number of promising applications for vehicular ad hoc networks (VANETs) are becoming a reality. Most of these applications require a variety of heterogenous content to be delivered to vehicles and to their on-board users. However, the task of content delivery in such dynamic and large-scale networks is easier said than done. In this article, we propose a classification of content delivery solutions applied to VANETs while highlighting their new characteristics and describing their underlying architectural design. First, the two fundamental building blocks that are part of an entire content delivery system are identified: replica allocation and content delivery. The related solutions are then classified according to their architectural definition. Within each category, solutions are described based on the techniques and strategies that have been adopted. As result, we present an in-depth discussion on the architecture, techniques, and strategies adopted by studies in the literature that tackle problems related to vehicular content delivery networks.",
"title": ""
},
{
"docid": "1d1f14cb78693e56d014c89eacfcc3ef",
"text": "We undertook a meta-analysis of six Crohn's disease genome-wide association studies (GWAS) comprising 6,333 affected individuals (cases) and 15,056 controls and followed up the top association signals in 15,694 cases, 14,026 controls and 414 parent-offspring trios. We identified 30 new susceptibility loci meeting genome-wide significance (P < 5 × 10−8). A series of in silico analyses highlighted particular genes within these loci and, together with manual curation, implicated functionally interesting candidate genes including SMAD3, ERAP2, IL10, IL2RA, TYK2, FUT2, DNMT3A, DENND1B, BACH2 and TAGAP. Combined with previously confirmed loci, these results identify 71 distinct loci with genome-wide significant evidence for association with Crohn's disease.",
"title": ""
},
{
"docid": "eee9b5301c83faf4fe8fd786f0d99efd",
"text": "We present a named entity recognition and classification system that uses only probabilistic character-level features. Classifications by multiple orthographic tries are combined in a hidden Markov model framework to incorporate both internal and contextual evidence. As part of the system, we perform a preprocessing stage in which capitalisation is restored to sentence-initial and all-caps words with high accuracy. We report f-values of 86.65 and 79.78 for English, and 50.62 and 54.43 for the German datasets.",
"title": ""
},
{
"docid": "408d3db3b2126990611fdc3a62a985ea",
"text": "Multi-choice reading comprehension is a challenging task, which involves the matching between a passage and a question-answer pair. This paper proposes a new co-matching approach to this problem, which jointly models whether a passage can match both a question and a candidate answer. Experimental results on the RACE dataset demonstrate that our approach achieves state-of-the-art performance.",
"title": ""
},
{
"docid": "62a7c4bd564a7741cd966f3e11487236",
"text": "This paper presents an implementation method for the people counting system which detects and tracks moving people using a fixed single camera. The main contribution of this paper is the novel head detection method based on body’s geometry. A novel body descriptor is proposed for finding people’s head which is defined as Body Feature Rectangle (BFR). First, a vertical projection method is used to get the line which divides touching persons into individuals. Second, a special inscribed rectangle is found to locate the neck position which describes the torso area. Third, locations of people’s heads can be got according to its neck-positions. Last, a robust counting method named MEA is proposed to get the real counts of walking people flows. The proposed method can divide the multiple-people image into individuals whatever people merge with each other or not. Moreover, the passing people can be counted accurately under the influence of wearing hats. Experimental results show that our proposed method can nearly reach to an accuracy of 100% if the number of a people-merging pattern is less than six. Keywords-People Counting; Head Detection; BFR; People-flow Tracking",
"title": ""
},
{
"docid": "bc3c7f4fb6d9a2fd12fb702a69a35b23",
"text": "Vestibular migraine is a chameleon among the episodic vertigo syndromes because considerable variation characterizes its clinical manifestation. The attacks may last from seconds to days. About one-third of patients presents with monosymptomatic attacks of vertigo or dizziness without headache or other migrainous symptoms. During attacks most patients show spontaneous or positional nystagmus and in the attack-free interval minor ocular motor and vestibular deficits. Women are significantly more often affected than men. Symptoms may begin at any time in life, with the highest prevalence in young adults and between the ages of 60 and 70. Over the last 10 years vestibular migraine has evolved into a medical entity in dizziness units. It is the most common cause of spontaneous recurrent episodic vertigo and accounts for approximately 10% of patients with vertigo and dizziness. Its broad spectrum poses a diagnostic problem of how to rule out Menière's disease or vestibular paroxysmia. Vestibular migraine should be included in the International Headache Classification of Headache Disorders (ICHD) as a subcategory of migraine. It should, however, be kept separate and distinct from basilar-type migraine and benign paroxysmal vertigo of childhood. We prefer the term \"vestibular migraine\" to \"migrainous vertigo,\" because the latter may also refer to various vestibular and non-vestibular symptoms. Antimigrainous medication to treat the single attack and to prevent recurring attacks appears to be effective, but the published evidence is weak. A randomized, double-blind, placebo-controlled study is required to evaluate medical treatment of this condition.",
"title": ""
},
{
"docid": "5f1a273e8419836388faa49df63330c4",
"text": "In this paper, the traditional k-modes clustering algorithm is extended by weighting attribute value matches in dissimilarity computation. The use of attribute value weighting technique makes it possible to generate clusters with stronger intra-similarities, and therefore achieve better clustering performance. Experimental results on real life datasets show that these value weighting based k-modes algorithms are superior to the standard k-modes algorithm with respect to clustering accuracy.",
"title": ""
},
{
"docid": "8a42bc2dec684cf087d19bbbd2e815f8",
"text": "Carefully managing the presentation of self via technology is a core practice on all modern social media platforms. Recently, selfies have emerged as a new, pervasive genre of identity performance. In many ways unique, selfies bring us fullcircle to Goffman—blending the online and offline selves together. In this paper, we take an empirical, Goffman-inspired look at the phenomenon of selfies. We report a large-scale, mixed-method analysis of the categories in which selfies appear on Instagram—an online community comprising over 400M people. Applying computer vision and network analysis techniques to 2.5M selfies, we present a typology of emergent selfie categories which represent emphasized identity statements. To the best of our knowledge, this is the first large-scale, empirical research on selfies. We conclude, contrary to common portrayals in the press, that selfies are really quite ordinary: they project identity signals such as wealth, health and physical attractiveness common to many online media, and to offline life.",
"title": ""
},
{
"docid": "81b82ae24327c7d5c0b0bf4a04904826",
"text": "AIM\nTo identify key predictors and moderators of mental health 'help-seeking behavior' in adolescents.\n\n\nBACKGROUND\nMental illness is highly prevalent in adolescents and young adults; however, individuals in this demographic group are among the least likely to seek help for such illnesses. Very little quantitative research has examined predictors of help-seeking behaviour in this demographic group.\n\n\nDESIGN\nA cross-sectional design was used.\n\n\nMETHODS\nA group of 180 volunteers between the ages of 17-25 completed a survey designed to measure hypothesized predictors and moderators of help-seeking behaviour. Predictors included a range of health beliefs, personality traits and attitudes. Data were collected in August 2010 and were analysed using two standard and three hierarchical multiple regression analyses.\n\n\nFINDINGS\nThe standard multiple regression analyses revealed that extraversion, perceived benefits of seeking help, perceived barriers to seeking help and social support were direct predictors of help-seeking behaviour. Tests of moderated relationships (using hierarchical multiple regression analyses) indicated that perceived benefits were more important than barriers in predicting help-seeking behaviour. In addition, perceived susceptibility did not predict help-seeking behaviour unless individuals were health conscious to begin with or they believed that they would benefit from help.\n\n\nCONCLUSION\nA range of personality traits, attitudes and health beliefs can predict help-seeking behaviour for mental health problems in adolescents. The variable 'Perceived Benefits' is of particular importance as it is: (1) a strong and robust predictor of help-seeking behaviour; and (2) a factor that can theoretically be modified based on health promotion programmes.",
"title": ""
},
{
"docid": "2f04cd1b83b2ec17c9930515e8b36b95",
"text": "Traditionally, visualization design assumes that the e↵ectiveness of visualizations is based on how much, and how clearly, data are presented. We argue that visualization requires a more nuanced perspective. Data are not ends in themselves, but means to an end (such as generating knowledge or assisting in decision-making). Focusing on the presentation of data per se can result in situations where these higher goals are ignored. This is especially the case for situations where cognitive or perceptual biases make the presentation of “just” the data as misleading as willful distortion. We argue that we need to de-sanctify data, and occasionally promote designs which distort or obscure data in service of understanding. We discuss examples of beneficial embellishment, distortion, and obfuscation in visualization, and argue that these examples are representative of a wider class of techniques for going beyond simplistic presentations of data.",
"title": ""
},
{
"docid": "ae94106e02e05a38aa50842d7978c2c0",
"text": "Fast and reliable face and facial feature detection are required abilities for any Human Computer Interaction approach based on Computer Vision. Since the publication of the Viola-Jones object detection framework and the more recent open source implementation, an increasing number of applications have appeared, particularly in the context of facial processing. In this respect, the OpenCV community shares a collection of public domain classifiers for this scenario. However, as far as we know these classifiers have never been evaluated and/or compared. In this paper we analyze the individual performance of all those public classifiers getting the best performance for each target. These results are valid to define a baseline for future approaches. Additionally we propose a simple hierarchical combination of those classifiers to increase the facial feature detection rate while reducing the face false detection rate.",
"title": ""
},
{
"docid": "db26de1462b3e8e53bf54846849ae2c2",
"text": "The design and development of process-aware information systems is often supported by specifying requirements as business process models. Although this approach is generally accepted as an effective strategy, it remains a fundamental challenge to adequately validate these models given the diverging skill set of domain experts and system analysts. As domain experts often do not feel confident in judging the correctness and completeness of process models that system analysts create, the validation often has to regress to a discourse using natural language. In order to support such a discourse appropriately, so-called verbalization techniques have been defined for different types of conceptual models. However, there is currently no sophisticated technique available that is capable of generating natural-looking text from process models. In this paper, we address this research gap and propose a technique for generating natural language texts from business process models. A comparison with manually created process descriptions demonstrates that the generated texts are superior in terms of completeness, structure, and linguistic complexity. An evaluation with users further demonstrates that the texts are very understandable and effectively allow the reader to infer the process model semantics. Hence, the generated texts represent a useful input for process model validation.",
"title": ""
},
{
"docid": "9ddddb7775122ed13544b37c70607507",
"text": "We present results from a multi-generational study of collocated group console gaming. We examine the intergenerational gaming practices of four generations of gamers, from ages 3 to 83 and, in particular, the roles that gamers of different generations take on when playing together in groups. Our findings highlight the extent to which existing gaming technologies are amenable to interactions within collocated intergenerational groups and the broader set of roles that have emerged in these computer-mediated interactions than have previously been documented by studies of more traditional collocated, intergenerational interactions. We articulate attributes of the games that encourage intergenerational interaction.",
"title": ""
},
{
"docid": "1c365e6256ae1c404c6f3f145eb04924",
"text": "Progress in signal processing continues to enable welcome advances in high-frequency (HF) radio performance and efficiency. The latest data waveforms use channels wider than 3 kHz to boost data throughput and robustness. This has driven the need for a more capable Automatic Link Establishment (ALE) system that links faster and adapts the wideband HF (WBHF) waveform to efficiently use available spectrum. In this paper, we investigate the possibility and advantages of using various non-scanning ALE techniques with the new wideband ALE (WALE) to further improve spectrum awareness and linking speed.",
"title": ""
},
{
"docid": "24bd1a178fde153c8ee8a4fa332611cf",
"text": "This paper proposes a comprehensive methodology for the design of a controllable electric vehicle charger capable of making the most of the interaction with an autonomous smart energy management system (EMS) in a residential setting. Autonomous EMSs aim achieving the potential benefits associated with energy exchanges between consumers and the grid, using bidirectional and power-controllable electric vehicle chargers. A suitable design for a controllable charger is presented, including the sizing of passive elements and controllers. This charger has been implemented using an experimental setup with a digital signal processor to validate its operation. The experimental results obtained foresee an adequate interaction between the proposed charger and a compatible autonomous EMS in a typical residential setting.",
"title": ""
},
{
"docid": "c773efb805899ee9e365b5f19ddb40bc",
"text": "In this paper, we overview the 2009 Simulated Car Racing Championship-an event comprising three competitions held in association with the 2009 IEEE Congress on Evolutionary Computation (CEC), the 2009 ACM Genetic and Evolutionary Computation Conference (GECCO), and the 2009 IEEE Symposium on Computational Intelligence and Games (CIG). First, we describe the competition regulations and the software framework. Then, the five best teams describe the methods of computational intelligence they used to develop their drivers and the lessons they learned from the participation in the championship. The organizers provide short summaries of the other competitors. Finally, we summarize the championship results, followed by a discussion about what the organizers learned about 1) the development of high-performing car racing controllers and 2) the organization of scientific competitions.",
"title": ""
},
{
"docid": "90b502cb72488529ec0d389ca99b57b8",
"text": "The cross-entropy loss commonly used in deep learning is closely related to the defining properties of optimal representations, but does not enforce some of the key properties. We show that this can be solved by adding a regularization term, which is in turn related to injecting multiplicative noise in the activations of a Deep Neural Network, a special case of which is the common practice of dropout. We show that our regularized loss function can be efficiently minimized using Information Dropout, a generalization of dropout rooted in information theoretic principles that automatically adapts to the data and can better exploit architectures of limited capacity. When the task is the reconstruction of the input, we show that our loss function yields a Variational Autoencoder as a special case, thus providing a link between representation learning, information theory and variational inference. Finally, we prove that we can promote the creation of optimal disentangled representations simply by enforcing a factorized prior, a fact that has been observed empirically in recent work. Our experiments validate the theoretical intuitions behind our method, and we find that Information Dropout achieves a comparable or better generalization performance than binary dropout, especially on smaller models, since it can automatically adapt the noise to the structure of the network, as well as to the test sample.",
"title": ""
},
{
"docid": "2ae96a524ba3b6c43ea6bfa112f71a30",
"text": "Accurate quantification of gluconeogenic flux following alcohol ingestion in overnight-fasted humans has yet to be reported. [2-13C1]glycerol, [U-13C6]glucose, [1-2H1]galactose, and acetaminophen were infused in normal men before and after the consumption of 48 g alcohol or a placebo to quantify gluconeogenesis, glycogenolysis, hepatic glucose production, and intrahepatic gluconeogenic precursor availability. Gluconeogenesis decreased 45% vs. the placebo (0.56 ± 0.05 to 0.44 ± 0.04 mg ⋅ kg-1 ⋅ min-1vs. 0.44 ± 0.05 to 0.63 ± 0.09 mg ⋅ kg-1 ⋅ min-1, respectively, P < 0.05) in the 5 h after alcohol ingestion, and total gluconeogenic flux was lower after alcohol compared with placebo. Glycogenolysis fell over time after both the alcohol and placebo cocktails, from 1.46-1.47 mg ⋅ kg-1 ⋅ min-1to 1.35 ± 0.17 mg ⋅ kg-1 ⋅ min-1(alcohol) and 1.26 ± 0.20 mg ⋅ kg-1 ⋅ min-1, respectively (placebo, P < 0.05 vs. baseline). Hepatic glucose output decreased 12% after alcohol consumption, from 2.03 ± 0.21 to 1.79 ± 0.21 mg ⋅ kg-1 ⋅ min-1( P < 0.05 vs. baseline), but did not change following the placebo. Estimated intrahepatic gluconeogenic precursor availability decreased 61% following alcohol consumption ( P < 0.05 vs. baseline) but was unchanged after the placebo ( P < 0.05 between treatments). We conclude from these results that gluconeogenesis is inhibited after alcohol consumption in overnight-fasted men, with a somewhat larger decrease in availability of gluconeogenic precursors but a smaller effect on glucose production and no effect on plasma glucose concentrations. Thus inhibition of flux into the gluconeogenic precursor pool is compensated by changes in glycogenolysis, the fate of triose-phosphates, and peripheral tissue utilization of plasma glucose.",
"title": ""
}
] | scidocsrr |
bac3f7c9d829ac0a042e0b35e95ff424 | Type-2 fuzzy logic systems for temperature evaluation in ladle furnace | [
{
"docid": "fdbca2e02ac52afd687331048ddee7d3",
"text": "Type-2 fuzzy sets let us model and minimize the effects of uncertainties in rule-base fuzzy logic systems. However, they are difficult to understand for a variety of reasons which we enunciate. In this paper, we strive to overcome the difficulties by: 1) establishing a small set of terms that let us easily communicate about type-2 fuzzy sets and also let us define such sets very precisely, 2) presenting a new representation for type-2 fuzzy sets, and 3) using this new representation to derive formulas for union, intersection and complement of type-2 fuzzy sets without having to use the Extension Principle.",
"title": ""
},
{
"docid": "c4ccb674a07ba15417f09b81c1255ba8",
"text": "Real world environments are characterized by high levels of linguistic and numerical uncertainties. A Fuzzy Logic System (FLS) is recognized as an adequate methodology to handle the uncertainties and imprecision available in real world environments and applications. Since the invention of fuzzy logic, it has been applied with great success to numerous real world applications such as washing machines, food processors, battery chargers, electrical vehicles, and several other domestic and industrial appliances. The first generation of FLSs were type-1 FLSs in which type-1 fuzzy sets were employed. Later, it was found that using type-2 FLSs can enable the handling of higher levels of uncertainties. Recent works have shown that interval type-2 FLSs can outperform type-1 FLSs in the applications which encompass high uncertainty levels. However, the majority of interval type-2 FLSs handle the linguistic and input numerical uncertainties using singleton interval type-2 FLSs that mix the numerical and linguistic uncertainties to be handled only by the linguistic labels type-2 fuzzy sets. This ignores the fact that if input numerical uncertainties were present, they should affect the incoming inputs to the FLS. Even in the papers that employed non-singleton type-2 FLSs, the input signals were assumed to have a predefined shape (mostly Gaussian or triangular) which might not reflect the real uncertainty distribution which can vary with the associated measurement. In this paper, we will present a new approach which is based on an adaptive non-singleton interval type-2 FLS where the numerical uncertainties will be modeled and handled by non-singleton type-2 fuzzy inputs and the linguistic uncertainties will be handled by interval type-2 fuzzy sets to represent the antecedents’ linguistic labels. The non-singleton type-2 fuzzy inputs are dynamic and they are automatically generated from data and they do not assume a specific shape about the distribution associated with the given sensor. We will present several real world experiments using a real world robot which will show how the proposed type-2 non-singleton type-2 FLS will produce a superior performance to its singleton type-1 and type-2 counterparts when encountering high levels of uncertainties.",
"title": ""
},
{
"docid": "20f43c14feaf2da1e8999403bf350855",
"text": "In this paper we propose a new approach to genetic optimization of modular neural networks with fuzzy response integration. The architecture of the modular neural network and the structure of the fuzzy system (for response integration) are designed using genetic algorithms. The proposed methodology is applied to the case of human recognition based on three biometric measures, namely iris, ear, and voice. Experimental results show that optimal modular neural networks can be designed with the use of genetic algorithms and as a consequence the recognition rates of such networks can be improved significantly. In the case of optimization of the fuzzy system for response integration, the genetic algorithm not only adjusts the number of membership functions and rules, but also allows the variation on the type of logic (type-1 or type-2) and the change in the inference model (switching to Mamdani model or Sugeno model). Another interesting finding of this work is that when human recognition is performed under noisy conditions, the response integrators of the modular networks constructed by the genetic algorithm are found to be optimal when using type-2 fuzzy logic. This could have been expected as there has been experimental evidence from previous works that type-2 fuzzy logic is better suited to model higher levels of uncertainty. 2012 Elsevier Inc. All rights reserved.",
"title": ""
}
] | [
{
"docid": "e3f4add37a083f61feda8805478d0729",
"text": "The evaluation of the effects of different media ionic strengths and pH on the release of hydrochlorothiazide, a poorly soluble drug, and diltiazem hydrochloride, a cationic and soluble drug, from a gel forming hydrophilic polymeric matrix was the objective of this study. The drug to polymer ratio of formulated tablets was 4:1. Hydrochlorothiazide or diltiazem HCl extended release (ER) matrices containing hypromellose (hydroxypropyl methylcellulose (HPMC)) were evaluated in media with a pH range of 1.2-7.5, using an automated USP type III, Bio-Dis dissolution apparatus. The ionic strength of the media was varied over a range of 0-0.4M to simulate the gastrointestinal fed and fasted states and various physiological pH conditions. Sodium chloride was used for ionic regulation due to its ability to salt out polymers in the midrange of the lyotropic series. The results showed that the ionic strength had a profound effect on the drug release from the diltiazem HCl K100LV matrices. The K4M, K15M and K100M tablets however withstood the effects of media ionic strength and showed a decrease in drug release to occur with an increase in ionic strength. For example, drug release after the 1h mark for the K100M matrices in water was 36%. Drug release in pH 1.2 after 1h was 30%. An increase of the pH 1.2 ionic strength to 0.4M saw a reduction of drug release to 26%. This was the general trend for the K4M and K15M matrices as well. The similarity factor f2 was calculated using drug release in water as a reference. Despite similarity occurring for all the diltiazem HCl matrices in the pH 1.2 media (f2=64-72), increases of ionic strength at 0.2M and 0.4M brought about dissimilarity. The hydrochlorothiazide tablet matrices showed similarity at all the ionic strength tested for all polymers (f2=56-81). The values of f2 however reduced with increasing ionic strengths. DSC hydration results explained the hydrochlorothiazide release from their HPMC matrices. There was an increase in bound water as ionic strengths increased. Texture analysis was employed to determine the gel strength and also to explain the drug release for the diltiazem hydrochloride. This methodology can be used as a valuable tool for predicting potential ionic effects related to in vivo fed and fasted states on drug release from hydrophilic ER matrices.",
"title": ""
},
{
"docid": "d9c514f3e1089f258732eef4a949fe55",
"text": "Shading is a tedious process for artists involved in 2D cartoon and manga production given the volume of contents that the artists have to prepare regularly over tight schedule. While we can automate shading production with the presence of geometry, it is impractical for artists to model the geometry for every single drawing. In this work, we aim to automate shading generation by analyzing the local shapes, connections, and spatial arrangement of wrinkle strokes in a clean line drawing. By this, artists can focus more on the design rather than the tedious manual editing work, and experiment with different shading effects under different conditions. To achieve this, we have made three key technical contributions. First, we model five perceptual cues by exploring relevant psychological principles to estimate the local depth profile around strokes. Second, we formulate stroke interpretation as a global optimization model that simultaneously balances different interpretations suggested by the perceptual cues and minimizes the interpretation discrepancy. Lastly, we develop a wrinkle-aware inflation method to generate a height field for the surface to support the shading region computation. In particular, we enable the generation of two commonly-used shading styles: 3D-like soft shading and manga-style flat shading.",
"title": ""
},
{
"docid": "2923ea4e17567b06b9d8e0e9f1650e55",
"text": "A new compact two-segments dielectric resonator antenna (TSDR) for ultrawideband (UWB) application is presented and studied. The design consists of a thin monopole printed antenna loaded with two dielectric resonators with different dielectric constant. By applying a combination of U-shaped feedline and modified TSDR, proper radiation characteristics are achieved. The proposed antenna provides an ultrawide impedance bandwidth, high radiation efficiency, and compact antenna with an overall size of 18 × 36 × 11 mm . From the measurement results, it is found that the realized dielectric resonator antenna with good radiation characteristics provides an ultrawide bandwidth of about 110%, covering a range from 3.14 to 10.9 GHz, which covers UWB application.",
"title": ""
},
{
"docid": "bcd47a79eeb49a34253d3c0de236f768",
"text": "This is the second of five papers in the child survival series. The first focused on continuing high rates of child mortality (over 10 million each year) from preventable causes: diarrhoea, pneumonia, measles, malaria, HIV/AIDS, the underlying cause of undernutrition, and a small group of causes leading to neonatal deaths. We review child survival interventions feasible for delivery at high coverage in low-income settings, and classify these as level 1 (sufficient evidence of effect), level 2 (limited evidence), or level 3 (inadequate evidence). Our results show that at least one level-1 intervention is available for preventing or treating each main cause of death among children younger than 5 years, apart from birth asphyxia, for which a level-2 intervention is available. There is also limited evidence for several other interventions. However, global coverage for most interventions is below 50%. If level 1 or 2 interventions were universally available, 63% of child deaths could be prevented. These findings show that the interventions needed to achieve the millennium development goal of reducing child mortality by two-thirds by 2015 are available, but that they are not being delivered to the mothers and children who need them.",
"title": ""
},
{
"docid": "8d104169f3862bc7c54d5932024ed9f6",
"text": "Integer optimization problems are concerned with the efficient allocation of limited resources to meet a desired objective when some of the resources in question can only be divided into discrete parts. In such cases, the divisibility constraints on these resources, which may be people, machines, or other discrete inputs, may restrict the possible alternatives to a finite set. Nevertheless, there are usually too many alternatives to make complete enumeration a viable option for instances of realistic size. For example, an airline may need to determine crew schedules that minimize the total operating cost; an automotive manufacturer may want to determine the optimal mix of models to produce in order to maximize profit; or a flexible manufacturing facility may want to schedule production for a plant without knowing precisely what parts will be needed in future periods. In today’s changing and competitive industrial environment, the difference between ad hoc planning methods and those that use sophisticated mathematical models to determine an optimal course of action can determine whether or not a company survives.",
"title": ""
},
{
"docid": "77e2aac8b42b0b9263278280d867cb40",
"text": "This paper explores the problem of breast tissue classification of microscopy images. Based on the predominant cancer type the goal is to classify images into four categories of normal, benign, in situ carcinoma, and invasive carcinoma. Given a suitable training dataset, we utilize deep learning techniques to address the classification problem. Due to the large size of each image in the training dataset, we propose a patch-based technique which consists of two consecutive convolutional neural networks. The first “patch-wise” network acts as an auto-encoder that extracts the most salient features of image patches while the second “image-wise” network performs classification of the whole image. The first network is pre-trained and aimed at extracting local information while the second network obtains global information of an input image. We trained the networks using the ICIAR 2018 grand challenge on BreAst Cancer Histology (BACH) dataset. The proposed method yields 95% accuracy on the validation set compared to previously reported 77% accuracy rates in the literature. Our code is publicly available at https://github.com/ImagingLab/ICIAR2018.",
"title": ""
},
{
"docid": "8c575ae46ac2969c19a841c7d9a8cb5a",
"text": "Constrained Local Models (CLMs) are a well-established family of methods for facial landmark detection. However, they have recently fallen out of favor to cascaded regressionbased approaches. This is in part due to the inability of existing CLM local detectors to model the very complex individual landmark appearance that is affected by expression, illumination, facial hair, makeup, and accessories. In our work, we present a novel local detector – Convolutional Experts Network (CEN) – that brings together the advantages of neural architectures and mixtures of experts in an end-toend framework. We further propose a Convolutional Experts Constrained Local Model (CE-CLM) algorithm that uses CEN as a local detector. We demonstrate that our proposed CE-CLM algorithm outperforms competitive state-of-the-art baselines for facial landmark detection by a large margin, especially on challenging profile images.",
"title": ""
},
{
"docid": "87cfc5cad31751fd89c68dc9557eb33f",
"text": "his paper presents a low-voltage (LV) (1.0 V) and low-power (LP) (40 μW) inverter based operational transconductance amplifier (OTA) using FGMOS (Floating-Gate MOS) transistor and its application in Gm-C filters. The OTA was designed in a 0.18 μm CMOS process. The simulation results of the proposed OTA demonstrate an open loop gain of 30.2 dB and a unity gain frequency of 942 MHz. In this OTA, the relative tuning range of 50 is achieved. To demonstrate the use of the proposed OTA in practical circuits, the second-order filter was designed. The designed filter has a good tuning range from 100 kHz to 5.6 MHz which is suitable for the wireless specifications of Bluetooth (650 kHz), CDMA2000 (700 kHz) and Wideband CDMA (2.2 MHz). The active area occupied by the designed filter on the silicon is and the maximum power consumption of this filter is 160 μW.",
"title": ""
},
{
"docid": "6018c84c0e5666b5b4615766a5bb98a9",
"text": "We introduce instancewise feature selection as a methodology for model interpretation. Our method is based on learning a function to extract a subset of features that are most informative for each given example. This feature selector is trained to maximize the mutual information between selected features and the response variable, where the conditional distribution of the response variable given the input is the model to be explained. We develop an efficient variational approximation to the mutual information, and show the effectiveness of our method on a variety of synthetic and real data sets using both quantitative metrics and human evaluation.",
"title": ""
},
{
"docid": "0b0b313c16697e303522fef245d97ba8",
"text": "The development of novel targeted therapies with acceptable safety profiles is critical to successful cancer outcomes with better survival rates. Immunotherapy offers promising opportunities with the potential to induce sustained remissions in patients with refractory disease. Recent dramatic clinical responses in trials with gene modified T cells expressing chimeric antigen receptors (CARs) in B-cell malignancies have generated great enthusiasm. This therapy might pave the way for a potential paradigm shift in the way we treat refractory or relapsed cancers. CARs are genetically engineered receptors that combine the specific binding domains from a tumor targeting antibody with T cell signaling domains to allow specifically targeted antibody redirected T cell activation. Despite current successes in hematological cancers, we are only in the beginning of exploring the powerful potential of CAR redirected T cells in the control and elimination of resistant, metastatic, or recurrent nonhematological cancers. This review discusses the application of the CAR T cell therapy, its challenges, and strategies for successful clinical and commercial translation.",
"title": ""
},
{
"docid": "80a86ff7e26bb29cf919b22433f8b6b4",
"text": "Despite the widespread acceptance and use of pornography, much remains unknown about the heterogeneity among consumers of pornography. Using a sample of 457 college students from a midwestern university in the United States, a latent profile analysis was conducted to identify unique classifications of pornography users considering motivations of pornography use, level of pornography use, age of user, degree of pornography acceptance, and religiosity. Results indicated three classes of pornography users: Porn Abstainers (n 1⁄4 285), Auto-Erotic Porn Users (n 1⁄4 85), and Complex Porn Users (n 1⁄4 87). These three classes of pornography use are carefully defined. The odds of membership in these three unique classes of pornography users was significantly distinguished by relationship status, selfesteem, and gender. These results expand what is known about pornography users by providing a more person-centered approach that is more nuanced in understanding pornography use. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit",
"title": ""
},
{
"docid": "5c88fae140f343ae3002685ab96fd848",
"text": "Function recovery is a critical step in many binary analysis and instrumentation tasks. Existing approaches rely on commonly used function prologue patterns to recognize function starts, and possibly epilogues for the ends. However, this approach is not robust when dealing with different compilers, compiler versions, and compilation switches. Although machine learning techniques have been proposed, the possibility of errors still limits their adoption. In this work, we present a novel function recovery technique that is based on static analysis. Evaluations have shown that we can produce very accurate results that are applicable to a wider set of applications.",
"title": ""
},
{
"docid": "5c31ed81a9c8d6463ce93890e38ad7b5",
"text": "IBM Watson is a cognitive computing system capable of question answering in natural languages. It is believed that IBM Watson can understand large corpora and answer relevant questions more effectively than any other question-answering system currently available. To unleash the full power of Watson, however, we need to train its instance with a large number of wellprepared question-answer pairs. Obviously, manually generating such pairs in a large quantity is prohibitively time consuming and significantly limits the efficiency of Watson’s training. Recently, a large-scale dataset of over 30 million question-answer pairs was reported. Under the assumption that using such an automatically generated dataset could relieve the burden of manual question-answer generation, we tried to use this dataset to train an instance of Watson and checked the training efficiency and accuracy. According to our experiments, using this auto-generated dataset was effective for training Watson, complementing manually crafted question-answer pairs. To the best of the authors’ knowledge, this work is the first attempt to use a largescale dataset of automatically generated questionanswer pairs for training IBM Watson. We anticipate that the insights and lessons obtained from our experiments will be useful for researchers who want to expedite Watson training leveraged by automatically generated question-answer pairs.",
"title": ""
},
{
"docid": "1efeab8c3036ad5ec1b4dc63a857b392",
"text": "In this paper, we present a motion planning framework for a fully deployed autonomous unmanned aerial vehicle which integrates two sample-based motion planning techniques, Probabilistic Roadmaps and Rapidly Exploring Random Trees. Additionally, we incorporate dynamic reconfigurability into the framework by integrating the motion planners with the control kernel of the UAV in a novel manner with little modification to the original algorithms. The framework has been verified through simulation and in actual flight. Empirical results show that these techniques used with such a framework offer a surprisingly efficient method for dynamically reconfiguring a motion plan based on unforeseen contingencies which may arise during the execution of a plan. The framework is generic and can be used for additional platforms.",
"title": ""
},
{
"docid": "efe74721de3eda130957ce26435375a3",
"text": "Internet of Things (IoT) has been given a lot of emphasis since the 90s when it was first proposed as an idea of interconnecting different electronic devices through a variety of technologies. However, during the past decade IoT has rapidly been developed without appropriate consideration of the profound security goals and challenges involved. This study explores the security aims and goals of IoT and then provides a new classification of different types of attacks and countermeasures on security and privacy. It then discusses future security directions and challenges that need to be addressed to improve security concerns over such networks and aid in the wider adoption of IoT by masses.",
"title": ""
},
{
"docid": "a81e4b95dfaa7887f66066343506d35f",
"text": "The purpose of making a “biobetter” biologic is to improve on the salient characteristics of a known biologic for which there is, minimally, clinical proof of concept or, maximally, marketed product data. There already are several examples in which second-generation or biobetter biologics have been generated by improving the pharmacokinetic properties of an innovative drug, including Neulasta® [a PEGylated, longer-half-life version of Neupogen® (filgrastim)] and Aranesp® [a longer-half-life version of Epogen® (epoetin-α)]. This review describes the use of protein fusion technologies such as Fc fusion proteins, fusion to human serum albumin, fusion to carboxy-terminal peptide, and other polypeptide fusion approaches to make biobetter drugs with more desirable pharmacokinetic profiles.",
"title": ""
},
{
"docid": "d80fc668073878c476bdf3997b108978",
"text": "Automotive information services utilizing vehicle data are rapidly expanding. However, there is currently no data centric software architecture that takes into account the scale and complexity of data involving numerous sensors. To address this issue, the authors have developed an in-vehicle datastream management system for automotive embedded systems (eDSMS) as data centric software architecture. Providing the data stream functionalities to drivers and passengers are highly beneficial. This paper describes a vehicle embedded data stream processing platform for Android devices. The platform enables flexible query processing with a dataflow query language and extensible operator functions in the query language on the platform. The platform employs architecture independent of data stream schema in in-vehicle eDSMS to facilitate smoother Android application program development. This paper presents specifications and design of the query language and APIs of the platform, evaluate it, and discuss the results. Keywords—Android, automotive, data stream management system",
"title": ""
},
{
"docid": "d8fc5a8bc075343b2e70a9b441ecf6e5",
"text": "With the explosive increase in mobile apps, more and more threats migrate from traditional PC client to mobile device. Compared with traditional Win+Intel alliance in PC, Android+ARM alliance dominates in Mobile Internet, the apps replace the PC client software as the major target of malicious usage. In this paper, to improve the security status of current mobile apps, we propose a methodology to evaluate mobile apps based on cloud computing platform and data mining. We also present a prototype system named MobSafe to identify the mobile app’s virulence or benignancy. Compared with traditional method, such as permission pattern based method, MobSafe combines the dynamic and static analysis methods to comprehensively evaluate an Android app. In the implementation, we adopt Android Security Evaluation Framework (ASEF) and Static Android Analysis Framework (SAAF), the two representative dynamic and static analysis methods, to evaluate the Android apps and estimate the total time needed to evaluate all the apps stored in one mobile app market. Based on the real trace from a commercial mobile app market called AppChina, we can collect the statistics of the number of active Android apps, the average number apps installed in one Android device, and the expanding ratio of mobile apps. As mobile app market serves as the main line of defence against mobile malwares, our evaluation results show that it is practical to use cloud computing platform and data mining to verify all stored apps routinely to filter out malware apps from mobile app markets. As the future work, MobSafe can extensively use machine learning to conduct automotive forensic analysis of mobile apps based on the generated multifaceted data in this stage.",
"title": ""
},
{
"docid": "8c1e70cf4173f9fc48f36c3e94216f15",
"text": "Deep learning methods often require large annotated data sets to estimate their high numbers of parameters, which is not practical for many robotic domains. One way to migitate this issue is to transfer features learned on large datasets to related tasks. In this work, we describe the perception system developed for the entry of team NimbRo Picking into the Amazon Picking Challenge 2016. Object detection and semantic Segmentation methods are adapted to the domain, including incorporation of depth measurements. To avoid the need for large training datasets, we make use of pretrained models whenever possible, e.g. CNNs pretrained on ImageNet, and the whole DenseCap captioning pipeline pretrained on the Visual Genome Dataset. Our system performed well at the APC 2016 and reached second and third places for the stow and pick tasks, respectively.",
"title": ""
},
{
"docid": "1a8662362e51a8783795e4588f0462a8",
"text": "Human body exposure to radiofrequency electromagnetic waves emitted from smart meters was assessed using various exposure configurations. Specific energy absorption rate distributions were determined using three anatomically realistic human models. Each model was assigned with age- and frequency-dependent dielectric properties representing a collection of age groups. Generalized exposure conditions involving standing and sleeping postures were assessed for a home area network operating at 868 and 2,450 MHz. The smart meter antenna was fed with 1 W power input which is an overestimation of what real devices typically emit (15 mW max limit). The highest observed whole body specific energy absorption rate value was 1.87 mW kg-1 , within the child model at a distance of 15 cm from a 2,450 MHz device. The higher values were attributed to differences in dimension and dielectric properties within the model. Specific absorption rate (SAR) values were also estimated based on power density levels derived from electric field strength measurements made at various distances from smart meter devices. All the calculated SAR values were found to be very small in comparison to International Commission on Non-Ionizing Radiation Protection limits for public exposure. Bioelectromagnetics. 39:200-216, 2018. © 2017 Wiley Periodicals, Inc.",
"title": ""
}
] | scidocsrr |
7ad2a261e3f57a43e48e1cc309174cfc | Degeneration in VAE: in the Light of Fisher Information Loss | [
{
"docid": "ff59e2a5aa984dec7805a4d9d55e69e5",
"text": "We introduce Natural Neural Networks, a novel family of algorithms that speed up convergence by adapting their internal representation during training to improve conditioning of the Fisher matrix. In particular, we show a specific example that employs a simple and efficient reparametrization of the neural network weights by implicitly whitening the representation obtained at each layer, while preserving the feed-forward computation of the network. Such networks can be trained efficiently via the proposed Projected Natural Gradient Descent algorithm (PRONG), which amortizes the cost of these reparametrizations over many parameter updates and is closely related to the Mirror Descent online learning algorithm. We highlight the benefits of our method on both unsupervised and supervised learning tasks, and showcase its scalability by training on the large-scale ImageNet Challenge dataset.",
"title": ""
}
] | [
{
"docid": "03bd5c0e41aa5948a5545fa3fca75bc2",
"text": "In the application of lead-acid series batteries, the voltage imbalance of each battery should be considered. Therefore, additional balancer circuits must be integrated into the battery. An active battery balancing circuit with an auxiliary storage can employ a sequential battery imbalance detection algorithm by comparing the voltage of a battery and auxiliary storage. The system is being in balance if the battery voltage imbalance is less than 10mV/cell. In this paper, a new algorithm is proposed so that the battery voltage balancing time can be improved. The battery balancing system is based on the LTC3305 working principle. The simulation verifies that the proposed algorithm can achieve permitted battery voltage imbalance faster than that of the previous algorithm.",
"title": ""
},
{
"docid": "1dfe7a3e875436db76496931db34c7db",
"text": "Biologically detailed single neuron and network models are important for understanding how ion channels, synapses and anatomical connectivity underlie the complex electrical behavior of the brain. While neuronal simulators such as NEURON, GENESIS, MOOSE, NEST, and PSICS facilitate the development of these data-driven neuronal models, the specialized languages they employ are generally not interoperable, limiting model accessibility and preventing reuse of model components and cross-simulator validation. To overcome these problems we have used an Open Source software approach to develop NeuroML, a neuronal model description language based on XML (Extensible Markup Language). This enables these detailed models and their components to be defined in a standalone form, allowing them to be used across multiple simulators and archived in a standardized format. Here we describe the structure of NeuroML and demonstrate its scope by converting into NeuroML models of a number of different voltage- and ligand-gated conductances, models of electrical coupling, synaptic transmission and short-term plasticity, together with morphologically detailed models of individual neurons. We have also used these NeuroML-based components to develop an highly detailed cortical network model. NeuroML-based model descriptions were validated by demonstrating similar model behavior across five independently developed simulators. Although our results confirm that simulations run on different simulators converge, they reveal limits to model interoperability, by showing that for some models convergence only occurs at high levels of spatial and temporal discretisation, when the computational overhead is high. Our development of NeuroML as a common description language for biophysically detailed neuronal and network models enables interoperability across multiple simulation environments, thereby improving model transparency, accessibility and reuse in computational neuroscience.",
"title": ""
},
{
"docid": "4cd7f19d0413f9bab1a2cda5a5b7a9a4",
"text": "Web-based learning plays a vital role in the modern education system, where different technologies are being emerged to enhance this E-learning process. Therefore virtual and online laboratories are gaining popularity due to its easy implementation and accessibility worldwide. These types of virtual labs are useful where the setup of the actual laboratory is complicated due to several factors such as high machinery or hardware cost. This paper presents a very efficient method of building a model using JavaScript Web Graphics Library with HTML5 enabled and having controllable features inbuilt. This type of program is free from any web browser plug-ins or application and also server independent. Proprietary software has always been a bottleneck in the development of such platforms. This approach rules out this issue and can easily applicable. Here the framework has been discussed and neatly elaborated with an example of a simplified robot configuration.",
"title": ""
},
{
"docid": "97a7c48145d682a9ed45109d83c82a73",
"text": "We introduce a large dataset of narrative texts and questions about these texts, intended to be used in a machine comprehension task that requires reasoning using commonsense knowledge. Our dataset complements similar datasets in that we focus on stories about everyday activities, such as going to the movies or working in the garden, and that the questions require commonsense knowledge, or more specifically, script knowledge, to be answered. We show that our mode of data collection via crowdsourcing results in a substantial amount of such inference questions. The dataset forms the basis of a shared task on commonsense and script knowledge organized at SemEval 2018 and provides challenging test cases for the broader natural language understanding community.",
"title": ""
},
{
"docid": "e60c295d02b87d4c88e159a3343e0dcb",
"text": "In 2163 personally interviewed female twins from a population-based registry, the pattern of age at onset and comorbidity of the simple phobias (animal and situational)--early onset and low rates of comorbidity--differed significantly from that of agoraphobia--later onset and high rates of comorbidity. Consistent with an inherited \"phobia proneness\" but not a \"social learning\" model of phobias, the familial aggregation of any phobia, agoraphobia, social phobia, and animal phobia appeared to result from genetic and not from familial-environmental factors, with estimates of heritability of liability ranging from 30% to 40%. The best-fitting multivariate genetic model indicated the existence of genetic and individual-specific environmental etiologic factors common to all four phobia subtypes and others specific for each of the individual subtypes. This model suggested that (1) environmental experiences that predisposed to all phobias were most important for agoraphobia and social phobia and relatively unimportant for the simple phobias, (2) environmental experiences that uniquely predisposed to only one phobia subtype had a major impact on simple phobias, had a modest impact on social phobia, and were unimportant for agoraphobia, and (3) genetic factors that predisposed to all phobias were most important for animal phobia and least important for agoraphobia. Simple phobias appear to arise from the joint effect of a modest genetic vulnerability and phobia-specific traumatic events in childhood, while agoraphobia and, to a somewhat lesser extent, social phobia result from the combined effect of a slightly stronger genetic influence and nonspecific environmental experiences.",
"title": ""
},
{
"docid": "d4cdea26217e90002a3c4522124872a2",
"text": "Recently, several methods for single image super-resolution(SISR) based on deep neural networks have obtained high performance with regard to reconstruction accuracy and computational performance. This paper details the methodology and results of the New Trends in Image Restoration and Enhancement (NTIRE) challenge. The task of this challenge is to restore rich details (high frequencies) in a high resolution image for a single low resolution input image based on a set of prior examples with low and corresponding high resolution images. The challenge has two tracks. We present a super-resolution (SR) method, which uses three losses assigned with different weights to be regarded as optimization target. Meanwhile, the residual blocks are also used for obtaining significant improvement in the evaluation. The final model consists of 9 weight layers with four residual blocks and reconstructs the low resolution image with three color channels simultaneously, which shows better performance on these two tracks and benchmark datasets.",
"title": ""
},
{
"docid": "e73de1e6f191fef625f75808d7fbfbb1",
"text": "Colon cancer is one of the most prevalent diseases across the world. Numerous epidemiological studies indicate that diets rich in fruit, such as berries, provide significant health benefits against several types of cancer, including colon cancer. The anticancer activities of berries are attributed to their high content of phytochemicals and to their relevant antioxidant properties. In vitro and in vivo studies have demonstrated that berries and their bioactive components exert therapeutic and preventive effects against colon cancer by the suppression of inflammation, oxidative stress, proliferation and angiogenesis, through the modulation of multiple signaling pathways such as NF-κB, Wnt/β-catenin, PI3K/AKT/PKB/mTOR, and ERK/MAPK. Based on the exciting outcomes of preclinical studies, a few berries have advanced to the clinical phase. A limited number of human studies have shown that consumption of berries can prevent colorectal cancer, especially in patients at high risk (familial adenopolyposis or aberrant crypt foci, and inflammatory bowel diseases). In this review, we aim to highlight the findings of berries and their bioactive compounds in colon cancer from in vitro and in vivo studies, both on animals and humans. Thus, this review could be a useful step towards the next phase of berry research in colon cancer.",
"title": ""
},
{
"docid": "8f24898cb21a259d9260b67202141d49",
"text": "PROBLEM\nHow can human contributions to accidents be reconstructed? Investigators can easily take the position a of retrospective outsider, looking back on a sequence of events that seems to lead to an inevitable outcome, and pointing out where people went wrong. This does not explain much, however, and may not help prevent recurrence.\n\n\nMETHOD AND RESULTS\nThis paper examines how investigators can reconstruct the role that people contribute to accidents in light of what has recently become known as the new view of human error. The commitment of the new view is to move controversial human assessments and actions back into the flow of events of which they were part and which helped bring them forth, to see why assessments and actions made sense to people at the time. The second half of the paper addresses one way in which investigators can begin to reconstruct people's unfolding mindsets.\n\n\nIMPACT ON INDUSTRY\nIn an era where a large portion of accidents are attributed to human error, it is critical to understand why people did what they did, rather than judging them for not doing what we now know they should have done. This paper helps investigators avoid the traps of hindsight by presenting a method with which investigators can begin to see how people's actions and assessments actually made sense at the time.",
"title": ""
},
{
"docid": "f9f26d8ff95aff0a361fcb321e57a779",
"text": "A novel algorithm for the detection of underwater man-made objects in forwardlooking sonar imagery is proposed. The algorithm takes advantage of the integral-image representation to quickly compute features, and progressively reduces the computational load by working on smaller portions of the image along the detection process phases. By adhering to the proposed scheme, real-time detection on sonar data onboard an autonomous vehicle is made possible. The proposed method does not require training data, as it dynamically takes into account environmental characteristics of the sensed sonar data. The proposed approach has been implemented and integrated into the software system of the Gemellina autonomous surface vehicle, and is able to run in real time. The validity of the proposed approach is demonstrated on real experiments carried out at sea with the Gemellina autonomous surface vehicle.",
"title": ""
},
{
"docid": "d0a6ca9838f8844077fdac61d1d75af1",
"text": "Depth-first search, as developed by Tarjan and coauthors, is a fundamental technique of efficient algorithm design for graphs [23]. This note presents depth-first search algorithms for two basic problems, strong and biconnected components. Previous algorithms either compute auxiliary quantities based on the depth-first search tree (e.g., LOWPOINT values) or require two passes. We present one-pass algorithms that only maintain a representation of the depth-first search path. This gives a simplified view of depth-first search without sacrificing efficiency. In greater detail, most depth-first search algorithms (e.g., [23,10,11]) compute so-called LOWPOINT values that are defined in terms of the depth-first search tree. Because of the success of this method LOWPOINT values have become almost synonymous with depth-first search. LOWPOINT values are regarded as crucial in the strong and biconnected component algorithms, e.g., [14, pp. 94, 514]. Tarjan’s LOWPOINT method for strong components is presented in texts [1, 7,14,16,17,21]. The strong component algorithm of Kosaraju and Sharir [22] is often viewed as conceptu-",
"title": ""
},
{
"docid": "e7e8fe5532d1cb32a7233bc4c99ac3b8",
"text": "The concept of network slicing opens the possibilities to address the complex requirements of multi-tenancy in 5G. To this end, SDN/NFV can act as technology enabler. This paper presents a centralised and dynamic approach for creating and provisioning network slices for virtual network operators' consumption to offer services to their end customers, focusing on an SDN wireless backhaul use case. We demonstrate our approach for dynamic end-to-end slice and service provisioning in a testbed.",
"title": ""
},
{
"docid": "f75b11bc21dc711b76a7a375c2a198d3",
"text": "In many application areas like e-science and data-warehousing detailed information about the origin of data is required. This kind of information is often referred to as data provenance or data lineage. The provenance of a data item includes information about the processes and source data items that lead to its creation and current representation. The diversity of data representation models and application domains has lead to a number of more or less formal definitions of provenance. Most of them are limited to a special application domain, data representation model or data processing facility. Not surprisingly, the associated implementations are also restricted to some application domain and depend on a special data model. In this paper we give a survey of data provenance models and prototypes, present a general categorization scheme for provenance models and use this categorization scheme to study the properties of the existing approaches. This categorization enables us to distinguish between different kinds of provenance information and could lead to a better understanding of provenance in general. Besides the categorization of provenance types, it is important to include the storage, transformation and query requirements for the different kinds of provenance information and application domains in our considerations. The analysis of existing approaches will assist us in revealing open research problems in the area of data provenance.",
"title": ""
},
{
"docid": "d1a4abaa57f978858edf0d7b7dc506ba",
"text": "Abstraction in imagery results from the strategic simplification and elimination of detail to clarify the visual structure of the depicted shape. It is a mainstay of artistic practice and an important ingredient of effective visual communication. We develop a computational method for the abstract depiction of 2D shapes. Our approach works by organizing the shape into parts using a new synthesis of holistic features of the part shape, local features of the shape boundary, and global aspects of shape organization. Our abstractions are new shapes with fewer and clearer parts.",
"title": ""
},
{
"docid": "9cf8a2f73a906f7dc22c2d4fbcf8fa6b",
"text": "In this paper the effect of spoilers on aerodynamic characteristics of an airfoil were observed by CFD.As the experimental airfoil NACA 2415 was choosen and spoiler was extended from five different positions based on the chord length C. Airfoil section is designed with a spoiler extended at an angle of 7 degree with the horizontal.The spoiler extends to 0.15C.The geometry of 2-D airfoil without spoiler and with spoiler was designed in GAMBIT.The numerical simulation was performed by ANS YS Fluent to observe the effect of spoiler position on the aerodynamic characteristics of this particular airfoil. The results obtained from the computational process were plotted on graph and the conceptual assumptions were verified as the lift is reduced and the drag is increased that obeys the basic function of a spoiler. Coefficient of drag. I. INTRODUCTION An airplane wing has a special shape called an airfoil. As a wing moves through air, the air is split and passes above and below the wing. The wing's upper surface is shaped so the air rushing over the top speeds up and stretches out. This decreases the air pressure above the wing. The air flowing below the wing moves in a straighter line, so its speed and air pressure remains the same. Since high air pressure always moves toward low air pressure, the air below the wing pushes upward toward the air above the wing. The wing is in the middle, and the whole wing is ―lifted‖. The faster an airplane moves, the more lift there is and when the force of lift is greater than the force of gravity, the airplane is able to fly. [1] A spoiler, sometimes called a lift dumper is a device intended to reduce lift in an aircraft. Spoilers are plates on the top surface of a wing which can be extended upward into the airflow and spoil it. By doing so, the spoiler creates a carefully controlled stall over the portion of the wing behind it, greatly reducing the lift of that wing section. Spoilers are designed to reduce lift also making considerable increase in drag. Spoilers increase drag and reduce lift on the wing. If raised on only one wing, they aid roll control, causing that wing to drop. If the spoilers rise symmetrically in flight, the aircraft can either be slowed in level flight or can descend rapidly without an increase in airspeed. When the …",
"title": ""
},
{
"docid": "9ce232e2a49652ee7fbfe24c6913d52a",
"text": "Anthropometric quantities are widely used in epidemiologic research as possible confounders, risk factors, or outcomes. 3D laser-based body scans (BS) allow evaluation of dozens of quantities in short time with minimal physical contact between observers and probands. The aim of this study was to compare BS with classical manual anthropometric (CA) assessments with respect to feasibility, reliability, and validity. We performed a study on 108 individuals with multiple measurements of BS and CA to estimate intra- and inter-rater reliabilities for both. We suggested BS equivalents of CA measurements and determined validity of BS considering CA the gold standard. Throughout the study, the overall concordance correlation coefficient (OCCC) was chosen as indicator of agreement. BS was slightly more time consuming but better accepted than CA. For CA, OCCCs for intra- and inter-rater reliability were greater than 0.8 for all nine quantities studied. For BS, 9 of 154 quantities showed reliabilities below 0.7. BS proxies for CA measurements showed good agreement (minimum OCCC > 0.77) after offset correction. Thigh length showed higher reliability in BS while upper arm length showed higher reliability in CA. Except for these issues, reliabilities of CA measurements and their BS equivalents were comparable.",
"title": ""
},
{
"docid": "56667d286f69f8429be951ccf5d61c24",
"text": "As the Internet of Things (IoT) is emerging as an attractive paradigm, a typical IoT architecture that U2IoT (Unit IoT and Ubiquitous IoT) model has been presented for the future IoT. Based on the U2IoT model, this paper proposes a cyber-physical-social based security architecture (IPM) to deal with Information, Physical, and Management security perspectives, and presents how the architectural abstractions support U2IoT model. In particular, 1) an information security model is established to describe the mapping relations among U2IoT, security layer, and security requirement, in which social layer and additional intelligence and compatibility properties are infused into IPM; 2) physical security referring to the external context and inherent infrastructure are inspired by artificial immune algorithms; 3) recommended security strategies are suggested for social management control. The proposed IPM combining the cyber world, physical world and human social provides constructive proposal towards the future IoT security and privacy protection.",
"title": ""
},
{
"docid": "886c284d72a01db9bc4eb9467e14bbbb",
"text": "The Bitcoin cryptocurrency introduced a novel distributed consensus mechanism relying on economic incentives. While a coalition controlling a majority of computational power may undermine the system, for example by double-spending funds, it is often assumed it would be incentivized not to attack to protect its long-term stake in the health of the currency. We show how an attacker might purchase mining power (perhaps at a cost premium) for a short duration via bribery. Indeed, bribery can even be performed in-band with the system itself enforcing the bribe. A bribing attacker would not have the same concerns about the long-term health of the system, as their majority control is inherently short-lived. New modeling assumptions are needed to explain why such attacks have not been observed in practice. The need for all miners to avoid short-term profits by accepting bribes further suggests a potential tragedy of the commons which has not yet been analyzed.",
"title": ""
},
{
"docid": "b4586447ef1536f23793651fcd9d71b8",
"text": "State monitoring is widely used for detecting critical events and abnormalities of distributed systems. As the scale of such systems grows and the degree of workload consolidation increases in Cloud data centers, node failures and performance interferences, especially transient ones, become the norm rather than the exception. Hence, distributed state monitoring tasks are often exposed to impaired communication caused by such dynamics on different nodes. Unfortunately, existing distributed state monitoring approaches are often designed under the assumption of always-online distributed monitoring nodes and reliable inter-node communication. As a result, these approaches often produce misleading results which in turn introduce various problems to Cloud users who rely on state monitoring results to perform automatic management tasks such as auto-scaling. This paper introduces a new state monitoring approach that tackles this challenge by exposing and handling communication dynamics such as message delay and loss in Cloud monitoring environments. Our approach delivers two distinct features. First, it quantitatively estimates the accuracy of monitoring results to capture uncertainties introduced by messaging dynamics. This feature helps users to distinguish trustworthy monitoring results from ones heavily deviated from the truth, yet significantly improves monitoring utility compared with simple techniques that invalidate all monitoring results generated with the presence of messaging dynamics. Second, our approach also adapts to non-transient messaging issues by reconfiguring distributed monitoring algorithms to minimize monitoring errors. Our experimental results show that, even under severe message loss and delay, our approach consistently improves monitoring accuracy, and when applied to Cloud application auto-scaling, outperforms existing state monitoring techniques in terms of the ability to correctly trigger dynamic provisioning.",
"title": ""
},
{
"docid": "c432a44e48e777a7a3316c1474f0aa12",
"text": "In this paper, we present an algorithm that generates high dynamic range (HDR) images from multi-exposed low dynamic range (LDR) stereo images. The vast majority of cameras in the market only capture a limited dynamic range of a scene. Our algorithm first computes the disparity map between the stereo images. The disparity map is used to compute the camera response function which in turn results in the scene radiance maps. A refinement step for the disparity map is then applied to eliminate edge artifacts in the final HDR image. Existing methods generate HDR images of good quality for still or slow motion scenes, but give defects when the motion is fast. Our algorithm can deal with images taken during fast motion scenes and tolerate saturation and radiometric changes better than other stereo matching algorithms.",
"title": ""
},
{
"docid": "0151ad8176711618e6cd5b0e20abf0cb",
"text": "Skeleton-based action recognition has made great progress recently, but many problems still remain unsolved. For example, the representations of skeleton sequences captured by most of the previous methods lack spatial structure information and detailed temporal dynamics features. In this paper, we propose a novel model with spatial reasoning and temporal stack learning (SR-TSL) for skeleton-based action recognition, which consists of a spatial reasoning network (SRN) and a temporal stack learning network (TSLN). The SRN can capture the high-level spatial structural information within each frame by a residual graph neural network, while the TSLN can model the detailed temporal dynamics of skeleton sequences by a composition of multiple skip-clip LSTMs. During training, we propose a clip-based incremental loss to optimize the model. We perform extensive experiments on the SYSU 3D Human-Object Interaction dataset and NTU RGB+D dataset and verify the effectiveness of each network of our model. The comparison results illustrate that our approach achieves much better results than the state-of-the-art methods.",
"title": ""
}
] | scidocsrr |
2b87c4c7c558342c8daf9fbc3234cb48 | The particle swarm optimization algorithm: convergence analysis and parameter selection | [
{
"docid": "555ad116b9b285051084423e2807a0ba",
"text": "The performance of particle swarm optimization using an inertia weight is compared with performance using a constriction factor. Five benchmark functions are used for the comparison. It is concluded that the best approach is to use the constriction factor while limiting the maximum velocity Vmax to the dynamic range of the variable Xmax on each dimension. This approach provides performance on the benchmark functions superior to any other published results known by the authors. '",
"title": ""
}
] | [
{
"docid": "e198dab977ba3e97245ecdd07fd25690",
"text": "The majority of the human genome consists of non-coding regions that have been called junk DNA. However, recent studies have unveiled that these regions contain cis-regulatory elements, such as promoters, enhancers, silencers, insulators, etc. These regulatory elements can play crucial roles in controlling gene expressions in specific cell types, conditions, and developmental stages. Disruption to these regions could contribute to phenotype changes. Precisely identifying regulatory elements is key to deciphering the mechanisms underlying transcriptional regulation. Cis-regulatory events are complex processes that involve chromatin accessibility, transcription factor binding, DNA methylation, histone modifications, and the interactions between them. The development of next-generation sequencing techniques has allowed us to capture these genomic features in depth. Applied analysis of genome sequences for clinical genetics has increased the urgency for detecting these regions. However, the complexity of cis-regulatory events and the deluge of sequencing data require accurate and efficient computational approaches, in particular, machine learning techniques. In this review, we describe machine learning approaches for predicting transcription factor binding sites, enhancers, and promoters, primarily driven by next-generation sequencing data. Data sources are provided in order to facilitate testing of novel methods. The purpose of this review is to attract computational experts and data scientists to advance this field.",
"title": ""
},
{
"docid": "895da346d947feba89cb171accb3f142",
"text": "A six-phase six-step voltage-fed induction motor is presented. The inverter is a transistorized six-step voltage source inverter, while the motor is a modified standard three-phase squirrel-cage motor. The stator is rewound with two three-phase winding sets displaced from each other by 30 electrical degrees. A model for the system is developed to simulate the drive and predict its performance. The simulation results for steady-state conditions and experimental measurements show very good correlation. It is shown that this winding configuration results in the elimination of all air-gap flux time harmonics of the order (6v ±1, v = 1,3,5,...). Consequently, all rotor copper losses produced by these harmonics as well as all torque harmonics of the order (6v, v = 1,3,5,...) are eliminated. A comparison between-the measured instantaneous torque of both three-phase and six-phase six-step voltage-fed induction machines shows the advantage of the six-phase system over the three-phase system in eliminating the sixth harmonic dominant torque ripple.",
"title": ""
},
{
"docid": "c692dd35605c4af62429edef6b80c121",
"text": "As one of the most important mid-level features of music, chord contains rich information of harmonic structure that is useful for music information retrieval. In this paper, we present a chord recognition system based on the N-gram model. The system is time-efficient, and its accuracy is comparable to existing systems. We further propose a new method to construct chord features for music emotion classification and evaluate its performance on commercial song recordings. Experimental results demonstrate the advantage of using chord features for music classification and retrieval.",
"title": ""
},
{
"docid": "dd8222a589e824b5189194ab697f27d7",
"text": "Facial expression recognition has been investigated for many years, and there are two popular models: Action Units (AUs) and the Valence-Arousal space (V-A space) that have been widely used. However, most of the databases for estimating V-A intensity are captured in laboratory settings, and the benchmarks \"in-the-wild\" do not exist. Thus, the First Affect-In-The-Wild Challenge released a database for V-A estimation while the videos were captured in wild condition. In this paper, we propose an integrated deep learning framework for facial attribute recognition, AU detection, and V-A estimation. The key idea is to apply AUs to estimate the V-A intensity since both AUs and V-A space could be utilized to recognize some emotion categories. Besides, the AU detector is trained based on the convolutional neural network (CNN) for facial attribute recognition. In experiments, we will show the results of the above three tasks to verify the performances of our proposed network framework.",
"title": ""
},
{
"docid": "b98585e7ed4b34afb72f81aeae2ebdcc",
"text": "The capability of transcribing music audio into music notation is a fascinating example of human intelligence. It involves perception (analyzing complex auditory scenes), cognition (recognizing musical objects), knowledge representation (forming musical structures), and inference (testing alternative hypotheses). Automatic music transcription (AMT), i.e., the design of computational algorithms to convert acoustic music signals into some form of music notation, is a challenging task in signal processing and artificial intelligence. It comprises several subtasks, including multipitch estimation (MPE), onset and offset detection, instrument recognition, beat and rhythm tracking, interpretation of expressive timing and dynamics, and score typesetting.",
"title": ""
},
{
"docid": "acf4645478c28811d41755b0ed81fb39",
"text": "Make more knowledge even in less time every day. You may not always spend your time and money to go abroad and get the experience and knowledge by yourself. Reading is a good alternative to do in getting this desirable knowledge and experience. You may gain many things from experiencing directly, but of course it will spend much money. So here, by reading social network data analytics social network data analytics, you can take more advantages with limited budget.",
"title": ""
},
{
"docid": "eaad298fce83ade590a800d2318a2928",
"text": "Space vector modulation (SVM) is the best modulation technique to drive 3-phase load such as 3-phase induction motor. In this paper, the pulse width modulation strategy with SVM is analyzed in detail. The modulation strategy uses switching time calculator to calculate the timing of voltage vector applied to the three-phase balanced-load. The principle of the space vector modulation strategy is performed using Matlab/Simulink. The simulation result indicates that this algorithm is flexible and suitable to use for advance vector control. The strategy of the switching minimizes the distortion of load current as well as loss due to minimize number of commutations in the inverter.",
"title": ""
},
{
"docid": "10aca07789cf8e465443ac9813eef189",
"text": "INTRODUCTION\nThe faculty of Medicine, (FOM) Makerere University Kampala was started in 1924 and has been running a traditional curriculum for 79 years. A few years back it embarked on changing its curriculum from traditional to Problem Based Learning (PBL) and Community Based Education and Service (COBES) as well as early clinical exposure. This curriculum has been implemented since the academic year 2003/2004. The study was done to describe the steps taken to change and implement the curriculum at the Faculty of Medicine, Makerere University Kampala.\n\n\nOBJECTIVE\nTo describe the steps taken to change and implement the new curriculum at the Faculty of Medicine.\n\n\nMETHODS\nThe stages taken during the process were described and analysed.\n\n\nRESULTS\nThe following stages were recognized characterization of Uganda's health status, analysis of government policy, analysis of old curriculum, needs assessment, adoption of new model (SPICES), workshop/retreats for faculty sensitization, incremental development of programs by faculty, implementation of new curriculum.\n\n\nCONCLUSION\nThe FOM has successfully embarked on curriculum change. This has not been without challenges. However, challenges have been taken on and handled as they arose and this has led to the implementation of new curriculum. Problem based learning can be adopted even in a low resourced country like Uganda.",
"title": ""
},
{
"docid": "2f2c99ac066dd2875fcfa2dc42467757",
"text": "The popularity of wireless networks has increased in recent years and is becoming a common addition to LANs. In this paper we investigate a novel use for a wireless network based on the IEEE 802.11 standard: inferring the location of a wireless client from signal quality measures. Similar work has been limited to prototype systems that rely on nearest-neighbor techniques to infer location. In this paper, we describe Nibble, a Wi-Fi location service that uses Bayesian networks to infer the location of a device. We explain the general theory behind the system and how to use the system, along with describing our experiences at a university campus building and at a research lab. We also discuss how probabilistic modeling can be applied to a diverse range of applications that use sensor data.",
"title": ""
},
{
"docid": "9978f33847a09c651ccce68c3b88287f",
"text": "We propose a method for discovering the dependency relationships between the topics of documents shared in social networks using the latent social interactions, attempting to answer the question: given a seemingly new topic, from where does this topic evolve? In particular, we seek to discover the pair-wise probabilistic dependency in topics of documents which associate social actors from a latent social network, where these documents are being shared. By viewing the evolution of topics as a Markov chain, we estimate a Markov transition matrix of topics by leveraging social interactions and topic semantics. Metastable states in a Markov chain are applied to the clustering of topics. Applied to the CiteSeer dataset, a collection of documents in academia, we show the trends of research topics, how research topics are related and which are stable. We also show how certain social actors, authors, impact these topics and propose new ways for evaluating author impact.",
"title": ""
},
{
"docid": "261ef8b449727b615f8cd5bd458afa91",
"text": "Luck (2009) argues that gamers face a dilemma when it comes to performing certain virtual acts. Most gamers regularly commit acts of virtual murder, and take these acts to be morally permissible. They are permissible because unlike real murder, no one is harmed in performing them; their only victims are computer-controlled characters, and such characters are not moral patients. What Luck points out is that this justification equally applies to virtual pedophelia, but gamers intuitively think that such acts are not morally permissible. The result is a dilemma: either gamers must reject the intuition that virtual pedophelic acts are impermissible and so accept partaking in such acts, or they must reject the intuition that virtual murder acts are permissible, and so abstain from many (if not most) extant games. While the prevailing solution to this dilemma has been to try and find a morally relevant feature to distinguish the two cases, I argue that a different route should be pursued. It is neither the case that all acts of virtual murder are morally permissible, nor are all acts of virtual pedophelia impermissible. Our intuitions falter and produce this dilemma because they are not sensitive to the different contexts in which games present virtual acts.",
"title": ""
},
{
"docid": "024168795536bc141bb07af74486ef78",
"text": "Video-based person re-identification matches video clips of people across non-overlapping cameras. Most existing methods tackle this problem by encoding each video frame in its entirety and computing an aggregate representation across all frames. In practice, people are often partially occluded, which can corrupt the extracted features. Instead, we propose a new spatiotemporal attention model that automatically discovers a diverse set of distinctive body parts. This allows useful information to be extracted from all frames without succumbing to occlusions and misalignments. The network learns multiple spatial attention models and employs a diversity regularization term to ensure multiple models do not discover the same body part. Features extracted from local image regions are organized by spatial attention model and are combined using temporal attention. As a result, the network learns latent representations of the face, torso and other body parts using the best available image patches from the entire video sequence. Extensive evaluations on three datasets show that our framework outperforms the state-of-the-art approaches by large margins on multiple metrics.",
"title": ""
},
{
"docid": "139d9d5866a1e455af954b2299bdbcf6",
"text": "1 . I n t r o d u c t i o n Reasoning about knowledge and belief has long been an issue of concern in philosophy and artificial intelligence (cf. [Hil],[MH],[Mo]). Recently we have argued that reasoning about knowledge is also crucial in understanding and reasoning about protocols in distributed systems, since messages can be viewed as changing the state of knowledge of a system [HM]; knowledge also seems to be of v i tal importance in cryptography theory [Me] and database theory. In order to formally reason about knowledge, we need a good semantic model. Part of the difficulty in providing such a model is that there is no agreement on exactly what the properties of knowledge are or should * This author's work was supported in part by DARPA contract N00039-82-C-0250. be. For example, is it the case that you know what facts you know? Do you know what you don't know? Do you know only true things, or can something you \"know\" actually be false? Possible-worlds semantics provide a good formal tool for \"customizing\" a logic so that, by making minor changes in the semantics, we can capture different sets of axioms. The idea, first formalized by Hintikka [Hi l ] , is that in each state of the world, an agent (or knower or player: we use all these words interchangeably) has other states or worlds that he considers possible. An agent knows p exactly if p is true in all the worlds that he considers possible. As Kripke pointed out [Kr], by imposing various conditions on this possibil i ty relation, we can capture a number of interesting axioms. For example, if we require that the real world always be one of the possible worlds (which amounts to saying that the possibility relation is reflexive), then it follows that you can't know anything false. Similarly, we can show that if the relation is transitive, then you know what you know. If the relation is transitive and symmetric, then you also know what you don't know. (The one-knower models where the possibility relation is reflexive corresponds to the classical modal logic T, while the reflexive and transitive case corresponds to S4, and the reflexive, symmetric and transitive case corresponds to S5.) Once we have a general framework for modelling knowledge, a reasonable question to ask is how hard it is to reason about knowledge. In particular, how hard is it to decide if a given formula is valid or satisfiable? The answer to this question depends crucially on the choice of axioms. For example, in the oneknower case, Ladner [La] has shown that for T and S4 the problem of deciding satisfiability is complete in polynomial space, while for S5 it is NP-complete, J. Halpern and Y. Moses 481 and thus no harder than the satisf iabi l i ty problem for propos i t iona l logic. Our a im in th is paper is to reexamine the possiblewor lds f ramework for knowledge and belief w i t h four par t icu lar po ints of emphasis: (1) we show how general techniques for f inding decision procedures and complete ax iomat izat ions apply to models for knowledge and belief, (2) we show how sensitive the di f f icul ty of the decision procedure is to such issues as the choice of moda l operators and the ax iom system, (3) we discuss how not ions of common knowledge and impl ic i t knowl edge among a group of agents fit in to the possibleworlds f ramework, and, f inal ly, (4) we consider to what extent the possible-worlds approach is a viable one for model l ing knowledge and belief. 
We begin in Section 2 by reviewing possible-world semantics in deta i l , and prov ing tha t the many-knower versions of T, S4, and S5 do indeed capture some of the more common axiomatizat ions of knowledge. In Section 3 we t u r n to complexity-theoret ic issues. We review some standard not ions f rom complexi ty theory, and then reprove and extend Ladner's results to show tha t the decision procedures for the many-knower versions of T, S4, and S5 are a l l complete in po lynomia l space.* Th is suggests tha t for S5, reasoning about many agents' knowledge is qual i ta t ive ly harder than jus t reasoning about one agent's knowledge of the real wor ld and of his own knowledge. In Section 4 we t u rn our at tent ion to mod i fy ing the model so tha t i t can deal w i t h belief rather than knowledge, where one can believe something tha t is false. Th is turns out to be somewhat more compl i cated t han dropp ing the assumption of ref lexivi ty, but i t can s t i l l be done in the possible-worlds f ramework. Results about decision procedures and complete axiomat i i a t i ons for belief paral le l those for knowledge. In Section 5 we consider what happens when operators for common knowledge and implicit knowledge are added to the language. A group has common knowledge of a fact p exact ly when everyone knows tha t everyone knows tha t everyone knows ... tha t p is t rue. (Common knowledge is essentially wha t McCar thy 's \" f oo l \" knows; cf. [MSHI] . ) A group has i m p l ic i t knowledge of p i f, roughly speaking, when the agents poo l the i r knowledge together they can deduce p. (Note our usage of the not ion of \" imp l i c i t knowl edge\" here differs s l ight ly f rom the way it is used in [Lev2] and [FH].) As shown in [ H M l ] , common knowl edge is an essential state for reaching agreements and * A problem is said to be complete w i th respect to a complexity class if, roughly speaking, it is the hardest problem in that class (see Section 3 for more details). coordinating action. For very similar reasons, common knowledge also seems to play an important role in human understanding of speech acts (cf. [CM]). The notion of implicit knowledge arises when reasoning about what states of knowledge a group can attain through communication, and thus is also crucial when reasoning about the efficacy of speech acts and about communication protocols in distributed systems. It turns out that adding an implicit knowledge operator to the language does not substantially change the complexity of deciding the satisfiability of formulas in the language, but this is not the case for common knowledge. Using standard techniques from PDL (Propositional Dynamic Logic; cf. [FL],[Pr]), we can show that when we add common knowledge to the language, the satisfiability problem for the resulting logic (whether it is based on T, S4, or S5) is complete in deterministic exponential time, as long as there at least two knowers. Thus, adding a common knowledge operator renders the decision procedure qualitatively more complex. (Common knowledge does not seem to be of much interest in the in the case of one knower. In fact, in the case of S4 and S5, if there is only one knower, knowledge and common knowledge are identical.) We conclude in Section 6 with some discussion of the appropriateness of the possible-worlds approach for capturing knowledge and belief, particularly in light of our results on computational complexity. 
Detailed proofs of the theorems stated here, as well as further discussion of these results, can be found in the ful l paper ([HM2]). 482 J. Halpern and Y. Moses 2.2 Possib le-wor lds semant ics: Following Hintikka [H i l ] , Sato [Sa], Moore [Mo], and others, we use a posaible-worlds semantics to model knowledge. This provides us wi th a general framework for our semantical investigations of knowledge and belief. (Everything we say about \"knowledge* in this subsection applies equally well to belief.) The essential idea behind possible-worlds semantics is that an agent's state of knowledge corresponds to the extent to which he can determine what world he is in. In a given world, we can associate wi th each agent the set of worlds that, according to the agent's knowledge, could possibly be the real world. An agent is then said to know a fact p exactly if p is true in all the worlds in this set; he does not know p if there is at least one world that he considers possible where p does not hold. * We discuss the ramifications of this point in Section 6. ** The name K (m) is inspired by the fact that for one knower, the system reduces to the well-known modal logic K. J. Halpern and Y. Moses 483 484 J. Halpern and Y. Moses that can be said is that we are modelling a rather idealised reaaoner, who knows all tautologies and all the logical consequences of his knowledge. If we take the classical interpretation of knowledge as true, justified belief, then an axiom such as A3 seems to be necessary. On the other hand, philosophers have shown that axiom A5 does not hold wi th respect to this interpretation ([Len]). However, the S5 axioms do capture an interesting interpretation of knowledge appropriate for reasoning about distributed systems (see [HM1] and Section 6). We continue here wi th our investigation of all these logics, deferring further comments on their appropriateness to Section 6. Theorem 3 implies that the provable formulas of K (m) correspond precisely to the formulas that are valid for Kripke worlds. As Kripke showed [Kr], there are simple conditions that we can impose on the possibility relations Pi so that the valid formulas of the resulting worlds are exactly the provable formulas of T ( m ) , S4 (m) , and S5(m) respectively. We wi l l try to motivate these conditions, but first we need a few definitions. * Since Lemma 4(b) says that a relation that is both reflexive and Euclidean must also be transitive, the reader may auspect that axiom A4 ia redundant in S5. Thia indeed ia the caae. J. Halpern and Y. Moses 485 486 J. Halpern and Y. Moses",
"title": ""
},
{
"docid": "9dac75a40e421163c4e05cfd5d36361f",
"text": "In recent years, many data mining methods have been proposed for finding useful and structured information from market basket data. The association rule model was recently proposed in order to discover useful patterns and dependencies in such data. This paper discusses a method for indexing market basket data efficiently for similarity search. The technique is likely to be very useful in applications which utilize the similarity in customer buying behavior in order to make peer recommendations. We propose an index called the signature table, which is very flexible in supporting a wide range of similarity functions. The construction of the index structure is independent of the similarity function, which can be specified at query time. The resulting similarity search algorithm shows excellent scalability with increasing memory availability and database size.",
"title": ""
},
{
"docid": "b3c9d10efd071659336a1521ce0f8465",
"text": "The traditional diet in Okinawa is anchored by root vegetables (principally sweet potatoes), green and yellow vegetables, soybean-based foods, and medicinal plants. Marine foods, lean meats, fruit, medicinal garnishes and spices, tea, alcohol are also moderately consumed. Many characteristics of the traditional Okinawan diet are shared with other healthy dietary patterns, including the traditional Mediterranean diet, DASH diet, and Portfolio diet. All these dietary patterns are associated with reduced risk for cardiovascular disease, among other age-associated diseases. Overall, the important shared features of these healthy dietary patterns include: high intake of unrefined carbohydrates, moderate protein intake with emphasis on vegetables/legumes, fish, and lean meats as sources, and a healthy fat profile (higher in mono/polyunsaturated fats, lower in saturated fat; rich in omega-3). The healthy fat intake is likely one mechanism for reducing inflammation, optimizing cholesterol, and other risk factors. Additionally, the lower caloric density of plant-rich diets results in lower caloric intake with concomitant high intake of phytonutrients and antioxidants. Other shared features include low glycemic load, less inflammation and oxidative stress, and potential modulation of aging-related biological pathways. This may reduce risk for chronic age-associated diseases and promote healthy aging and longevity.",
"title": ""
},
{
"docid": "94ec2b6c24cbbbb8a648bd83873aa0c5",
"text": "s since January 1975, a full-text search capacity, and a personal archive for saving articles and search results of interest. All articles can be printed in a format that is virtually identical to that of the typeset pages. Beginning six months after publication, the full text of all Original Articles and Special Articles is available free to nonsubscribers who have completed a brief registration. Copyright © 2003 Massachusetts Medical Society. All rights reserved. Downloaded from www.nejm.org at UNIV OF CINCINNATI SERIALS DEPT on August 8, 2007 .",
"title": ""
},
{
"docid": "4185d65971d7345afbd7189368ed9303",
"text": "Ticket annotation and search has become an essential research subject for the successful delivery of IT operational analytics. Millions of tickets are created yearly to address business users' IT related problems. In IT service desk management, it is critical to first capture the pain points for a group of tickets to determine root cause; secondly, to obtain the respective distributions in order to layout the priority of addressing these pain points. An advanced ticket analytics system utilizes a combination of topic modeling, clustering and Information Retrieval (IR) technologies to address the above issues and the corresponding architecture which integrates of these features will allow for a wider distribution of this technology and progress to a significant financial benefit for the system owner. Topic modeling has been used to extract topics from given documents; in general, each topic is represented by a unigram language model. However, it is not clear how to interpret the results in an easily readable/understandable way until now. Due to the inefficiency to render top concepts using existing techniques, in this paper, we propose a probabilistic framework, which consists of language modeling (especially the topic models), Part-Of-Speech (POS) tags, query expansion, retrieval modeling and so on for the practical challenge. The rigorously empirical experiments demonstrate the consistent and utility performance of the proposed method on real datasets.",
"title": ""
},
{
"docid": "d22e8f2029e114b0c648a2cdfba4978a",
"text": "This paper considers innovative marketing within the context of a micro firm, exploring how such firm’s marketing practices can take advantage of digital media. Factors that influence a micro firm’s innovative activities are examined and the development and implementation of digital media in the firm’s marketing practice is explored. Despite the significance of marketing and innovation to SMEs, a lack of literature and theory on innovation in marketing theory exists. Research suggests that small firms’ marketing practitioners and entrepreneurs have identified their marketing focus on the 4Is. This paper builds on knowledge in innovation and marketing and examines the process in a micro firm. A qualitative approach is applied using action research and case study approach. The relevant literature is reviewed as the starting point to diagnose problems and issues anticipated by business practitioners. A longitudinal study is used to illustrate the process of actions taken with evaluations and reflections presented. The exploration illustrates that in practice much of the marketing activities within micro firms are driven by incremental innovation. This research emphasises that integrating Information Communication Technologies (ICTs) successfully in marketing requires marketers to take an active managerial role far beyond their traditional areas of competence and authority.",
"title": ""
},
{
"docid": "cf6a7252039826211635cc9221f1db66",
"text": "Blockchain technologies are gaining massive momentum in the last few years. Blockchains are distributed ledgers that enable parties who do not fully trust each other to maintain a set of global states. The parties agree on the existence, values, and histories of the states. As the technology landscape is expanding rapidly, it is both important and challenging to have a firm grasp of what the core technologies have to offer, especially with respect to their data processing capabilities. In this paper, we first survey the state of the art, focusing on private blockchains (in which parties are authenticated). We analyze both in-production and research systems in four dimensions: distributed ledger, cryptography, consensus protocol, and smart contract. We then present BLOCKBENCH, a benchmarking framework for understanding performance of private blockchains against data processing workloads. We conduct a comprehensive evaluation of three major blockchain systems based on BLOCKBENCH, namely Ethereum, Parity, and Hyperledger Fabric. The results demonstrate several trade-offs in the design space, as well as big performance gaps between blockchain and database systems. Drawing from design principles of database systems, we discuss several research directions for bringing blockchain performance closer to the realm of databases.",
"title": ""
}
] | scidocsrr |
7727c17c6bb7423759ec4ff377681fb4 | Facial Expression Recognition using Convolutional Neural Networks: State of the Art | [
{
"docid": "dfacd79df58a78433672f06fdb10e5a2",
"text": "“Frontalization” is the process of synthesizing frontal facing views of faces appearing in single unconstrained photos. Recent reports have suggested that this process may substantially boost the performance of face recognition systems. This, by transforming the challenging problem of recognizing faces viewed from unconstrained viewpoints to the easier problem of recognizing faces in constrained, forward facing poses. Previous frontalization methods did this by attempting to approximate 3D facial shapes for each query image. We observe that 3D face shape estimation from unconstrained photos may be a harder problem than frontalization and can potentially introduce facial misalignments. Instead, we explore the simpler approach of using a single, unmodified, 3D surface as an approximation to the shape of all input faces. We show that this leads to a straightforward, efficient and easy to implement method for frontalization. More importantly, it produces aesthetic new frontal views and is surprisingly effective when used for face recognition and gender estimation.",
"title": ""
}
] | [
{
"docid": "e510140bfc93089e69cb762b968de5e9",
"text": "Owing to the popularity of the PDF format and the continued exploitation of Adobe Reader, the detection of malicious PDFs remains a concern. All existing detection techniques rely on the PDF parser to a certain extent, while the complexity of the PDF format leaves an abundant space for parser confusion. To quantify the difference between these parsers and Adobe Reader, we create a reference JavaScript extractor by directly tapping into Adobe Reader at locations identified through a mostly automatic binary analysis technique. By comparing the output of this reference extractor against that of several opensource JavaScript extractors on a large data set obtained from VirusTotal, we are able to identify hundreds of samples which existing extractors fail to extract JavaScript from. By analyzing these samples we are able to identify several weaknesses in each of these extractors. Based on these lessons, we apply several obfuscations on a malicious PDF sample, which can successfully evade all the malware detectors tested. We call this evasion technique a PDF parser confusion attack. Lastly, we demonstrate that the reference JavaScript extractor improves the accuracy of existing JavaScript-based classifiers and how it can be used to mitigate these parser limitations in a real-world setting.",
"title": ""
},
{
"docid": "751843f6085ba854dc75d9a6828bed13",
"text": "With the developments in information technology and improvements in communication channels, fraud is spreading all over the world, resulting in huge financial losses. Though fraud prevention mechanisms such as CHIP&PIN are developed, these mechanisms do not prevent the most common fraud types such as fraudulent credit card usages over virtual POS terminals through Internet or mail orders. As a result, fraud detection is the essential tool and probably the best way to stop such fraud types. In this study, classification models based on Artificial Neural Networks (ANN) and Logistic Regression (LR) are developed and applied on credit card fraud detection problem. This study is one of the firsts to compare the performance of ANN and LR methods in credit card fraud detection with a real data set.",
"title": ""
},
{
"docid": "8f660dd12e7936a556322f248a9e2a2a",
"text": "We develop and apply statistical topic models to software as a means of extracting concepts from source code. The effectiveness of the technique is demonstrated on 1,555 projects from SourceForge and Apache consisting of 113,000 files and 19 million lines of code. In addition to providing an automated, unsupervised, solution to the problem of summarizing program functionality, the approach provides a probabilistic framework with which to analyze and visualize source file similarity. Finally, we introduce an information-theoretic approach for computing tangling and scattering of extracted concepts, and present preliminary results",
"title": ""
},
{
"docid": "d2694577861e75535e59e316bd6a9015",
"text": "Despite being a new term, ‘fake news’ has evolved rapidly. This paper argues that it should be reserved for cases of deliberate presentation of (typically) false or misleading claims as news, where these are misleading by design. The phrase ‘by design’ here refers to systemic features of the design of the sources and channels by which fake news propagates and, thereby, manipulates the audience’s cognitive processes. This prospective definition is then tested: first, by contrasting fake news with other forms of public disinformation; second, by considering whether it helps pinpoint conditions for the (recent) proliferation of fake news. Résumé: En dépit de son utilisation récente, l’expression «fausses nouvelles» a évolué rapidement. Cet article soutient qu'elle devrait être réservée aux présentations intentionnelles d’allégations (typiquement) fausses ou trompeuses comme si elles étaient des nouvelles véridiques et où elles sont faussées à dessein. L'expression «à dessein» fait ici référence à des caractéristiques systémiques de la conception des sources et des canaux par lesquels les fausses nouvelles se propagent et par conséquent, manipulent les processus cognitifs du public. Cette définition prospective est ensuite mise à l’épreuve: d'abord, en opposant les fausses nouvelles à d'autres formes de désinformation publique; deuxièmement, en examinant si elle aide à cerner les conditions de la prolifération (récente) de fausses nou-",
"title": ""
},
{
"docid": "627587e2503a2555846efb5f0bca833b",
"text": "Image generation has been successfully cast as an autoregressive sequence generation or transformation problem. Recent work has shown that self-attention is an effective way of modeling textual sequences. In this work, we generalize a recently proposed model architecture based on self-attention, the Transformer, to a sequence modeling formulation of image generation with a tractable likelihood. By restricting the selfattention mechanism to attend to local neighborhoods we significantly increase the size of images the model can process in practice, despite maintaining significantly larger receptive fields per layer than typical convolutional neural networks. While conceptually simple, our generative models significantly outperform the current state of the art in image generation on ImageNet, improving the best published negative log-likelihood on ImageNet from 3.83 to 3.77. We also present results on image super-resolution with a large magnification ratio, applying an encoder-decoder configuration of our architecture. In a human evaluation study, we find that images generated by our super-resolution model fool human observers three times more often than the previous state of the art.",
"title": ""
},
{
"docid": "6db737f9042631ddda9bae7c89b00701",
"text": "A self-assessment of time management is developed for middle-school students. A sample of entering seventh-graders (N = 814) from five states across the USA completed this instrument, with 340 students retested 6 months later. Exploratory and confirmatory factor analysis suggested two factors (i.e., Meeting Deadlines and Planning) that adequately explain the variance in time management for this age group. Scales show evidence of reliability and validity; with high internal consistency, reasonable consistency of factor structure over time, moderate to high correlations with Conscientiousness, low correlations with the remaining four personality dimensions of the Big Five, and reasonable prediction of students’ grades. Females score significantly higher on both factors of time management, with gender differences in Meeting Deadlines (but not Planning) mediated by Conscientiousness. Potential applications of the instrument for evaluation, diagnosis, and remediation in educational settings are discussed. 2009 Elsevier Ltd. All rights reserved. 1. The assessment of time management in middle-school students In our technologically enriched society, individuals are constantly required to multitask, prioritize, and work against deadlines in a timely fashion (Orlikowsky & Yates, 2002). Time management has caught the attention of educational researchers, industrial organizational psychologists, and entrepreneurs, for its possible impact on academic achievement, job performance, and quality of life (Macan, 1994). However, research on time management has not kept pace with this enthusiasm, with extant investigations suffering from a number of problems. Claessens, Van Eerde, Rutte, and Roe’s (2007) review of the literature suggest that there are three major limitations to research on time management. First, many measures of time management have limited validity evidence. Second, many studies rely solely on one-shot self-report assessment, such that evidence for a scale’s generalizability over time cannot be collected. Third, school (i.e., K-12) populations have largely been ignored. For example, all studies in the Claessens et al. (2007) review focus on adult workplace samples (e.g., teachers, engineers) or university students, rather than students in K-12. The current study involves the development of a time management assessment tailored specifically to middle-school students (i.e., adolescents in the sixth to eighth grade of schooling). Time management may be particularly important at the onset of adolescence for three reasons. First, the possibility of early identification and remediation of poor time management practices. Second, the transition into secondary education, from a learning environment involving one teacher to one of time-tabled classes for different subjects with different teachers setting assignments and tests that may occur contiguously. Successfully navigating this new learning environment requires the development of time management skills. Third, adolescents use large amounts of their discretionary time on television, computer gaming, internet use, and sports: Average estimates are 3=4 and 2=4 h per day for seventh-grade boys and girls, respectively (Van den Bulck, 2004). With less time left to do more administratively complex schoolwork, adolescents clearly require time management skills to succeed academically. 1.1. 
Definitions and assessments of time management Time management has been defined and operationalized in several different ways: As a means for monitoring and controlling time, as setting goals in life and keeping track of time use, as prioritizing goals and generating tasks from the goals, and as the perception of a more structured and purposive life (e.g., Bond & Feather, 1988; Britton & Tesser, 1991; Burt & Kemp, 1994; Eilam & Aharon, 2003). The various definitions all converge on the same essential element: The completion of tasks within an expected timeframe while maintaining outcome quality, through mechanisms such as planning, organizing, prioritizing, or multitasking. To the same effect, Claessens et al. (2007) defined time management as ‘‘behaviors that aim at achieving an effective use of time while performing certain goal-directed activities” (p. 36). Four instruments have been used to assess time management in adults: The Time Management Behavior Scale (TMBS;",
"title": ""
},
{
"docid": "9f7099655f70ff203c16802903e6acdc",
"text": "Segmentation of the liver from abdominal computed tomography (CT) images is an essential step in some computer-assisted clinical interventions, such as surgery planning for living donor liver transplant, radiotherapy and volume measurement. In this work, we develop a deep learning algorithm with graph cut refinement to automatically segment the liver in CT scans. The proposed method consists of two main steps: (i) simultaneously liver detection and probabilistic segmentation using 3D convolutional neural network; (ii) accuracy refinement of the initial segmentation with graph cut and the previously learned probability map. The proposed approach was validated on forty CT volumes taken from two public databases MICCAI-Sliver07 and 3Dircadb1. For the MICCAI-Sliver07 test dataset, the calculated mean ratios of volumetric overlap error (VOE), relative volume difference (RVD), average symmetric surface distance (ASD), root-mean-square symmetric surface distance (RMSD) and maximum symmetric surface distance (MSD) are 5.9, 2.7 %, 0.91, 1.88 and 18.94 mm, respectively. For the 3Dircadb1 dataset, the calculated mean ratios of VOE, RVD, ASD, RMSD and MSD are 9.36, 0.97 %, 1.89, 4.15 and 33.14 mm, respectively. The proposed method is fully automatic without any user interaction. Quantitative results reveal that the proposed approach is efficient and accurate for hepatic volume estimation in a clinical setup. The high correlation between the automatic and manual references shows that the proposed method can be good enough to replace the time-consuming and nonreproducible manual segmentation method.",
"title": ""
},
{
"docid": "c548cbfc3b1630392acd504b6e854c03",
"text": "Much of capital market research in accounting over the past 20 years has assumed that the price adjustment process to information is instantaneous and/or trivial. This assumption has had an enormous influence on the way we select research topics, design empirical tests, and interpret research findings. In this discussion, I argue that price discovery is a complex process, deserving of more attention. I highlight significant problems associated with a na.ıve view of market efficiency, and advocate a more general model involving noise traders. Finally, I discuss the implications of recent evidence against market efficiency for future research. r 2001 Elsevier Science B.V. All rights reserved. JEL classification: M4; G0; B2; D8",
"title": ""
},
{
"docid": "6639c05f14e220f4555c664b0c7b0466",
"text": "Previous attempts for data augmentation are designed manually, and the augmentation policies are dataset-specific. Recently, an automatic data augmentation approach, named AutoAugment, is proposed using reinforcement learning. AutoAugment searches for the augmentation polices in the discrete search space, which may lead to a sub-optimal solution. In this paper, we employ the Augmented Random Search method (ARS) to improve the performance of AutoAugment. Our key contribution is to change the discrete search space to continuous space, which will improve the searching performance and maintain the diversities between sub-policies. With the proposed method, state-of-the-art accuracies are achieved on CIFAR-10, CIFAR-100, and ImageNet (without additional data). Our code is available at https://github.com/gmy2013/ARS-Aug.",
"title": ""
},
{
"docid": "2781df07db142da8eefbe714631a59b2",
"text": "Snapchat is a social media platform that allows users to send images, videos, and text with a specified amount of time for the receiver(s) to view the content before it becomes permanently inaccessible to the receiver. Using focus group methodology and in-depth interviews, the current study sought to understand young adult (18e23 years old; n 1⁄4 34) perceptions of how Snapchat behaviors influenced their interpersonal relationships (family, friends, and romantic). Young adults indicated that Snapchat served as a double-edged swordda communication modality that could lead to relational challenges, but also facilitate more congruent communication within young adult interpersonal relationships. © 2016 Elsevier Ltd. All rights reserved. Technology is now a regular part of contemporary young adult (18e25 years old) life (Coyne, Padilla-Walker, & Howard, 2013; Vaterlaus, Jones, Patten, & Cook, 2015). With technological convergence (i.e. accessibility of multiple media on one device; Brown & Bobkowski, 2011) young adults can access both entertainment media (e.g., television, music) and social media (e.g., social networking, text messaging) on a single device. Among adults, smartphone ownership is highest among young adults (85% of 18e29 year olds; Smith, 2015). Perrin (2015) reported that 90% of young adults (ages 18e29) use social media. Facebook remains the most popular social networking platform, but several new social media apps (i.e., applications) have begun to gain popularity among young adults (e.g., Twitter, Instagram, Pinterest; Duggan, Ellison, Lampe, Lenhart, & Madden, 2015). Considering the high frequency of social media use, Subrahmanyam and Greenfield (2008) have advocated for more research on how these technologies influence interpersonal relationships. The current exploratory study aterlaus), Kathryn_barnett@ (C. Roche), youngja2@unk. was designed to understand the perceived role of Snapchat (see www.snapchat.com) in young adults' interpersonal relationships (i.e. family, social, and romantic). 1. Theoretical framework Uses and Gratifications Theory (U&G) purports that media and technology users are active, self-aware, and goal directed (Katz, Blumler, & Gurevitch, 1973). Technology consumers link their need gratification with specific technology options, which puts different technology sources in competition with one another to satisfy a consumer's needs. Since the emergence of U&G nearly 80 years ago, there have been significant advances in media and technology, which have resulted in many more media and technology options for consumers (Ruggiero, 2000). Writing about the internet and U&G in 2000, Roggiero forecasted: “If the internet is a technology that many predict will be genuinely transformative, it will lead to profound changes in media users' personal and social habits and roles” (p.28). Advances in accessibility to the internet and the development of social media, including Snapchat, provide support for the validity of this prediction. Despite the advances in technology, the needs users seek to gratify are likely more consistent over time. Supporting this point Katz, Gurevitch, and Haas J.M. Vaterlaus et al. / Computers in Human Behavior 62 (2016) 594e601 595",
"title": ""
},
{
"docid": "88abea475884eeec1049a573d107c6c9",
"text": "This paper extends the traditional pinhole camera projection geometry used in computer graphics to a more realistic camera model which approximates the effects of a lens and an aperture function of an actual camera. This model allows the generation of synthetic images which have a depth of field and can be focused on an arbitrary plane; it also permits selective modeling of certain optical characteristics of a lens. The model can be expanded to include motion blur and special-effect filters. These capabilities provide additional tools for highlighting important areas of a scene and for portraying certain physical characteristics of an object in an image.",
"title": ""
},
{
"docid": "14c278147defd19feb4e18d31a3fdcfb",
"text": "Efficient provisioning of resources is a challenging problem in cloud computing environments due to its dynamic nature and the need for supporting heterogeneous applications with different performance requirements. Currently, cloud datacenter providers either do not offer any performance guarantee or prefer static VM allocation over dynamic, which lead to inefficient utilization of resources. Earlier solutions, concentrating on a single type of SLAs (Service Level Agreements) or resource usage patterns of applications, are not suitable for cloud computing environments. In this paper, we tackle the resource allocation problem within a datacenter that runs different type of application workloads, particularly non-interactive and transactional applications. We propose admission control and scheduling mechanism which not only maximizes the resource utilization and profit, but also ensures the SLA requirements of users. In our experimental study, the proposed mechanism has shown to provide substantial improvement over static server consolidation and reduces SLA Violations.",
"title": ""
},
{
"docid": "a9baecb9470242c305942f7bc98494ab",
"text": "This paper summaries the state-of-the-art of image quality assessment (IQA) and human visual system (HVS). IQA provides an objective index or real value to measure the quality of the specified image. Since human beings are the ultimate receivers of visual information in practical applications, the most reliable IQA is to build a computational model to mimic the HVS. According to the properties and cognitive mechanism of the HVS, the available HVS-based IQA methods can be divided into two categories, i.e., bionics methods and engineering methods. This paper briefly introduces the basic theories and development histories of the above two kinds of HVS-based IQA methods. Finally, some promising research issues are pointed out in the end of the paper.",
"title": ""
},
{
"docid": "1a5c009f059ea28fd2d692d1de4eb913",
"text": "We present CROSSGRAD, a method to use multi-domain training data to learn a classifier that generalizes to new domains. CROSSGRAD does not need an adaptation phase via labeled or unlabeled data, or domain features in the new domain. Most existing domain adaptation methods attempt to erase domain signals using techniques like domain adversarial training. In contrast, CROSSGRAD is free to use domain signals for predicting labels, if it can prevent overfitting on training domains. We conceptualize the task in a Bayesian setting, in which a sampling step is implemented as data augmentation, based on domain-guided perturbations of input instances. CROSSGRAD parallelly trains a label and a domain classifier on examples perturbed by loss gradients of each other’s objectives. This enables us to directly perturb inputs, without separating and re-mixing domain signals while making various distributional assumptions. Empirical evaluation on three different applications where this setting is natural establishes that (1) domain-guided perturbation provides consistently better generalization to unseen domains, compared to generic instance perturbation methods, and that (2) data augmentation is a more stable and accurate method than domain adversarial training.",
"title": ""
},
{
"docid": "39bf7e3a8e75353a3025e2c0f18768f9",
"text": "Ligament reconstruction is the current standard of care for active patients with an anterior cruciate ligament (ACL) rupture. Although the majority of ACL reconstruction (ACLR) surgeries successfully restore the mechanical stability of the injured knee, postsurgical outcomes remain widely varied. Less than half of athletes who undergo ACLR return to sport within the first year after surgery, and it is estimated that approximately 1 in 4 to 1 in 5 young, active athletes who undergo ACLR will go on to a second knee injury. The outcomes after a second knee injury and surgery are significantly less favorable than outcomes after primary injuries. As advances in graft reconstruction and fixation techniques have improved to consistently restore passive joint stability to the preinjury level, successful return to sport after ACLR appears to be predicated on numerous postsurgical factors. Importantly, a secondary ACL injury is most strongly related to modifiable postsurgical risk factors. Biomechanical abnormalities and movement asymmetries, which are more prevalent in this cohort than previously hypothesized, can persist despite high levels of functional performance, and also represent biomechanical and neuromuscular control deficits and imbalances that are strongly associated with secondary injury incidence. Decreased neuromuscular control and high-risk movement biomechanics, which appear to be heavily influenced by abnormal trunk and lower extremity movement patterns, not only predict first knee injury risk but also reinjury risk. These seminal findings indicate that abnormal movement biomechanics and neuromuscular control profiles are likely both residual to, and exacerbated by, the initial injury. Evidence-based medicine (EBM) strategies should be used to develop effective, efficacious interventions targeted to these impairments to optimize the safe return to high-risk activity. In this Current Concepts article, the authors present the latest evidence related to risk factors associated with ligament failure or a secondary (contralateral) injury in athletes who return to sport after ACLR. From these data, they propose an EBM paradigm shift in postoperative rehabilitation and return-to-sport training after ACLR that is focused on the resolution of neuromuscular deficits that commonly persist after surgical reconstruction and standard rehabilitation of athletes.",
"title": ""
},
{
"docid": "d0f187a8f7f6d4a6f8061a486f89c6bd",
"text": "The science of ecology was born from the expansive curiosity of the biologists of the late 19th century, who wished to understand the distribution, abundance and interactions of the earth's organisms. Why do we have so many species, and why not more, they asked--and what causes them to be distributed as they are? What are the characteristics of a biological community that cause it to recover in a particular way after a disturbance?",
"title": ""
},
{
"docid": "edb0442d3e3216a5e1add3a03b05858a",
"text": "The resilience perspective is increasingly used as an approach for understanding the dynamics of social–ecological systems. This article presents the origin of the resilience perspective and provides an overview of its development to date. With roots in one branch of ecology and the discovery of multiple basins of attraction in ecosystems in the 1960–1970s, it inspired social and environmental scientists to challenge the dominant stable equilibrium view. The resilience approach emphasizes non-linear dynamics, thresholds, uncertainty and surprise, how periods of gradual change interplay with periods of rapid change and how such dynamics interact across temporal and spatial scales. The history was dominated by empirical observations of ecosystem dynamics interpreted in mathematical models, developing into the adaptive management approach for responding to ecosystem change. Serious attempts to integrate the social dimension is currently taking place in resilience work reflected in the large numbers of sciences involved in explorative studies and new discoveries of linked social–ecological systems. Recent advances include understanding of social processes like, social learning and social memory, mental models and knowledge–system integration, visioning and scenario building, leadership, agents and actor groups, social networks, institutional and organizational inertia and change, adaptive capacity, transformability and systems of adaptive governance that allow for management of essential ecosystem services. r 2006 Published by Elsevier Ltd.",
"title": ""
},
{
"docid": "f0fa6c2b9216192ed0cf419e9f3c9666",
"text": "Primary task of a recommender system is to improve user’s experience by recommending relevant and interesting items to the users. To this effect, diversity in item suggestion is as important as the accuracy of recommendations. Existing literature aimed at improving diversity primarily suggests a 2-stage mechanism – an existing CF scheme for rating prediction, followed by a modified ranking strategy. This approach requires heuristic selection of parameters and ranking strategies. Also most works focus on diversity from either the user or system’s perspective. In this work, we propose a single stage optimization based solution to achieve high diversity while maintaining requisite levels of accuracy. We propose to incorporate additional diversity enhancing constraints, in the matrix factorization model for collaborative filtering. However, unlike traditional MF scheme generating dense user and item latent factor matrices, our base MF model recovers a dense user and a sparse item latent factor matrix; based on a recent work. The idea is motivated by the fact that although a user will demonstrate some affinity towards all latent factors, an item will never possess all features; thereby yielding a sparse structure. We also propose an algorithm for our formulation. The superiority of our model over existing state of the art techniques is demonstrated by the results of experiments conducted on real world movie database. © 2016 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "dd97e87fee154f610e406c1cf9170abe",
"text": "Magnetically-driven micrometer to millimeter-scale robotic devices have recently shown great capabilities for remote applications in medical procedures, in microfluidic tools and in microfactories. Significant effort recently has been on the creation of mobile or stationary devices with multiple independently-controllable degrees of freedom (DOF) for multiagent or complex mechanism motions. In most applications of magnetic microrobots, however, the relatively large distance from the field generation source and the microscale devices results in controlling magnetic field signals which are applied homogeneously over all agents. While some progress has been made in this area allowing up to six independent DOF to be individually commanded, there has been no rigorous effort in determining the maximum achievable number of DOF for systems with homogeneous magnetic field input. In this work, we show that this maximum is eight and we introduce the theoretical basis for this conclusion, relying on the number of independent usable components in a magnetic field at a point. In order to verify the claim experimentally, we develop a simple demonstration mechanism with 8 DOF designed specifically to show independent actuation. Using this mechanism with $500 \\mu \\mathrm{m}$ magnetic elements, we demonstrate eight independent motions of 0.6 mm with 8.6 % coupling using an eight coil system. These results will enable the creation of richer outputs in future microrobotic devices.",
"title": ""
},
{
"docid": "b1272039194d07ff9b7568b7f295fbfb",
"text": "Protein catalysis requires the atomic-level orchestration of side chains, substrates and cofactors, and yet the ability to design a small-molecule-binding protein entirely from first principles with a precisely predetermined structure has not been demonstrated. Here we report the design of a novel protein, PS1, that binds a highly electron-deficient non-natural porphyrin at temperatures up to 100 °C. The high-resolution structure of holo-PS1 is in sub-Å agreement with the design. The structure of apo-PS1 retains the remote core packing of the holoprotein, with a flexible binding region that is predisposed to ligand binding with the desired geometry. Our results illustrate the unification of core packing and binding-site definition as a central principle of ligand-binding protein design.",
"title": ""
}
] | scidocsrr |
0693209386b1531a62d4e5726c021392 | Loughborough University Institutional Repository Understanding Generation Y and their use of social media : a review and research agenda | [
{
"docid": "b4880ddb59730f465f585f3686d1d2b1",
"text": "The authors study the effect of word-of-mouth (WOM) marketing on member growth at an Internet social networking site and compare it with traditional marketing vehicles. Because social network sites record the electronic invitations sent out by existing members, outbound WOM may be precisely tracked. WOM, along with traditional marketing, can then be linked to the number of new members subsequently joining the site (signups). Due to the endogeneity among WOM, new signups, and traditional marketing activity, the authors employ a Vector Autoregression (VAR) modeling approach. Estimates from the VAR model show that word-ofmouth referrals have substantially longer carryover effects than traditional marketing actions. The long-run elasticity of signups with respect to WOM is estimated to be 0.53 (substantially larger than the average advertising elasticities reported in the literature) and the WOM elasticity is about 20 times higher than the elasticity for marketing events, and 30 times that of media appearances. Based on revenue from advertising impressions served to a new member, the monetary value of a WOM referral can be calculated; this yields an upper bound estimate for the financial incentives the firm might offer to stimulate word-of-mouth.",
"title": ""
}
] | [
{
"docid": "fe397e4124ef517268aaabd999bc02c4",
"text": "A new frequency-reconfigurable quasi-Yagi dipole antenna is presented. It consists of a driven dipole element with two varactors in two arms, a director with an additional varactor, a truncated ground plane reflector, a microstrip-to-coplanar-stripline (CPS) transition, and a novel biasing circuit. The effective electrical length of the director element and that of the driven arms are adjusted together by changing the biasing voltages. A 35% continuously frequency-tuning bandwidth, from 1.80 to 2.45 GHz, is achieved. This covers a number of wireless communication systems, including 3G UMTS, US WCS, and WLAN. The length-adjustable director allows the endfire pattern with relatively high gain to be maintained over the entire tuning bandwidth. Measured results show that the gain varies from 5.6 to 7.6 dBi and the front-to-back ratio is better than 10 dB. The H-plane cross polarization is below -15 dB, and that in the E-plane is below -20 dB.",
"title": ""
},
{
"docid": "7e1c0505e40212ef0e8748229654169f",
"text": "This article addresses the concept of quality risk in outsourcing. Recent trends in outsourcing extend a contract manufacturer’s (CM’s) responsibility to several functional areas, such as research and development and design in addition to manufacturing. This trend enables an original equipment manufacturer (OEM) to focus on sales and pricing of its product. However, increasing CM responsibilities also suggest that the OEM’s product quality is mainly determined by its CM. We identify two factors that cause quality risk in this outsourcing relationship. First, the CM and the OEM may not be able to contract on quality; second, the OEM may not know the cost of quality to the CM. We characterize the effects of these two quality risk factors on the firms’ profits and on the resulting product quality. We determine how the OEM’s pricing strategy affects quality risk. We show, for example, that the effect of noncontractible quality is higher than the effect of private quality cost information when the OEM sets the sales price after observing the product’s quality. We also show that committing to a sales price mitigates the adverse effect of quality risk. To obtain these results, we develop and analyze a three-stage decision model. This model is also used to understand the impact of recent information technologies on profits and product quality. For example, we provide a decision tree that an OEM can use in deciding whether to invest in an enterprise-wide quality management system that enables accounting of quality-related activities across the supply chain. © 2009 Wiley Periodicals, Inc. Naval Research Logistics 56: 669–685, 2009",
"title": ""
},
{
"docid": "46072702edbe5177e48510fe37b77943",
"text": "Due to the explosive increase of online images, content-based image retrieval has gained a lot of attention. The success of deep learning techniques such as convolutional neural networks have motivated us to explore its applications in our context. The main contribution of our work is a novel end-to-end supervised learning framework that learns probability-based semantic-level similarity and feature-level similarity simultaneously. The main advantage of our novel hashing scheme that it is able to reduce the computational cost of retrieval significantly at the state-of-the-art efficiency level. We report on comprehensive experiments using public available datasets such as Oxford, Holidays and ImageNet 2012 retrieval datasets.",
"title": ""
},
{
"docid": "7d0020ff1a7500df1458ddfd568db7b4",
"text": "In this position paper, we address the problems of automated road congestion detection and alerting systems and their security properties. We review different theoretical adaptive road traffic control approaches, and three widely deployed adaptive traffic control systems (ATCSs), namely, SCATS, SCOOT and InSync. We then discuss some related research questions, and the corresponding possible approaches, as well as the adversary model and potential attack scenarios. Two theoretical concepts of automated road congestion alarm systems (including system architecture, communication protocol, and algorithms) are proposed on top of ATCSs, such as SCATS, SCOOT and InSync, by incorporating secure wireless vehicle-to-infrastructure (V2I) communications. Finally, the security properties of the proposed system have been discussed and analysed using the ProVerif protocol verification tool.",
"title": ""
},
{
"docid": "0882fc46d918957e73d0381420277bdc",
"text": "The term ‘resource use efficiency in agriculture’ may be broadly defined to include the concepts of technical efficiency, allocative efficiency and environmental efficiency. An efficient farmer allocates his land, labour, water and other resources in an optimal manner, so as to maximise his income, at least cost, on sustainable basis. However, there are countless studies showing that farmers often use their resources sub-optimally. While some farmers may attain maximum physical yield per unit of land at a high cost, some others achieve maximum profit per unit of inputs used. Also in the process of achieving maximum yield and returns, some farmers may ignore the environmentally adverse consequences, if any, of their resource use intensity. Logically all enterprising farmers would try to maximise their farm returns by allocating resources in an efficient manner. But as resources (both qualitatively and quantitatively) and managerial efficiency of different farmers vary widely, the net returns per unit of inputs used also vary significantly from farm to farm. Also a farmer’s access to technology, credit, market and other infrastructure and policy support, coupled with risk perception and risk management capacity under erratic weather and price situations would determine his farm efficiency. Moreover, a farmer knowingly or unknowingly may over-exploit his land and water resources for maximising farm income in the short run, thereby resulting in soil and water degradation and rapid depletion of ground water, and also posing a problem of sustainability of agriculture in the long run. In fact, soil degradation, depletion of groundwater and water pollution due to farmers’ managerial inefficiency or otherwise, have a social cost, while farmers who forego certain agricultural practices which cause any such sustainability problem may have a high opportunity cost. Furthermore, a farmer may not be often either fully aware or properly guided and aided for alternative, albeit best possible uses of his scarce resources like land and water. Thus, there are economic as well as environmental aspects of resource use efficiency. In addition, from the point of view of public exchequer, the resource use efficiency would mean that public investment, subsidies and credit for agriculture are",
"title": ""
},
{
"docid": "f611ccffbe10acb7dcbd6cb8f7ffaeaa",
"text": "We study the problem of single-image depth estimation for images in the wild. We collect human annotated surface normals and use them to help train a neural network that directly predicts pixel-wise depth. We propose two novel loss functions for training with surface normal annotations. Experiments on NYU Depth, KITTI, and our own dataset demonstrate that our approach can significantly improve the quality of depth estimation in the wild.",
"title": ""
},
{
"docid": "6cf2ffb0d541320b1ad04dc3b9e1c9a4",
"text": "Prediction of potential fraudulent activities may prevent both the stakeholders and the appropriate regulatory authorities of national or international level from being deceived. The objective difficulties on collecting adequate data that are obsessed by completeness affects the reliability of the most supervised Machine Learning methods. This work examines the effectiveness of forecasting fraudulent financial statements using semi-supervised classification techniques (SSC) that require just a few labeled examples for achieving robust learning behaviors mining useful data patterns from a larger pool of unlabeled examples. Based on data extracted from Greek firms, a number of comparisons between supervised and semi-supervised algorithms has been conducted. According to the produced results, the later algorithms are favored being examined over several scenarios of different Labeled Ratio (R) values.",
"title": ""
},
{
"docid": "b4ed57258b85ab4d81d5071fc7ad2cc9",
"text": "We present LEAR (Lexical Entailment AttractRepel), a novel post-processing method that transforms any input word vector space to emphasise the asymmetric relation of lexical entailment (LE), also known as the IS-A or hyponymy-hypernymy relation. By injecting external linguistic constraints (e.g., WordNet links) into the initial vector space, the LE specialisation procedure brings true hyponymyhypernymy pairs closer together in the transformed Euclidean space. The proposed asymmetric distance measure adjusts the norms of word vectors to reflect the actual WordNetstyle hierarchy of concepts. Simultaneously, a joint objective enforces semantic similarity using the symmetric cosine distance, yielding a vector space specialised for both lexical relations at once. LEAR specialisation achieves state-of-the-art performance in the tasks of hypernymy directionality, hypernymy detection, and graded lexical entailment, demonstrating the effectiveness and robustness of the proposed asymmetric specialisation model.",
"title": ""
},
{
"docid": "04756d4dfc34215c8acb895ecfcfb406",
"text": "The author describes five separate projects he has undertaken in the intersection of computer science and Canadian income tax law. They are:A computer-assisted instruction (CAI) course for teaching income tax, programmed using conventional CAI techniques;\nA “document modeling” computer program for generating the documentation for a tax-based transaction and advising the lawyer-user as to what decisions should be made and what the tax effects will be, programmed in a conventional language;\nA prototype expert system for determining the income tax effects of transactions and tax-defined relationships, based on a PROLOG representation of the rules of the Income Tax Act;\nAn intelligent CAI (ICAI) system for generating infinite numbers of randomized quiz questions for students, computing the answers, and matching wrong answers to particular student errors, based on a PROLOG representation of the rules of the Income Tax Act; and\nA Hypercard stack for providing information about income tax, enabling both education and practical research to follow the user's needs path.\n\nThe author shows that non-AI approaches are a way to produce packages quickly and efficiently. Their primary disadvantage is the massive rewriting required when the tax law changes. AI approaches based on PROLOG, on the other hand, are harder to develop to a practical level but will be easier to audit and maintain. The relationship between expert systems and CAI is discussed.",
"title": ""
},
{
"docid": "9500dfc92149c5a808cec89b140fc0c3",
"text": "We present a new approach to the geometric alignment of a point cloud to a surface and to related registration problems. The standard algorithm is the familiar ICP algorithm. Here we provide an alternative concept which relies on instantaneous kinematics and on the geometry of the squared distance function of a surface. The proposed algorithm exhibits faster convergence than ICP; this is supported both by results of a local convergence analysis and by experiments.",
"title": ""
},
{
"docid": "a2258145e9366bfbf515b3949b2d70fa",
"text": "Affect intensity (AI) may reconcile 2 seemingly paradoxical findings: Women report more negative affect than men but equal happiness as men. AI describes people's varying response intensity to identical emotional stimuli. A college sample of 66 women and 34 men was assessed on both positive and negative affect using 4 measurement methods: self-report, peer report, daily report, and memory performance. A principal-components analysis revealed an affect balance component and an AI component. Multimeasure affect balance and AI scores were created, and t tests were computed that showed women to be as happy as and more intense than men. Gender accounted for less than 1% of the variance in happiness but over 13% in AI. Thus, depression findings of more negative affect in women do not conflict with well-being findings of equal happiness across gender. Generally, women's more intense positive emotions balance their higher negative affect.",
"title": ""
},
{
"docid": "47505c95f8a3cf136b3b5a76847990fc",
"text": "We present a hybrid algorithm to compute the convex hull of points in three or higher dimensional spaces. Our formulation uses a GPU-based interior point filter to cull away many of the points that do not lie on the boundary. The convex hull of remaining points is computed on a CPU. The GPU-based filter proceeds in an incremental manner and computes a pseudo-hull that is contained inside the convex hull of the original points. The pseudo-hull computation involves only localized operations and maps well to GPU architectures. Furthermore, the underlying approach extends to high dimensional point sets and deforming points. In practice, our culling filter can reduce the number of candidate points by two orders of magnitude. We have implemented the hybrid algorithm on commodity GPUs, and evaluated its performance on several large point sets. In practice, the GPU-based filtering algorithm can cull up to 85M interior points per second on an NVIDIA GeForce GTX 580 and the hybrid algorithm improves the overall performance of convex hull computation by 10 − 27 times (for static point sets) and 22 − 46 times (for deforming point sets).",
"title": ""
},
{
"docid": "a83ba31bdf54c9dec09788bfb1c972fc",
"text": "In 1999, ISPOR formed the Quality of Life Special Interest group (QoL-SIG)--Translation and Cultural Adaptation group (TCA group) to stimulate discussion on and create guidelines and standards for the translation and cultural adaptation of patient-reported outcome (PRO) measures. After identifying a general lack of consistency in current methods and published guidelines, the TCA group saw a need to develop a holistic perspective that synthesized the full spectrum of published methods. This process resulted in the development of Translation and Cultural Adaptation of Patient Reported Outcomes Measures--Principles of Good Practice (PGP), a report on current methods, and an appraisal of their strengths and weaknesses. The TCA Group undertook a review of evidence from current practice, a review of the literature and existing guidelines, and consideration of the issues facing the pharmaceutical industry, regulators, and the broader outcomes research community. Each approach to translation and cultural adaptation was considered systematically in terms of rationale, components, key actors, and the potential benefits and risks associated with each approach and step. The results of this review were subjected to discussion and challenge within the TCA group, as well as consultation with the outcomes research community at large. Through this review, a consensus emerged on a broad approach, along with a detailed critique of the strengths and weaknesses of the differing methodologies. The results of this review are set out as \"Translation and Cultural Adaptation of Patient Reported Outcomes Measures--Principles of Good Practice\" and are reported in this document.",
"title": ""
},
{
"docid": "ba65c99adc34e05cf0cd1b5618a21826",
"text": "We investigate a family of bugs in blockchain-based smart contracts, which we call event-ordering (or EO) bugs. These bugs are intimately related to the dynamic ordering of contract events, i.e., calls of its functions on the blockchain, and enable potential exploits of millions of USD worth of Ether. Known examples of such bugs and prior techniques to detect them have been restricted to a small number of event orderings, typicall 1 or 2. Our work provides a new formulation of this general class of EO bugs as finding concurrency properties arising in long permutations of such events. The technical challenge in detecting our formulation of EO bugs is the inherent combinatorial blowup in path and state space analysis, even for simple contracts. We propose the first use of partial-order reduction techniques, using happen-before relations extracted automatically for contracts, along with several other optimizations built on a dynamic symbolic execution technique. We build an automatic tool called ETHRACER that requires no hints from users and runs directly on Ethereum bytecode. It flags 7-11% of over ten thousand contracts analyzed in roughly 18.5 minutes per contract, providing compact event traces that human analysts can run as witnesses. These witnesses are so compact that confirmations require only a few minutes of human effort. Half of the flagged contracts have subtle EO bugs, including in ERC-20 contracts that carry hundreds of millions of dollars worth of Ether. Thus, ETHRACER is effective at detecting a subtle yet dangerous class of bugs which existing tools miss.",
"title": ""
},
{
"docid": "70c6da9da15ad40b4f64386b890ccf51",
"text": "In this paper, we describe a positioning control for a SCARA robot using a recurrent neural network. The simultaneous perturbation optimization method is used for the learning rule of the recurrent neural network. Then the recurrent neural network learns inverse dynamics of the SCARA robot. We present details of the control scheme using the simultaneous perturbation. Moreover, we consider an example for two target positions using an actual SCARA robot. The result is shown.",
"title": ""
},
{
"docid": "0ec0b6797069ee5bd737ea787cba43ef",
"text": "Evaluation of retrieval performance is a crucial problem in content-based image retrieval (CBIR). Many different methods for measuring the performance of a system have been created and used by researchers. This article discusses the advantages and shortcomings of the performance measures currently used. Problems such as a common image database for performance comparisons and a means of getting relevance judgments (or ground truth) for queries are explained. The relationship between CBIR and information retrieval (IR) is made clear, since IR researchers have decades of experience with the evaluation problem. Many of their solutions can be used for CBIR, despite the differences between the fields. Several methods used in text retrieval are explained. Proposals for performance measures and means of developing a standard test suite for CBIR, similar to that used in IR at the annual Text REtrieval Conference (TREC), are presented. MULLER, Henning, et al. Performance Evaluation in Content-Based Image Retrieval: Overview and Proposals. Genève : 1999",
"title": ""
},
{
"docid": "c26e9f486621e37d66bf0925d8ff2a3e",
"text": "We report the first two Malaysian children with partial deletion 9p syndrome, a well delineated but rare clinical entity. Both patients had trigonocephaly, arching eyebrows, anteverted nares, long philtrum, abnormal ear lobules, congenital heart lesions and digital anomalies. In addition, the first patient had underdeveloped female genitalia and anterior anus. The second patient had hypocalcaemia and high arched palate and was initially diagnosed with DiGeorge syndrome. Chromosomal analysis revealed a partial deletion at the short arm of chromosome 9. Karyotyping should be performed in patients with craniostenosis and multiple abnormalities as an early syndromic diagnosis confers prognostic, counselling and management implications.",
"title": ""
},
{
"docid": "c9c98e50a49bbc781047dc425a2d6fa1",
"text": "Understanding wound healing today involves much more than simply stating that there are three phases: \"inflammation, proliferation, and maturation.\" Wound healing is a complex series of reactions and interactions among cells and \"mediators.\" Each year, new mediators are discovered and our understanding of inflammatory mediators and cellular interactions grows. This article will attempt to provide a concise report of the current literature on wound healing by first reviewing the phases of wound healing followed by \"the players\" of wound healing: inflammatory mediators (cytokines, growth factors, proteases, eicosanoids, kinins, and more), nitric oxide, and the cellular elements. The discussion will end with a pictorial essay summarizing the wound-healing process.",
"title": ""
},
{
"docid": "ceedf70c92099fc8612a38f91f2c9507",
"text": "Recent work has demonstrated the value of social media monitoring for health surveillance (e.g., tracking influenza or depression rates). It is an open question whether such data can be used to make causal inferences (e.g., determining which activities lead to increased depression rates). Even in traditional, restricted domains, estimating causal effects from observational data is highly susceptible to confounding bias. In this work, we estimate the effect of exercise on mental health from Twitter, relying on statistical matching methods to reduce confounding bias. We train a text classifier to estimate the volume of a user’s tweets expressing anxiety, depression, or anger, then compare two groups: those who exercise regularly (identified by their use of physical activity trackers like Nike+), and a matched control group. We find that those who exercise regularly have significantly fewer tweets expressing depression or anxiety; there is no significant difference in rates of tweets expressing anger. We additionally perform a sensitivity analysis to investigate how the many experimental design choices in such a study impact the final conclusions, including the quality of the classifier and the construction of the control group.",
"title": ""
},
{
"docid": "fd32bf580b316634e44a8c37adfab2eb",
"text": "In a previous paper we reported the successful use of graph coloring techniques for doing global register allocation in an experimental PL/I optimizing compiler. When the compiler cannot color the register conflict graph with a number of colors equal to the number of available machine registers, it must add code to spill and reload registers to and from storage. Previously the compiler produced spill code whose quality sometimes left much to be desired, and the ad hoc techniques used took considerable amounts of compile time. We have now discovered how to extend the graph coloring approach so that it naturally solves the spilling problem. Spill decisions are now made on the basis of the register conflict graph and cost estimates of the value of keeping the result of a computation in a register rather than in storage. This new approach produces better object code and takes much less compile time.",
"title": ""
}
] | scidocsrr |
103788d6f36997cc1e6cd103155e537d | A survey of data mining techniques for analyzing crime patterns | [
{
"docid": "f074965ee3a1d6122f1e68f49fd11d84",
"text": "Data mining is the extraction of knowledge from large databases. One of the popular data mining techniques is Classification in which different objects are classified into different classes depending on the common properties among them. Decision Trees are widely used in Classification. This paper proposes a tool which applies an enhanced Decision Tree Algorithm to detect the suspicious e-mails about the criminal activities. An improved ID3 Algorithm with enhanced feature selection method and attribute- importance factor is applied to generate a better and faster Decision Tree. The objective is to detect the suspicious criminal activities and minimize them. That's why the tool is named as “Z-Crime” depicting the “Zero Crime” in the society. This paper aims at highlighting the importance of data mining technology to design proactive application to detect the suspicious criminal activities.",
"title": ""
},
{
"docid": "bbdb4a930ef77f91e8d76dd3a7e0f506",
"text": "Fast and high-quality document clustering algorithms play an important role in providing intuitive navigation and browsing mechanisms by organizing large amounts of information into a small number of meaningful clusters. In particular, hierarchical clustering solutions provide a view of the data at different levels of granularity, making them ideal for people to visualize and interactively explore large document collections.In this paper we evaluate different partitional and agglomerative approaches for hierarchical clustering. Our experimental evaluation showed that partitional algorithms always lead to better clustering solutions than agglomerative algorithms, which suggests that partitional clustering algorithms are well-suited for clustering large document datasets due to not only their relatively low computational requirements, but also comparable or even better clustering performance. We present a new class of clustering algorithms called constrained agglomerative algorithms that combine the features of both partitional and agglomerative algorithms. Our experimental results showed that they consistently lead to better hierarchical solutions than agglomerative or partitional algorithms alone.",
"title": ""
}
] | [
{
"docid": "3023637fd498bb183dae72135812c304",
"text": "computational method for its solution. A Psychological Description of LSA as a Theory of Learning, Memory, and Knowledge We give a more complete description of LSA as a mathematical model later when we use it to simulate lexical acquisition. However, an overall outline is necessary to understand a roughly equivalent psychological theory we wish to present first. The input to LSA is a matrix consisting of rows representing unitary event types by columns representing contexts in which instances of the event types appear. One example is a matrix of unique word types by many individual paragraphs in which the words are encountered, where a cell contains the number of times that a particular word type, say model, appears in a particular paragraph, say this one. After an initial transformation of the cell entries, this matrix is analyzed by a statistical technique called singular value decomposition (SVD) closely akin to factor analysis, which allows event types and individual contexts to be re-represented as points or vectors in a high dimensional abstract space (Golub, Lnk, & Overton, 1981 ). The final output is a representation from which one can calculate similarity measures between all pairs consisting of either event types or con-space (Golub, Lnk, & Overton, 1981 ). The final output is a representation from which one can calculate similarity measures between all pairs consisting of either event types or contexts (e.g., word-word, word-paragraph, or paragraph-paragraph similarities). Psychologically, the data that the model starts with are raw, first-order co-occurrence relations between stimuli and the local contexts or episodes in which they occur. The stimuli or event types may be thought of as unitary chunks of perception or memory. The first-order process by which initial pairwise associations are entered and transformed in LSA resembles classical conditioning in that it depends on contiguity or co-occurrence, but weights the result first nonlinearly with local occurrence frequency, then inversely with a function of the number of different contexts in which the particular component is encountered overall and the extent to which its occurrences are spread evenly over contexts. However, there are possibly important differences in the details as currently implemented; in particular, LSA associations are symmetrical; a context is associated with the individual events it contains by the same cell entry as the events are associated with the context. This would not be a necessary feature of the model; it would be possible to make the initial matrix asymmetrical, with a cell indicating the co-occurrence relation, for example, between a word and closely following words. Indeed, Lund and Burgess (in press; Lund, Burgess, & Atchley, 1995), and SchUtze (1992a, 1992b), have explored related models in which such data are the input. The first step of the LSA analysis is to transform each cell entry from the number of times that a word appeared in a particular context to the log of that frequency. This approximates the standard empirical growth functions of simple learning. The fact that this compressive function begins anew with each context also yields a kind of spacing effect; the association of A and B is greater if both appear in two different contexts than if they each appear twice in one context. In a second transformation, all cell entries for a given word are divided by the entropy for that word, Z p log p over all its contexts. 
Roughly speaking, this step accomplishes much the same thing as conditioning rules such as those described by Rescorla & Wagner (1972), in that it makes the primary association better represent the informative relation between the entities rather than the mere fact that they occurred together. Somewhat more formally, the inverse entropy measure estimates the degree to which observing the occurrence of a component specifies what context it is in; the larger the entropy of, say, a word, the less information its observation transmits about the places it has occurred, so the less usage-defined meaning it acquires, and conversely, the less the meaning of a particular context is determined by containing the word. It is interesting to note that automatic information retrieval methods (including LSA when used for the purpose) are greatly improved by transformations of this general form, the present one usually appearing to be the best (Harman, 1986). It does not seem far-fetched to believe that the necessary transform for good information retrieval, retrieval that brings back text corresponding to what a person has in mind when the person offers one or more query words, corresponds to the functional relations in basic associative processes. Anderson (1990) has drawn attention to the analogy between information retrieval in external systems and those in the human mind. It is not clear which way the relationship goes. Does information retrieval in automatic systems work best when it mimics the circumstances that make people think two things are related, or is there a general logic that tends to make them have similar forms? In automatic information retrieval the logic is usually assumed to be that idealized searchers have in mind exactly the same text as they would like the system to find and draw the words in their queries from that text (see Bookstein & Swanson, 1974). [Footnote 2: Although this exploratory process takes some advantage of chance, there is no reason why any number of dimensions should be much better than any other unless some mechanism like the one proposed is at work. In all cases, the model's remaining parameters were fitted only to its input (training) data and not to the criterion (generalization) test.] Then the system's challenge is to estimate the probability that each text in its store is the one that the searcher was thinking about. This characterization, then, comes full circle to the kind of communicative agreement model we outlined above: The sender issues a word chosen to express a meaning he or she has in mind, and the receiver tries to estimate the probability of each of the sender's possible messages. Gallistel (1990) has argued persuasively for the need to separate local conditioning or associative processes from global representation of knowledge. The LSA model expresses such a separation in a very clear and precise way. The initial matrix after transformation to log frequency divided by entropy represents the product of the local or pairwise processes. The subsequent analysis and dimensionality reduction takes all of the previously acquired local information and turns it into a unified representation of knowledge. Thus, the first processing step of the model, modulo its associational symmetry, is a rough approximation to conditioning or associative processes.
However, the model's next steps, the singular value decomposition and dimensionality optimization, are not contained as such in any extant psychological theory of learning, although something of the kind may be hinted at in some modern discussions of conditioning and, on a smaller scale and differently interpreted, is often implicit and sometimes explicit in many neural net and spreading-activation architectures. This step converts the transformed associative data into a condensed representation. The condensed representation can be seen as achieving several things, although they are at heart the result of only one mechanism. First, the re-representation captures indirect, higher-order associations. That is, if a particular stimulus, X (e.g., a word), has been associated with some other stimulus, Y, by being frequently found in joint context (i.e., contiguity), and Y is associated with Z, then the condensation can cause X and Z to have similar representations. However, the strength of the indirect XZ association depends on much more than a combination of the strengths of XY and YZ. This is because the relation between X and Z also depends, in a well-specified manner, on the relation of each of the stimuli, X, Y, and Z, to every other entity in the space. In the past, attempts to predict indirect associations by stepwise chaining rules have not been notably successful (see, e.g., Pollio, 1968; Young, 1968). If associations correspond to distances in space, as supposed by LSA, stepwise chaining rules would not be expected to work well; if X is two units from Y and Y is two units from Z, all we know about the distance from X to Z is that it must be between zero and four. But with data about the distances between X, Y, Z, and other points, the estimate of XZ may be greatly improved by also knowing XY and YZ. An alternative view of LSA's effects is the one given earlier, the induction of a latent higher order similarity structure (thus its name) among representations of a large collection of events. Imagine, for example, that every time a stimulus (e.g., a word) is encountered, the distance between its representation and that of every other stimulus that occurs in close proximity to it is adjusted to be slightly smaller. The adjustment is then allowed to percolate through the whole previously constructed structure of relations, each point pulling on its neighbors until all settle into a compromise configuration (physical objects, weather systems, and Hopfield nets do this too; Hopfield, 1982). It is easy to see that the resulting relation between any two representations depends not only on direct experience with them but with everything else ever experienced. Although the current mathematical implementation of LSA does not work in this incremental way, its effects are much the same. The question, then, is whether such a mechanism, when combined with the statistics of experience, produces a faithful reflection of human knowledge. Finally, to anticipate what is developed later, the computational scheme used by LSA for combining and condensing local information into a common",
"title": ""
},
{
"docid": "fe8c27e7ef05816cc4c4e2c68eeaf2f9",
"text": "Chassis cavities have recently been proposed as a new mounting position for vehicular antennas. Cavities can be concealed and potentially offer more space for antennas than shark-fin modules mounted on top of the roof. An antenna cavity for the front or rear edge of the vehicle roof is designed, manufactured and measured for 5.9 GHz. The cavity offers increased radiation in the horizontal plane and to angles below horizon, compared to cavities located in the roof center.",
"title": ""
},
{
"docid": "16c6e41746c451d66b43c5736f622cda",
"text": "In this study, we report a multimodal energy harvesting device that combines electromagnetic and piezoelectric energy harvesting mechanism. The device consists of piezoelectric crystals bonded to a cantilever beam. The tip of the cantilever beam has an attached permanent magnet which, oscillates within a stationary coil fixed to the top of the package. The permanent magnet serves two purpose (i) acts as a tip mass for the cantilever beam and lowers the resonance frequency, and (ii) acts as a core which oscillates between the inductive coils resulting in electric current generation through Faraday’s effect. Thus, this design combines the energy harvesting from two different mechanisms, piezoelectric and electromagnetic, on the same platform. The prototype system was optimized using the finite element software, ANSYS, to find the resonance frequency and stress distribution. The power generated from the fabricated prototype was found to be 0.25W using the electromagnetic mechanism and 0.25mW using the piezoelectric mechanism at 35 g acceleration and 20Hz frequency.",
"title": ""
},
{
"docid": "79798f4fbe3cffdf7c90cc5349bf0531",
"text": "When a software system starts behaving abnormally during normal operations, system administrators resort to the use of logs, execution traces, and system scanners (e.g., anti-malwares, intrusion detectors, etc.) to diagnose the cause of the anomaly. However, the unpredictable context in which the system runs and daily emergence of new software threats makes it extremely challenging to diagnose anomalies using current tools. Host-based anomaly detection techniques can facilitate the diagnosis of unknown anomalies but there is no common platform with the implementation of such techniques. In this paper, we propose an automated anomaly detection framework (Total ADS) that automatically trains different anomaly detection techniques on a normal trace stream from a software system, raise anomalous alarms on suspicious behaviour in streams of trace data, and uses visualization to facilitate the analysis of the cause of the anomalies. Total ADS is an extensible Eclipse-based open source framework that employs a common trace format to use different types of traces, a common interface to adapt to a variety of anomaly detection techniques (e.g., HMM, sequence matching, etc.). Our case study on a modern Linux server shows that Total ADS automatically detects attacks on the server, shows anomalous paths in traces, and provides forensic insights.",
"title": ""
},
{
"docid": "c7a9efee2b447cbadc149717ad7032ee",
"text": "We introduce a novel method to learn a policy from unsupervised demonstrations of a process. Given a model of the system and a set of sequences of outputs, we find a policy that has a comparable performance to the original policy, without requiring access to the inputs of these demonstrations. We do so by first estimating the inputs of the system from observed unsupervised demonstrations. Then, we learn a policy by applying vanilla supervised learning algorithms to the (estimated)input-output pairs. For the input estimation, we present a new adaptive linear estimator (AdaL-IE) that explicitly trades-off variance and bias in the estimation. As we show empirically, AdaL-IE produces estimates with lower error compared to the state-of-the-art input estimation method, (UMV-IE) [Gillijns and De Moor, 2007]. Using AdaL-IE in conjunction with imitation learning enables us to successfully learn control policies that consistently outperform those using UMV-IE.",
"title": ""
},
{
"docid": "7f0023af2f3df688aa58ae3317286727",
"text": "Time-parameterized queries (TP queries for short) retrieve (i) the actual result at the time that the query is issued, (ii) the validity period of the result given the current motion of the query and the database objects, and (iii) the change that causes the expiration of the result. Due to the highly dynamic nature of several spatio-temporal applications, TP queries are important both as standalone methods, as well as building blocks of more complex operations. However, little work has been done towards their efficient processing. In this paper, we propose a general framework that covers time-parameterized variations of the most common spatial queries, namely window queries, k-nearest neighbors and spatial joins. In particular, each of these TP queries is reduced to nearest neighbor search where the distance functions are defined according to the query type. This reduction allows the application and extension of well-known branch and bound techniques to the current problem. The proposed methods can be applied with mobile queries, mobile objects or both, given a suitable indexing method. Our experimental evaluation is based on R-trees and their extensions for dynamic objects.",
"title": ""
},
{
"docid": "34901b8e3e7667e3a430b70a02595f69",
"text": "In the previous NTCIR8-GeoTime task, ABRIR (Appropriate Boolean query Reformulation for Information Retrieval) proved to be one of the most effective systems for retrieving documents with Geographic and Temporal constraints. However, failure analysis showed that the identification of named entities and relationships between these entities and the query is important in improving the quality of the system. In this paper, we propose to use Wikipedia and GeoNames as resources for extracting knowledge about named entities. We also modify our system to use such information.",
"title": ""
},
{
"docid": "dba1a222903031a6b3d064e6db29a108",
"text": "Social engineering is a method of attack involving the exploitation of human weakness, gullibility and ignorance. Although related techniques have existed for some time, current awareness of social engineering and its many guises is relatively low and efforts are therefore required to improve the protection of the user community. This paper begins by examining the problems posed by social engineering, and outlining some of the previous efforts that have been made to address the threat. This leads toward the discussion of a new awareness-raising website that has been specifically designed to aid users in understanding and avoiding the risks. Findings from an experimental trial involving 46 participants are used to illustrate that the system served to increase users’ understanding of threat concepts, as well as providing an engaging environment in which they would be likely to persevere with their learning.",
"title": ""
},
{
"docid": "fa0eebbf9c97942a5992ed80fd66cf10",
"text": "The increasing popularity of Facebook among adolescents has stimulated research to investigate the relationship between Facebook use and loneliness, which is particularly prevalent in adolescence. The aim of the present study was to improve our understanding of the relationship between Facebook use and loneliness. Specifically, we examined how Facebook motives and two relationship-specific forms of adolescent loneliness are associated longitudinally. Cross-lagged analysis based on data from 256 adolescents (64% girls, M(age) = 15.88 years) revealed that peer-related loneliness was related over time to using Facebook for social skills compensation, reducing feelings of loneliness, and having interpersonal contact. Facebook use for making new friends reduced peer-related loneliness over time, whereas Facebook use for social skills compensation increased peer-related loneliness over time. Hence, depending on adolescents' Facebook motives, either the displacement or the stimulation hypothesis is supported. Implications and suggestions for future research are discussed.",
"title": ""
},
{
"docid": "ff14cc28a72827c14aba42f3a036a088",
"text": "Employees’ failure to comply with IS security procedures is a key concern for organizations today. A number of socio-cognitive theories have been used to explain this. However, prior studies have not examined the influence of past and automatic behavior on employee decisions to comply. This is an important omission because past behavior has been assumed to strongly affect decision-making. To address this gap, we integrated habit (a routinized form of past behavior) with Protection Motivation Theory (PMT), to explain compliance. An empirical test showed that habitual IS security compliance strongly reinforced the cognitive processes theorized by PMT, as well as employee intention for future compliance. We also found that nearly all components of PMT significantly impacted employee intention to comply with IS security policies. Together, these results highlighted the importance of addressing employees’ past and automatic behavior in order to improve compliance. 2012 Elsevier B.V. All rights reserved. * Corresponding author. Tel.: +1 801 361 2531; fax: +1 509 275 0886. E-mail addresses: [email protected] (A. Vance), [email protected] (M. Siponen), [email protected] (S. Pahnila). URL: http://www.anthonyvance.com 1 http://www.issrc.oulu.fi/.",
"title": ""
},
{
"docid": "03d41408da6babfc97399c64860f50cd",
"text": "The nine degrees-of-freedom (DOF) inertial measurement units (IMU) are generally composed of three kinds of sensor: accelerometer, gyroscope and magnetometer. The calibration of these sensor suites not only requires turn-table or purpose-built fixture, but also entails a complex and laborious procedure in data sampling. In this paper, we propose a method to calibrate a 9-DOF IMU by using a set of casually sampled raw sensor measurement. Our sampling procedure allows the sensor suite to move by hand and only requires about six minutes of fast and slow arbitrary rotations with intermittent pauses. It requires neither the specially-designed fixture and equipment, nor the strict sequences of sampling steps. At the core of our method are the techniques of data filtering and a hierarchical scheme for calibration. All the raw sensor measurements are preprocessed by a series of band-pass filters before use. And our calibration scheme makes use of the gravity and the ambient magnetic field as references, and hierarchically calibrates the sensor model parameters towards the minimization of the mis-alignment, scaling and bias errors. Moreover, the calibration steps are formulated as a series of function optimization problems and are solved by an evolutionary algorithm. Finally, the performance of our method is experimentally evaluated. The results show that our method can effectively calibrate the sensor model parameters from one set of raw sensor measurement, and yield consistent calibration results.",
"title": ""
},
{
"docid": "8c0cbfc060b3a6aa03fd8305baf06880",
"text": "Learning-to-Rank models based on additive ensembles of regression trees have been proven to be very effective for scoring query results returned by large-scale Web search engines. Unfortunately, the computational cost of scoring thousands of candidate documents by traversing large ensembles of trees is high. Thus, several works have investigated solutions aimed at improving the efficiency of document scoring by exploiting advanced features of modern CPUs and memory hierarchies. In this article, we present QuickScorer, a new algorithm that adopts a novel cache-efficient representation of a given tree ensemble, performs an interleaved traversal by means of fast bitwise operations, and supports ensembles of oblivious trees. An extensive and detailed test assessment is conducted on two standard Learning-to-Rank datasets and on a novel very large dataset we made publicly available for conducting significant efficiency tests. The experiments show unprecedented speedups over the best state-of-the-art baselines ranging from 1.9 × to 6.6 × . The analysis of low-level profiling traces shows that QuickScorer efficiency is due to its cache-aware approach in terms of both data layout and access patterns and to a control flow that entails very low branch mis-prediction rates.",
"title": ""
},
{
"docid": "198944af240d732b6fadcee273c1ba18",
"text": "This paper presents a fast and energy-efficient current mirror based level shifter with wide shifting range from sub-threshold voltage up to I/O voltage. Small delay and low power consumption are achieved by addressing the non-full output swing and charge sharing issues in the level shifter from [4]. The measurement results show that the proposed level shifter can convert from 0.21V up to 3.3V with significantly improved delay and power consumption over the existing level shifters. Compared with [4], the maximum reduction of delay, switching energy and leakage power are 3X, 19X, 29X respectively when converting 0.3V to a higher voltage between 0.6V and 3.3V.",
"title": ""
},
{
"docid": "24f110f2b34e9da32fbd78ad242808bc",
"text": "BACKGROUND\nSurvey research including multiple health indicators requires brief indices for use in cross-cultural studies, which have, however, rarely been tested in terms of their psychometric quality. Recently, the EUROHIS-QOL 8-item index was developed as an adaptation of the WHOQOL-100 and the WHOQOL-BREF. The aim of the current study was to test the psychometric properties of the EUROHIS-QOL 8-item index.\n\n\nMETHODS\nIn a survey on 4849 European adults, the EUROHIS-QOL 8-item index was assessed across 10 countries, with equal samples adjusted for selected sociodemographic data. Participants were also investigated with a chronic condition checklist, measures on general health perception, mental health, health-care utilization and social support.\n\n\nRESULTS\nFindings indicated good internal consistencies across a range of countries, showing acceptable convergent validity with physical and mental health measures, and the measure discriminates well between individuals that report having a longstanding condition and healthy individuals across all countries. Differential item functioning was less frequently observed in those countries that were geographically and culturally closer to the UK, but acceptable across all countries. A universal one-factor structure with a good fit in structural equation modelling analyses (SEM) was identified with, however, limitations in model fit for specific countires.\n\n\nCONCLUSIONS\nThe short EUROHIS-QOL 8-item index showed good cross-cultural field study performance and a satisfactory convergent and discriminant validity, and can therefore be recommended for use in public health research. In future studies the measure should also be tested in multinational clinical studies, particularly in order to test its sensitivity.",
"title": ""
},
{
"docid": "1a7cfc19e7e3f9baf15e4a7450338c33",
"text": "The degree to which perceptual awareness of threat stimuli and bodily states of arousal modulates neural activity associated with fear conditioning is unknown. We used functional magnetic neuroimaging (fMRI) to study healthy subjects and patients with peripheral autonomic denervation to examine how the expression of conditioning-related activity is modulated by stimulus awareness and autonomic arousal. In controls, enhanced amygdala activity was evident during conditioning to both \"seen\" (unmasked) and \"unseen\" (backward masked) stimuli, whereas insula activity was modulated by perceptual awareness of a threat stimulus. Absent peripheral autonomic arousal, in patients with autonomic denervation, was associated with decreased conditioning-related activity in insula and amygdala. The findings indicate that the expression of conditioning-related neural activity is modulated by both awareness and representations of bodily states of autonomic arousal.",
"title": ""
},
{
"docid": "8b0870c8e975eeff8597eb342cd4f3f9",
"text": "We propose a novel recursive partitioning method for identifying subgroups of subjects with enhanced treatment effects based on a differential effect search algorithm. The idea is to build a collection of subgroups by recursively partitioning a database into two subgroups at each parent group, such that the treatment effect within one of the two subgroups is maximized compared with the other subgroup. The process of data splitting continues until a predefined stopping condition has been satisfied. The method is similar to 'interaction tree' approaches that allow incorporation of a treatment-by-split interaction in the splitting criterion. However, unlike other tree-based methods, this method searches only within specific regions of the covariate space and generates multiple subgroups of potential interest. We develop this method and provide guidance on key topics of interest that include generating multiple promising subgroups using different splitting criteria, choosing optimal values of complexity parameters via cross-validation, and addressing Type I error rate inflation inherent in data mining applications using a resampling-based method. We evaluate the operating characteristics of the procedure using a simulation study and illustrate the method with a clinical trial example.",
"title": ""
},
{
"docid": "a31287791b12f55adebacbb93a03c8bc",
"text": "Emotional adaptation increases pro-social behavior of humans towards robotic interaction partners. Social cues are an important factor in this context. This work investigates, if emotional adaptation still works under absence of human-like facial Action Units. A human-robot dialog scenario is chosen using NAO pretending to work for a supermarket and involving humans providing object names to the robot for training purposes. In a user study, two conditions are implemented with or without explicit emotional adaptation of NAO to the human user in a between-subjects design. Evaluations of user experience and acceptance are conducted based on evaluated measures of human-robot interaction (HRI). The results of the user study reveal a significant increase of helpfulness (number of named objects), anthropomorphism, and empathy in the explicit emotional adaptation condition even without social cues of facial Action Units, but only in case of prior robot contact of the test persons. Otherwise, an opposite effect is found. These findings suggest, that reduction of these social cues can be overcome by robot experience prior to the interaction task, e.g. realizable by an additional bonding phase, confirming the importance of such from previous work. Additionally, an interaction with academic background of the participants is found.",
"title": ""
},
{
"docid": "5a4d88bb879cf441808307961854c58c",
"text": "Activity prediction is an essential task in practical human-centered robotics applications, such as security, assisted living, etc., which targets at inferring ongoing human activities based on incomplete observations. To address this challenging problem, we introduce a novel bio-inspired predictive orientation decomposition (BIPOD) approach to construct representations of people from 3D skeleton trajectories. Our approach is inspired by biological research in human anatomy. In order to capture spatio-temporal information of human motions, we spatially decompose 3D human skeleton trajectories and project them onto three anatomical planes (i.e., coronal, transverse and sagittal planes); then, we describe short-term time information of joint motions and encode high-order temporal dependencies. By estimating future skeleton trajectories that are not currently observed, we endow our BIPOD representation with the critical predictive capability. Empirical studies validate that our BIPOD approach obtains promising performance, in terms of accuracy and efficiency, using a physical TurtleBot2 robotic platform to recognize ongoing human activities. Experiments on benchmark datasets further demonstrate that our new BIPOD representation significantly outperforms previous approaches for real-time activity classification and prediction from 3D human skeleton trajectories.",
"title": ""
},
{
"docid": "5ebddfaac62ec66171b65a776c1682b7",
"text": "We investigated the reliability of a test assessing quadriceps strength, endurance and fatigability in a single session. We used femoral nerve magnetic stimulation (FMNS) to distinguish central and peripheral factors of neuromuscular fatigue. We used a progressive incremental loading with multiple assessments to limit the influence of subject's cooperation and motivation. Twenty healthy subjects (10 men and 10 women) performed the test on two different days. Maximal voluntary strength and evoked quadriceps responses via FMNS were measured before, after each set of 10 submaximal isometric contractions (5-s on/5-s off; starting at 10% of maximal voluntary strength with 10% increments), immediately and 30min after task failure. The test induced progressive peripheral (41±13% reduction in single twitch at task failure) and central fatigue (3±7% reduction in voluntary activation at task failure). Good inter-day reliability was found for the total number of submaximal contractions achieved (i.e. endurance index: ICC=0.83), for reductions in maximal voluntary strength (ICC>0.81) and evoked muscular responses (i.e. fatigue index: ICC>0.85). Significant sex-differences were also detected. This test shows good reliability for strength, endurance and fatigability assessments. Further studies should be conducted to evaluate its feasibility and reliability in patients.",
"title": ""
}
] | scidocsrr |
09b8b665207ac2583f3c98d2a41e26fc | NewsCube: delivering multiple aspects of news to mitigate media bias | [
{
"docid": "7f05bd51c98140417ff73ec2d4420d6a",
"text": "An overwhelming number of news articles are available every day via the internet. Unfortunately, it is impossible for us to peruse more than a handful; furthermore it is difficult to ascertain an article’s social context, i.e., is it popular, what sorts of people are reading it, etc. In this paper, we develop a system to address this problem in the restricted domain of political news by harnessing implicit and explicit contextual information from the blogosphere. Specifically, we track thousands of blogs and the news articles they cite, collapsing news articles that have highly overlapping content. We then tag each article with the number of blogs citing it, the political orientation of those blogs, and the level of emotional charge expressed in the blog posts that link to the news article. We summarize and present the results to the user via a novel visualization which displays this contextual information; the user can then find the most popular articles, the articles most cited by liberals, the articles most emotionally discussed in the political blogosphere, etc.",
"title": ""
},
{
"docid": "212536baf7f5bd2635046774436e0dbf",
"text": "Mobile devices have already been widely used to access the Web. However, because most available web pages are designed for desktop PC in mind, it is inconvenient to browse these large web pages on a mobile device with a small screen. In this paper, we propose a new browsing convention to facilitate navigation and reading on a small-form-factor device. A web page is organized into a two level hierarchy with a thumbnail representation at the top level for providing a global view and index to a set of sub-pages at the bottom level for detail information. A page adaptation technique is also developed to analyze the structure of an existing web page and split it into small and logically related units that fit into the screen of a mobile device. For a web page not suitable for splitting, auto-positioning or scrolling-by-block is used to assist the browsing as an alterative. Our experimental results show that our proposed browsing convention and developed page adaptation scheme greatly improve the user's browsing experiences on a device with a small display.",
"title": ""
}
] | [
{
"docid": "5029feaec44e80561efef4b97c435896",
"text": "Conceptual blending has been proposed as a creative cognitive process, but most theories focus on the analysis of existing blends rather than mechanisms for the efficient construction of novel blends. While conceptual blending is a powerful model for creativity, there are many challenges related to the computational application of blending. Inspired by recent theoretical research, we argue that contexts and context-induced goals provide insights into algorithm design for creative systems using conceptual blending. We present two case studies of creative systems that use goals and contexts to efficiently produce novel, creative artifacts in the domains of story generation and virtual characters engaged in pretend play respectively.",
"title": ""
},
{
"docid": "fb00601b60bcd1f7a112e34d93d55d01",
"text": "Long Short-Term Memory (LSTM) has achieved state-of-the-art performances on a wide range of tasks. Its outstanding performance is guaranteed by the long-term memory ability which matches the sequential data perfectly and the gating structure controlling the information flow. However, LSTMs are prone to be memory-bandwidth limited in realistic applications and need an unbearable period of training and inference time as the model size is ever-increasing. To tackle this problem, various efficient model compression methods have been proposed. Most of them need a big and expensive pre-trained model which is a nightmare for resource-limited devices where the memory budget is strictly limited. To remedy this situation, in this paper, we incorporate the Sparse Evolutionary Training (SET) procedure into LSTM, proposing a novel model dubbed SET-LSTM. Rather than starting with a fully-connected architecture, SET-LSTM has a sparse topology and dramatically fewer parameters in both phases, training and inference. Considering the specific architecture of LSTMs, we replace the LSTM cells and embedding layers with sparse structures and further on, use an evolutionary strategy to adapt the sparse connectivity to the data. Additionally, we find that SET-LSTM can provide many different good combinations of sparse connectivity to substitute the overparameterized optimization problem of dense neural networks. Evaluated on four sentiment analysis classification datasets, the results demonstrate that our proposed model is able to achieve usually better performance than its fully connected counterpart while having less than 4% of its parameters. Department of Mathematics and Computer Science, Eindhoven University of Technology, Netherlands. Correspondence to: Shiwei Liu <[email protected]>.",
"title": ""
},
{
"docid": "881da6fd2d6c77d9f31ba6237c3d2526",
"text": "Pakistan is a developing country with more than half of its population located in rural areas. These areas neither have sufficient health care facilities nor a strong infrastructure that can address the health needs of the people. The expansion of Information and Communication Technology (ICT) around the globe has set up an unprecedented opportunity for delivery of healthcare facilities and infrastructure in these rural areas of Pakistan as well as in other developing countries. Mobile Health (mHealth)—the provision of health care services through mobile telephony—will revolutionize the way health care is delivered. From messaging campaigns to remote monitoring, mobile technology will impact every aspect of health systems. This paper highlights the growth of ICT sector and status of health care facilities in the developing countries, and explores prospects of mHealth as a transformer for health systems and service delivery especially in the remote rural areas.",
"title": ""
},
{
"docid": "4ea8351c57e4581bfdab4c7cd357c90a",
"text": "Hierarchies have long been used for organization, summarization, and access to information. In this paper we define summarization in terms of a probabilistic language model and use the definition to explore a new technique for automatically generating topic hierarchies by applying a graph-theoretic algorithm, which is an approximation of the Dominating Set Problem. The algorithm efficiently chooses terms according to a language model. We compare the new technique to previous methods proposed for constructing topic hierarchies including subsumption and lexical hierarchies, as well as the top TF.IDF terms. Our results show that the new technique consistently performs as well as or better than these other techniques. They also show the usefulness of hierarchies compared with a list of terms.",
"title": ""
},
{
"docid": "59bab56cb454b05eb4f12db425f4d0ce",
"text": "This study explores one of the contributors to group composition-the basis on which people choose others with whom they want to work. We use a combined model to explore individual attributes, relational attributes, and previous structural ties as determinants of work partner choice. Four years of data from participants in 33 small project groups were collected, some of which reflects individual participant characteristics and some of which is social network data measuring the previous relationship between two participants. Our results suggest that when selecting future group members people are biased toward others of the same race, others who have a reputation for being competent and hard working, and others with whom they have developed strong working relationships in the past. These results suggest that people strive for predictability when choosing future work group members. Copyright 2000 Academic Press.",
"title": ""
},
{
"docid": "661c99429dc6684ca7d6394f01201ac3",
"text": "SUMO is an open source traffic simulation package including net import and demand modeling components. We describe the current state of the package as well as future developments and extensions. SUMO helps to investigate several research topics e.g. route choice and traffic light algorithm or simulating vehicular communication. Therefore the framework is used in different projects to simulate automatic driving or traffic management strategies. Keywordsmicroscopic traffic simulation, software, open",
"title": ""
},
{
"docid": "f177b129e4a02fe42084563a469dc47d",
"text": "This paper proposes three design concepts for developing a crawling robot inspired by an inchworm, called the Omegabot. First, for locomotion, the robot strides by bending its body into an omega shape; anisotropic friction pads enable the robot to move forward using this simple motion. Second, the robot body is made of a single part but has two four-bar mechanisms and one spherical six-bar mechanism; the mechanisms are 2-D patterned into a single piece of composite and folded to become a robot body that weighs less than 1 g and that can crawl and steer. This design does not require the assembly of various mechanisms of the body structure, thereby simplifying the fabrication process. Third, a new concept for using a shape-memory alloy (SMA) coil-spring actuator is proposed; the coil spring is designed to have a large spring index and to work over a large pitch-angle range. This large-index-and-pitch SMA spring actuator cools faster and requires less energy, without compromising the amount of force and displacement that it can produce. Therefore, the frequency and the efficiency of the actuator are improved. A prototype was used to demonstrate that the inchworm-inspired, novel, small-scale, lightweight robot manufactured on a single piece of composite can crawl and steer.",
"title": ""
},
{
"docid": "0907539385c59f9bd476b2d1fb723a38",
"text": "We present a real-time method for synthesizing highly complex human motions using a novel training regime we call the auto-conditioned Recurrent Neural Network (acRNN). Recently, researchers have attempted to synthesize new motion by using autoregressive techniques, but existing methods tend to freeze or diverge after a couple of seconds due to an accumulation of errors that are fed back into the network. Furthermore, such methods have only been shown to be reliable for relatively simple human motions, such as walking or running. In contrast, our approach can synthesize arbitrary motions with highly complex styles, including dances or martial arts in addition to locomotion. The acRNN is able to accomplish this by explicitly accommodating for autoregressive noise accumulation during training. Our work is the first to our knowledge that demonstrates the ability to generate over 18,000 continuous frames (300 seconds) of new complex human motion w.r.t. different styles.",
"title": ""
},
{
"docid": "5f3dc141b69eb50e17bdab68a2195e13",
"text": "The purpose of this study is to develop a fuzzy-AHP multi-criteria decision making model for procurement process. It aims to measure the procurement performance in the automotive industry. As such measurement of procurement will enable competitive advantage and provide a model for continuous improvement. The rapid growth in the market and the level of competition in the global economy transformed procurement as a strategic issue; which is broader in scope and responsibilities as compared to purchasing. This study reviews the existing literature in procurement performance measurement to identify the key areas of measurement and a hierarchical model is developed with a set of generic measures. In addition, a questionnaire is developed for pair-wise comparison and to collect opinion from practitioners, researchers, managers etc. The relative importance of the measurement criteria are assessed using Analytical Hierarchy Process (AHP) and fuzzy-AHP. The validity of the model is c onfirmed with the results obtained.",
"title": ""
},
{
"docid": "fe300167bce299523d20d063417e6d31",
"text": "The n-gram language model, which has its roots in statistical natural language processing, has been shown to successfully capture the repetitive and predictable regularities (“naturalness\") of source code, and help with tasks such as code suggestion, porting, and designing assistive coding devices. However, we show in this paper that this natural-language-based model fails to exploit a special property of source code: localness. We find that human-written programs are localized: they have useful local regularities that can be captured and exploited. We introduce a novel cache language model that consists of both an n-gram and an added “cache\" component to exploit localness. We show empirically that the additional cache component greatly improves the n-gram approach by capturing the localness of software, as measured by both cross-entropy and suggestion accuracy. Our model’s suggestion accuracy is actually comparable to a state-of-the-art, semantically augmented language model; but it is simpler and easier to implement. Our cache language model requires nothing beyond lexicalization, and thus is applicable to all programming languages.",
"title": ""
},
{
"docid": "65fd482ac37852214fc82b4bc05c6f72",
"text": "This paper examines important factors for link prediction in networks and provides a general, high-performance framework for the prediction task. Link prediction in sparse networks presents a significant challenge due to the inherent disproportion of links that can form to links that do form. Previous research has typically approached this as an unsupervised problem. While this is not the first work to explore supervised learning, many factors significant in influencing and guiding classification remain unexplored. In this paper, we consider these factors by first motivating the use of a supervised framework through a careful investigation of issues such as network observational period, generality of existing methods, variance reduction, topological causes and degrees of imbalance, and sampling approaches. We also present an effective flow-based predicting algorithm, offer formal bounds on imbalance in sparse network link prediction, and employ an evaluation method appropriate for the observed imbalance. Our careful consideration of the above issues ultimately leads to a completely general framework that outperforms unsupervised link prediction methods by more than 30% AUC.",
"title": ""
},
{
"docid": "8cb33cec31601b096ff05426e5ffa848",
"text": "Efficient actuation control of flapping-wing microrobots requires a low-power frequency reference with good absolute accuracy. To meet this requirement, we designed a fully-integrated 10MHz relaxation oscillator in a 40nm CMOS process. By adaptively biasing the continuous-time comparator, we are able to achieve a power consumption of 20μW, a 68% reduction to the conventional fixed bias design. A built-in self-calibration controller enables fast post-fabrication calibration of the clock frequency. Measurements show a frequency drift of 1.2% as the battery voltage changes from 3V to 4.1V.",
"title": ""
},
{
"docid": "5be55ce7d8f97689bf54028063ba63d7",
"text": "Early diagnosis, playing an important role in preventing progress and treating the Alzheimer's disease (AD), is based on classification of features extracted from brain images. The features have to accurately capture main AD-related variations of anatomical brain structures, such as, e.g., ventricles size, hippocampus shape, cortical thickness, and brain volume. This paper proposed to predict the AD with a deep 3D convolutional neural network (3D-CNN), which can learn generic features capturing AD biomarkers and adapt to different domain datasets. The 3D-CNN is built upon a 3D convolutional autoencoder, which is pre-trained to capture anatomical shape variations in structural brain MRI scans. Fully connected upper layers of the 3D-CNN are then fine-tuned for each task-specific AD classification. Experiments on the CADDementia MRI dataset with no skull-stripping preprocessing have shown our 3D-CNN outperforms several conventional classifiers by accuracy. Abilities of the 3D-CNN to generalize the features learnt and adapt to other domains have been validated on the ADNI dataset.",
"title": ""
},
{
"docid": "d3afe3be6debe665f442367b17fa4e28",
"text": "It is common practice for developers of user-facing software to transform a mock-up of a graphical user interface (GUI) into code. This process takes place both at an application’s inception and in an evolutionary context as GUI changes keep pace with evolving features. Unfortunately, this practice is challenging and time-consuming. In this paper, we present an approach that automates this process by enabling accurate prototyping of GUIs via three tasks: detection, classification, and assembly. First, logical components of a GUI are detected from a mock-up artifact using either computer vision techniques or mock-up metadata. Then, software repository mining, automated dynamic analysis, and deep convolutional neural networks are utilized to accurately classify GUI-components into domain-specific types (e.g., toggle-button). Finally, a data-driven, K-nearest-neighbors algorithm generates a suitable hierarchical GUI structure from which a prototype application can be automatically assembled. We implemented this approach for Android in a system called REDRAW. Our evaluation illustrates that REDRAW achieves an average GUI-component classification accuracy of 91% and assembles prototype applications that closely mirror target mock-ups in terms of visual affinity while exhibiting reasonable code structure. Interviews with industrial practitioners illustrate ReDraw’s potential to improve real development workflows.",
"title": ""
},
{
"docid": "792694fbea0e2e49a454ffd77620da47",
"text": "Technology is increasingly shaping our social structures and is becoming a driving force in altering human biology. Besides, human activities already proved to have a significant impact on the Earth system which in turn generates complex feedback loops between social and ecological systems. Furthermore, since our species evolved relatively fast from small groups of hunter-gatherers to large and technology-intensive urban agglomerations, it is not a surprise that the major institutions of human society are no longer fit to cope with the present complexity. In this note we draw foundational parallelisms between neurophysiological systems and ICT-enabled social systems, discussing how frameworks rooted in biology and physics could provide heuristic value in the design of evolutionary systems relevant to politics and economics. In this regard we highlight how the governance of emerging technology (i.e. nanotechnology, biotechnology, information technology, and cognitive science), and the one of climate change both presently confront us with a number of connected challenges. In particular: historically high level of inequality; the co-existence of growing multipolar cultural systems in an unprecedentedly connected world; the unlikely reaching of the institutional agreements required to deviate abnormal trajectories of development. We argue that wise general solutions to such interrelated issues should embed the deep understanding of how to elicit mutual incentives in the socio-economic subsystems of Earth system in order to jointly concur to a global utility function (e.g. avoiding the reach of planetary boundaries and widespread social unrest). We leave some open questions on how techno-social systems can effectively learn and adapt with respect to our understanding of geopolitical",
"title": ""
},
{
"docid": "db3d1a63d5505693bd6677e9b268e8d4",
"text": "This paper presents a system for calibrating the extrinsic parameters and timing offsets of an array of cameras, 3-D lidars, and global positioning system/inertial navigation system sensors, without the requirement of any markers or other calibration aids. The aim of the approach is to achieve calibration accuracies comparable with state-of-the-art methods, while requiring less initial information about the system being calibrated and thus being more suitable for use by end users. The method operates by utilizing the motion of the system being calibrated. By estimating the motion each individual sensor observes, an estimate of the extrinsic calibration of the sensors is obtained. Our approach extends standard techniques for motion-based calibration by incorporating estimates of the accuracy of each sensor's readings. This yields a probabilistic approach that calibrates all sensors simultaneously and facilitates the estimation of the uncertainty in the final calibration. In addition, we combine this motion-based approach with appearance information. This gives an approach that requires no initial calibration estimate and takes advantage of all available alignment information to provide an accurate and robust calibration for the system. The new framework is validated with datasets collected with different platforms and different sensors' configurations, and compared with state-of-the-art approaches.",
"title": ""
},
{
"docid": "cc12bd6dcd844c49c55f4292703a241b",
"text": "Eleven cases of sudden death of men restrained in a prone position by police officers are reported. Nine of the men were hogtied, one was tied to a hospital gurney, and one was manually held prone. All subjects were in an excited delirious state when restrained. Three were psychotic, whereas the others were acutely delirious from drugs (six from cocaine, one from methamphetamine, and one from LSD). Two were shocked with stun guns shortly before death. The literature is reviewed and mechanisms of death are discussed.",
"title": ""
},
{
"docid": "b4c8ebb06c527c81e568c82afb2d4b6d",
"text": "Kriging or Gaussian Process Regression is applied in many fields as a non-linear regression model as well as a surrogate model in the field of evolutionary computation. However, the computational and space complexity of Kriging, that is cubic and quadratic in the number of data points respectively, becomes a major bottleneck with more and more data available nowadays. In this paper, we propose a general methodology for the complexity reduction, called cluster Kriging, where the whole data set is partitioned into smaller clusters and multiple Kriging models are built on top of them. In addition, four Kriging approximation algorithms are proposed as candidate algorithms within the new framework. Each of these algorithms can be applied to much larger data sets while maintaining the advantages and power of Kriging. The proposed algorithms are explained in detail and compared empirically against a broad set of existing state-of-the-art Kriging approximation methods on a welldefined testing framework. According to the empirical study, the proposed algorithms consistently outperform the existing algorithms. Moreover, some practical suggestions are provided for using the proposed algorithms.",
"title": ""
},
{
"docid": "485b4a75726109838b1b8ed377e68ece",
"text": "Item recommendation is a personalized ranking task. To this end, many recommender systems optimize models with pairwise ranking objectives, such as the Bayesian Personalized Ranking (BPR). Using matrix Factorization (MF) - the most widely used model in recommendation - as a demonstration, we show that optimizing it with BPR leads to a recommender model that is not robust. In particular, we find that the resultant model is highly vulnerable to adversarial perturbations on its model parameters, which implies the possibly large error in generalization. To enhance the robustness of a recommender model and thus improve its generalization performance, we propose a new optimization framework, namely Adversarial Personalized Ranking (APR). In short, our APR enhances the pairwise ranking method BPR by performing adversarial training. It can be interpreted as playing a minimax game, where the minimization of the BPR objective function meanwhile defends an adversary, which adds adversarial perturbations on model parameters to maximize the BPR objective function. To illustrate how it works, we implement APR on MF by adding adversarial perturbations on the embedding vectors of users and items. Extensive experiments on three public real-world datasets demonstrate the effectiveness of APR - by optimizing MF with APR, it outperforms BPR with a relative improvement of 11.2% on average and achieves state-of-the-art performance for item recommendation. Our implementation is available at: \\urlhttps://github.com/hexiangnan/adversarial_personalized_ranking.",
"title": ""
},
{
"docid": "0b74c1fbfe8ad31d2c73c8db6ce8b411",
"text": "To investigate fast human reaching movements in 3D, we asked 11 right-handed persons to catch a tennis ball while we tracked the movements of their arms. To ensure consistent trajectories of the ball, we used a catapult to throw the ball from three different positions. Tangential velocity profiles of the hand were in general bell-shaped and hand movements in 3D coincided with well known results for 2D point-to-point movements such as minimum jerk theory or the 2/3rd power law. Furthermore, two phases, consisting of fast reaching and slower fine movements at the end of hand placement could clearly be seen. The aim of this study was to find a way to generate human-like (catching) trajectories for a humanoid robot.",
"title": ""
}
] | scidocsrr |
8c2975ba60444927e58c923e7e5a9a71 | Empirical evidence for resource-rational anchoring and adjustment. | [
{
"docid": "637a7d7e0c33b6f63f17f9ec77add5a6",
"text": "In spite of its familiar phenomenology, the mechanistic basis for mental effort remains poorly understood. Although most researchers agree that mental effort is aversive and stems from limitations in our capacity to exercise cognitive control, it is unclear what gives rise to those limitations and why they result in an experience of control as costly. The presence of these control costs also raises further questions regarding how best to allocate mental effort to minimize those costs and maximize the attendant benefits. This review explores recent advances in computational modeling and empirical research aimed at addressing these questions at the level of psychological process and neural mechanism, examining both the limitations to mental effort exertion and how we manage those limited cognitive resources. We conclude by identifying remaining challenges for theoretical accounts of mental effort as well as possible applications of the available findings to understanding the causes of and potential solutions for apparent failures to exert the mental effort required of us.",
"title": ""
},
{
"docid": "68477e8a53020dd0b98014a6eab96255",
"text": "This article reviews a diverse set of proposals for dual processing in higher cognition within largely disconnected literatures in cognitive and social psychology. All these theories have in common the distinction between cognitive processes that are fast, automatic, and unconscious and those that are slow, deliberative, and conscious. A number of authors have recently suggested that there may be two architecturally (and evolutionarily) distinct cognitive systems underlying these dual-process accounts. However, it emerges that (a) there are multiple kinds of implicit processes described by different theorists and (b) not all of the proposed attributes of the two kinds of processing can be sensibly mapped on to two systems as currently conceived. It is suggested that while some dual-process theories are concerned with parallel competing processes involving explicit and implicit knowledge systems, others are concerned with the influence of preconscious processes that contextualize and shape deliberative reasoning and decision-making.",
"title": ""
}
] | [
{
"docid": "6b3cdd024b6232e5226cae2c15463509",
"text": "Blended learning involves the combination of two fields of concern: education and educational technology. To gain the scholarly recognition from educationists, it is necessary to revisit its models and educational theory underpinned. This paper respond to this issue by reviewing models related to blended learning based on two prominent educational theorists, Maslow’s and Vygotsky’s view. Four models were chosen due to their holistic ideas or vast citations related to blended learning: (1) E-Moderation Model emerging from Open University of UK; (2) Learning Ecology Model by Sun Microsoft System; (3) Blended Learning Continuum in University of Glamorgan; and (4) Inquirybased Framework by Garrison and Vaughan. The discussion of each model concerning pedagogical impact to learning and teaching are made. Critical review of the models in accordance to Maslow or Vygotsky is argued. Such review is concluded with several key principles for the design and practice in",
"title": ""
},
{
"docid": "65840e476736336c9cb0fa18f8321492",
"text": "Saliency methods aim to explain the predictions of deep neural networks. These methods lack reliability when the explanation is sensitive to factors that do not contribute to the model prediction. We use a simple and common pre-processing step —adding a constant shift to the input data— to show that a transformation with no effect on the model can cause numerous methods to incorrectly attribute. In order to guarantee reliability, we posit that methods should fulfill input invariance, the requirement that a saliency method mirror the sensitivity of the model with respect to transformations of the input. We show, through several examples, that saliency methods that do not satisfy input invariance result in misleading attribution.",
"title": ""
},
{
"docid": "f82972fcda26b339eb078bbcaad26cdc",
"text": "Colorectal cancer (CRC) shows variable underlying molecular changes with two major mechanisms of genetic instability: chromosomal instability and microsatellite instability. This review aims to delineate the different pathways of colorectal carcinogenesis and provide an overview of the most recent advances in molecular pathological classification systems for colorectal cancer. Two molecular pathological classification systems for CRC have recently been proposed. Integrated molecular analysis by The Cancer Genome Atlas project is based on a wide-ranging genomic and transcriptomic characterisation study of CRC using array-based and sequencing technologies. This approach classified CRC into two major groups consistent with previous classification systems: (1) ∼16 % hypermutated cancers with either microsatellite instability (MSI) due to defective mismatch repair (∼13 %) or ultramutated cancers with DNA polymerase epsilon proofreading mutations (∼3 %); and (2) ∼84 % non-hypermutated, microsatellite stable (MSS) cancers with a high frequency of DNA somatic copy number alterations, which showed common mutations in APC, TP53, KRAS, SMAD4, and PIK3CA. The recent Consensus Molecular Subtypes (CMS) Consortium analysing CRC expression profiling data from multiple studies described four CMS groups: almost all hypermutated MSI cancers fell into the first category CMS1 (MSI-immune, 14 %) with the remaining MSS cancers subcategorised into three groups of CMS2 (canonical, 37 %), CMS3 (metabolic, 13 %) and CMS4 (mesenchymal, 23 %), with a residual unclassified group (mixed features, 13 %). Although further research is required to validate these two systems, they may be useful for clinical trial designs and future post-surgical adjuvant treatment decisions, particularly for tumours with aggressive features or predicted responsiveness to immune checkpoint blockade.",
"title": ""
},
{
"docid": "6daa93f2a7cfaaa047ecdc04fb802479",
"text": "Facial landmark localization is important to many facial recognition and analysis tasks, such as face attributes analysis, head pose estimation, 3D face modelling, and facial expression analysis. In this paper, we propose a new approach to localizing landmarks in facial image by deep convolutional neural network (DCNN). We make two enhancements on the CNN to adapt it to the feature localization task as follows. Firstly, we replace the commonly used max pooling by depth-wise convolution to obtain better localization performance. Secondly, we define a response map for each facial points as a 2D probability map indicating the presence likelihood, and train our model with a KL divergence loss. To obtain robust localization results, our approach first takes the expectations of the response maps of Enhanced CNN and then applies auto-encoder model to the global shape vector, which is effective to rectify the outlier points by the prior global landmark configurations. The proposed ECNN method achieves 5.32% mean error on the experiments on the 300-W dataset, which is comparable to the state-of-the-art performance on this standard benchmark, showing the effectiveness of our methods.",
"title": ""
},
{
"docid": "e9046bfaf5488138ca5c2ff0067646a8",
"text": "In this paper we consider several new versions of approximate string matching with gaps. The main characteristic of these new versions is the existence of gaps in the matching of a given pattern in a text. Algorithms are devised for each version and their time and space complexities are stated. These specific versions of approximate string matching have various applications in computerized music analysis. CR Classification: F.2.2",
"title": ""
},
{
"docid": "ab07e92f052a03aac253fabadaea4ab3",
"text": "As news is increasingly accessed on smartphones and tablets, the need for personalising news app interactions is apparent. We report a series of three studies addressing key issues in the development of adaptive news app interfaces. We first surveyed users' news reading preferences and behaviours; analysis revealed three primary types of reader. We then implemented and deployed an Android news app that logs users' interactions with the app. We used the logs to train a classifier and showed that it is able to reliably recognise a user according to their reader type. Finally we evaluated alternative, adaptive user interfaces for each reader type. The evaluation demonstrates the differential benefit of the adaptation for different users of the news app and the feasibility of adaptive interfaces for news apps.",
"title": ""
},
{
"docid": "3a32bb2494edefe8ea28a83dad1dc4c4",
"text": "Objective: The challenging task of heart rate (HR) estimation from the photoplethysmographic (PPG) signal, during intensive physical exercises, is tackled in this paper. Methods: The study presents a detailed analysis of a novel algorithm (WFPV) that exploits a Wiener filter to attenuate the motion artifacts, a phase vocoder to refine the HR estimate and user-adaptive post-processing to track the subject physiology. Additionally, an offline version of the HR estimation algorithm that uses Viterbi decoding is designed for scenarios that do not require online HR monitoring (WFPV+VD). The performance of the HR estimation systems is rigorously compared with existing algorithms on the publically available database of 23 PPG recordings. Results: On the whole dataset of 23 PPG recordings, the algorithms result in average absolute errors of 1.97 and 1.37 BPM in the online and offline modes, respectively. On the test dataset of 10 PPG recordings which were most corrupted with motion artifacts, WFPV has an error of 2.95 BPM on its own and 2.32 BPM in an ensemble with two existing algorithms. Conclusion: The error rate is significantly reduced when compared with the state-of-the art PPG-based HR estimation methods. Significance: The proposed system is shown to be accurate in the presence of strong motion artifacts and in contrast to existing alternatives has very few free parameters to tune. The algorithm has a low computational cost and can be used for fitness tracking and health monitoring in wearable devices. The MATLAB implementation of the algorithm is provided online.",
"title": ""
},
{
"docid": "59639429e45dc75e0b8db773d112f994",
"text": "Vector modulators are a key component in phased array antennas and communications systems. The paper describes a novel design methodology for a bi-directional, reflection-type balanced vector modulator using metal-oxide-semiconductor field-effect (MOS) transistors as active loads, which provides an improved constellation quality. The fabricated IC occupies 787 × 1325 μm2 and exhibits a minimum transmission loss of 9 dB and return losses better than 14 dB. As an application example, its use in a 16-QAM modulator is verified.",
"title": ""
},
{
"docid": "b2444538456800e84df8288f4a482775",
"text": "Thermoelectric generators (TEGs) provide a unique way for harvesting thermal energy. These devices are compact, durable, inexpensive, and scalable. Unfortunately, the conversion efficiency of TEGs is low. This requires careful design of energy harvesting systems including the interface circuitry between the TEG module and the load, with the purpose of minimizing power losses. In this paper, it is analytically shown that the traditional approach for estimating the internal resistance of TEGs may result in a significant loss of harvested power. This drawback comes from ignoring the dependence of the electrical behavior of TEGs on their thermal behavior. Accordingly, a systematic method for accurately determining the TEG input resistance is presented. Next, through a case study on automotive TEGs, it is shown that compared to prior art, more than 11% of power losses in the interface circuitry that lies between the TEG and the electrical load can be saved by the proposed modeling technique. In addition, it is demonstrated that the traditional approach would have resulted in a deviation from the target regulated voltage by as much as 59%.",
"title": ""
},
{
"docid": "cb85db604bf21751766daf3751dd73bd",
"text": "The heterogeneous cloud radio access network (H-CRAN) is a promising paradigm that incorporates cloud computing into heterogeneous networks (HetNets), thereby taking full advantage of cloud radio access networks (C-RANs) and HetNets. Characterizing cooperative beamforming with fronthaul capacity and queue stability constraints is critical for multimedia applications to improve the energy efficiency (EE) in H-CRANs. An energy-efficient optimization objective function with individual fronthaul capacity and intertier interference constraints is presented in this paper for queue-aware multimedia H-CRANs. To solve this nonconvex objective function, a stochastic optimization problem is reformulated by introducing the general Lyapunov optimization framework. Under the Lyapunov framework, this optimization problem is equivalent to an optimal network-wide cooperative beamformer design algorithm with instantaneous power, average power, and intertier interference constraints, which can be regarded as a weighted sum EE maximization problem and solved by a generalized weighted minimum mean-square error approach. The mathematical analysis and simulation results demonstrate that a tradeoff between EE and queuing delay can be achieved, and this tradeoff strictly depends on the fronthaul constraint.",
"title": ""
},
{
"docid": "c48d0c94d3e97661cc2c944cc4b61813",
"text": "CIPO is the very “tip of the iceberg” of functional gastrointestinal disorders, being a rare and frequently misdiagnosed condition characterized by an overall poor outcome. Diagnosis should be based on clinical features, natural history and radiologic findings. There is no cure for CIPO and management strategies include a wide array of nutritional, pharmacologic, and surgical options which are directed to minimize malnutrition, promote gut motility and reduce complications of stasis (ie, bacterial overgrowth). Pain may become so severe to necessitate major analgesic drugs. Underlying causes of secondary CIPO should be thoroughly investigated and, if detected, treated accordingly. Surgery should be indicated only in a highly selected, well characterized subset of patients, while isolated intestinal or multivisceral transplantation is a rescue therapy only in those patients with intestinal failure unsuitable for or unable to continue with TPN/HPN. Future perspectives in CIPO will be directed toward an accurate genomic/proteomic phenotying of these rare, challenging patients. Unveiling causative mechanisms of neuro-ICC-muscular abnormalities will pave the way for targeted therapeutic options for patients with CIPO.",
"title": ""
},
{
"docid": "f395e3d72341bd20e1a16b97259bad7d",
"text": "Malicious software in form of Internet worms, computer viru ses, and Trojan horses poses a major threat to the security of network ed systems. The diversity and amount of its variants severely undermine the effectiveness of classical signature-based detection. Yet variants of malware f milies share typical behavioral patternsreflecting its origin and purpose. We aim to exploit these shared patterns for classification of malware and propose a m thod for learning and discrimination of malware behavior. Our method proceed s in three stages: (a) behavior of collected malware is monitored in a sandbox envi ro ment, (b) based on a corpus of malware labeled by an anti-virus scanner a malware behavior classifieris trained using learning techniques and (c) discriminativ e features of the behavior models are ranked for explanation of classifica tion decisions. Experiments with di fferent heterogeneous test data collected over several month s using honeypots demonstrate the e ffectiveness of our method, especially in detecting novel instances of malware families previously not recognized by commercial anti-virus software.",
"title": ""
},
{
"docid": "30d0ff3258decd5766d121bf97ae06d4",
"text": "In this paper, we present a new image forgery detection method based on deep learning technique, which utilizes a convolutional neural network (CNN) to automatically learn hierarchical representations from the input RGB color images. The proposed CNN is specifically designed for image splicing and copy-move detection applications. Rather than a random strategy, the weights at the first layer of our network are initialized with the basic high-pass filter set used in calculation of residual maps in spatial rich model (SRM), which serves as a regularizer to efficiently suppress the effect of image contents and capture the subtle artifacts introduced by the tampering operations. The pre-trained CNN is used as patch descriptor to extract dense features from the test images, and a feature fusion technique is then explored to obtain the final discriminative features for SVM classification. The experimental results on several public datasets show that the proposed CNN based model outperforms some state-of-the-art methods.",
"title": ""
},
{
"docid": "5e6f9014a07e7b2bdfd255410a73b25f",
"text": "Context: Offshore software development outsourcing is a modern business strategy for developing high quality software at low cost. Objective: The objective of this research paper is to identify and analyse factors that are important in terms of the competitiveness of vendor organisations in attracting outsourcing projects. Method: We performed a systematic literature review (SLR) by applying our customised search strings which were derived from our research questions. We performed all the SLR steps, such as the protocol development, initial selection, final selection, quality assessment, data extraction and data synthesis. Results: We have identified factors such as cost-saving, skilled human resource, appropriate infrastructure, quality of product and services, efficient outsourcing relationships management, and an organisation’s track record of successful projects which are generally considered important by the outsourcing clients. Our results indicate that appropriate infrastructure, cost-saving, and skilled human resource are common in three continents, namely Asia, North America and Europe. We identified appropriate infrastructure, cost-saving, and quality of products and services as being common in three types of organisations (small, medium and large). We have also identified four factors-appropriate infrastructure, cost-saving, quality of products and services, and skilled human resource as being common in the two decades (1990–1999 and 2000–mid 2008). Conclusions: Cost-saving should not be considered as the driving factor in the selection process of software development outsourcing vendors. Vendors should rather address other factors in order to compete in as sk the OSDO business, such and services.",
"title": ""
},
{
"docid": "3b7ac492add26938636ae694ebb14b65",
"text": "This paper presents the results of a study conducted at the University of Maryland in which we experimentally investigated the suite of Object-Oriented (OO) design metrics introduced by [Chidamber&Kemerer, 1994]. In order to do this, we assessed these metrics as predictors of fault-prone classes. This study is complementary to [Li&Henry, 1993] where the same suite of metrics had been used to assess frequencies of maintenance changes to clas es. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on experimental results, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber&Kemerer’s OO metrics appear to be useful to predict class fault-proneness during the early phases of the life-cycle. We also showed that they are, on our data set, better predictors than “traditional” code metrics, which can only be collected at a later phase of the software development processes. Key-words: Object-Oriented Design Metrics; Error Prediction Model; Object-Oriented Software Development; C++ Programming Language. * V. Basili and W. Melo are with the University of Maryland, Institute for Advanced Computer Studies and Computer Science Dept., A. V. Williams Bldg., College Park, MD 20742 USA. {basili | melo}@cs.umd.edu L. Briand is with the CRIM, 1801 McGill College Av., Montréal (Québec), H3A 2N4, Canada. [email protected] Technical Report, Univ. of Maryland, Dep. of Computer Science, College Park, MD, 20742 USA. April 1995. CS-TR-3443 2 UMIACS-TR-95-40 1 . Introduction",
"title": ""
},
{
"docid": "f0c4c1a82eee97d19012421614ee5d5f",
"text": "Although the widespread use of gaming for leisure purposes has been well documented, the use of games to support cultural heritage purposes, such as historical teaching and learning, or for enhancing museum visits, has been less well considered. The state-of-the-art in serious game technology is identical to that of the state-of-the-art in entertainment games technology. As a result the field of serious heritage games concerns itself with recent advances in computer games, real-time computer graphics, virtual and augmented reality and artificial intelligence. On the other hand, the main strengths of serious gaming applications may be generalised as being in the areas of communication, visual expression of information, collaboration mechanisms, interactivity and entertainment. In this report, we will focus on the state-of-the-art with respect to the theories, methods and technologies used in serious heritage games. We provide an overview of existing literature of relevance to the domain, discuss the strengths and weaknesses of the described methods and point out unsolved problems and challenges. In addition, several case studies illustrating the application of methods and technologies used in cultural heritage are presented.",
"title": ""
},
{
"docid": "58d7e76a4b960e33fc7b541d04825dc9",
"text": "The Internet of Things (IoT) is intended for ubiquitous connectivity among different entities or “things”. While its purpose is to provide effective and efficient solutions, security of the devices and network is a challenging issue. The number of devices connected along with the ad-hoc nature of the system further exacerbates the situation. Therefore, security and privacy has emerged as a significant challenge for the IoT. In this paper, we aim to provide a thorough survey related to the privacy and security challenges of the IoT. This document addresses these challenges from the perspective of technologies and architecture used. This work focuses also in IoT intrinsic vulnerabilities as well as the security challenges of various layers based on the security principles of data confidentiality, integrity and availability. This survey analyzes articles published for the IoT at the time and relates it to the security conjuncture of the field and its projection to the future.",
"title": ""
},
{
"docid": "f53dc3977a9e8c960e0232ef59c0e7fd",
"text": "The interest in action and gesture recognition has grown considerably in the last years. In this paper, we present a survey on current deep learning methodologies for action and gesture recognition in image sequences. We introduce a taxonomy that summarizes important aspects of deep learning for approaching both tasks. We review the details of the proposed architectures, fusion strategies, main datasets, and competitions. We summarize and discuss the main works proposed so far with particular interest on how they treat the temporal dimension of data, discussing their main features and identify opportunities and challenges for future research.",
"title": ""
},
{
"docid": "69f413d247e88022c3018b2dee1b53e2",
"text": "Research and development (R&D) project selection is an important task for organizations with R&D project management. It is a complicated multi-stage decision-making process, which involves groups of decision makers. Current research on R&D project selection mainly focuses on mathematical decision models and their applications, but ignores the organizational aspect of the decision-making process. This paper proposes an organizational decision support system (ODSS) for R&D project selection. Object-oriented method is used to design the architecture of the ODSS. An organizational decision support system has also been developed and used to facilitate the selection of project proposals in the National Natural Science Foundation of China (NSFC). The proposed system supports the R&D project selection process at the organizational level. It provides useful information for decision-making tasks in the R&D project selection process. D 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "9409922d01a00695745939b47e6446a0",
"text": "The Suricata intrusion-detection system for computer-network monitoring has been advanced as an open-source improvement on the popular Snort system that has been available for over a decade. Suricata includes multi-threading to improve processing speed beyond Snort. Previous work comparing the two products has not used a real-world setting. We did this and evaluated the speed, memory requirements, and accuracy of the detection engines in three kinds of experiments: (1) on the full traffic of our school as observed on its \" backbone\" in real time, (2) on a supercomputer with packets recorded from the backbone, and (3) in response to malicious packets sent by a red-teaming product. We used the same set of rules for both products with a few small exceptions where capabilities were missing. We conclude that Suricata can handle larger volumes of traffic than Snort with similar accuracy, and that its performance scaled roughly linearly with the number of processors up to 48. We observed no significant speed or accuracy advantage of Suricata over Snort in its current state, but it is still being developed. Our methodology should be useful for comparing other intrusion-detection products.",
"title": ""
}
] | scidocsrr |
6b59d358b108eda94fcea4c866c3c13e | Energy-Efficient Power Control: A Look at 5G Wireless Technologies | [
{
"docid": "6a2d7b29a0549e99cdd31dbd2a66fc0a",
"text": "We consider data transmissions in a full duplex (FD) multiuser multiple-input multiple-output (MU-MIMO) system, where a base station (BS) bidirectionally communicates with multiple users in the downlink (DL) and uplink (UL) channels on the same system resources. The system model of consideration has been thought to be impractical due to the self-interference (SI) between transmit and receive antennas at the BS. Interestingly, recent advanced techniques in hardware design have demonstrated that the SI can be suppressed to a degree that possibly allows for FD transmission. This paper goes one step further in exploring the potential gains in terms of the spectral efficiency (SE) and energy efficiency (EE) that can be brought by the FD MU-MIMO model. Toward this end, we propose low-complexity designs for maximizing the SE and EE, and evaluate their performance numerically. For the SE maximization problem, we present an iterative design that obtains a locally optimal solution based on a sequential convex approximation method. In this way, the nonconvex precoder design problem is approximated by a convex program at each iteration. Then, we propose a numerical algorithm to solve the resulting convex program based on the alternating and dual decomposition approaches, where analytical expressions for precoders are derived. For the EE maximization problem, using the same method, we first transform it into a concave-convex fractional program, which then can be reformulated as a convex program using the parametric approach. We will show that the resulting problem can be solved similarly to the SE maximization problem. Numerical results demonstrate that, compared to a half duplex system, the FD system of interest with the proposed designs achieves a better SE and a slightly smaller EE when the SI is small.",
"title": ""
}
] | [
{
"docid": "88488d730255a534d3255eb5884a69a6",
"text": "As Computer curricula have developed, Human-Computer Interaction has gradually become part of many of those curricula and the recent ACM/IEEE report on the core of Computing Science and Engineering, includes HumanComputer Interaction as one of the fundamental sub-areas that should be addressed by any such curricula. However, both technology and Human-Computer Interaction are evolving rapidly, thus a continuous effort is needed to maintain a program, bibliography and a set of practical assignments up to date and adapted to the current technology. This paper briefly presents an introductory course on Human-Computer Interaction offered to Electrical and Computer Engineering students at the University of Aveiro.",
"title": ""
},
{
"docid": "4b6b9539468db238d92e9762b2650b61",
"text": "The previous chapters gave an insightful introduction into the various facets of Business Process Management. We now share a rich understanding of the essential ideas behind designing and managing processes for organizational purposes. We have also learned about the various streams of research and development that have influenced contemporary BPM. As a matter of fact, BPM has become a holistic management discipline. As such, it requires that a plethora of facets needs to be addressed for its successful und sustainable application. This chapter provides a framework that consolidates and structures the essential factors that constitute BPM as a whole. Drawing from research in the field of maturity models, we suggest six core elements of BPM: strategic alignment, governance, methods, information technology, people, and culture. These six elements serve as the structure for this BPM Handbook. 1 Why Looking for BPM Core Elements? A recent global study by Gartner confirmed the significance of BPM with the top issue for CIOs identified for the sixth year in a row being the improvement of business processes (Gartner 2010). While such an interest in BPM is beneficial for professionals in this field, it also increases the expectations and the pressure to deliver on the promises of the process-centered organization. This context demands a sound understanding of how to approach BPM and a framework that decomposes the complexity of a holistic approach such as Business Process Management. A framework highlighting essential building blocks of BPM can particularly serve the following purposes: M. Rosemann (*) Information Systems Discipline, Faculty of Science and Technology, Queensland University of Technology, Brisbane, Australia e-mail: [email protected] J. vom Brocke and M. Rosemann (eds.), Handbook on Business Process Management 1, International Handbooks on Information Systems, DOI 10.1007/978-3-642-00416-2_5, # Springer-Verlag Berlin Heidelberg 2010 107 l Project and Program Management: How can all relevant issues within a BPM approach be safeguarded? When implementing a BPM initiative, either as a project or as a program, is it essential to individually adjust the scope and have different BPM flavors in different areas of the organization? What competencies are relevant? What approach fits best with the culture and BPM history of the organization? What is it that needs to be taken into account “beyond modeling”? People for one thing play an important role like Hammer has pointed out in his chapter (Hammer 2010), but what might be further elements of relevance? In order to find answers to these questions, a framework articulating the core elements of BPM provides invaluable advice. l Vendor Management: How can service and product offerings in the field of BPM be evaluated in terms of their overall contribution to successful BPM? What portfolio of solutions is required to address the key issues of BPM, and to what extent do these solutions need to be sourced from outside the organization? There is, for example, a large list of providers of process-aware information systems, change experts, BPM training providers, and a variety of BPM consulting services. How can it be guaranteed that these offerings cover the required capabilities? In fact, the vast number of BPM offerings does not meet the requirements as distilled in this Handbook; see for example, Hammer (2010), Davenport (2010), Harmon (2010), and Rummler and Ramias (2010). 
It is also for the purpose of BPM make-or-buy decisions and the overall vendor management, that a framework structuring core elements of BPM is highly needed. l Complexity Management: How can the complexity that results from the holistic and comprehensive nature of BPM be decomposed so that it becomes manageable? How can a number of coexisting BPM initiatives within one organization be synchronized? An overarching picture of BPM is needed in order to provide orientation for these initiatives. Following a “divide-and-conquer” approach, a shared understanding of the core elements can help to focus on special factors of BPM. For each element, a specific analysis could be carried out involving experts from the various fields. Such an assessment should be conducted by experts with the required technical, business-oriented, and socio-cultural know-how. l Standards Management: What elements of BPM need to be standardized across the organization? What BPM elements need to be mandated for every BPM initiative? What BPM elements can be configured individually within each initiative? A comprehensive framework allows an element-by-element decision for the degrees of standardization that are required. For example, it might be decided that a company-wide process model repository will be “enforced” on all BPM initiatives, while performance management and cultural change will be decentralized activities. l Strategy Management: What is the BPM strategy of the organization? How does this strategy materialize in a BPM roadmap? How will the naturally limited attention of all involved stakeholders be distributed across the various BPM elements? How do we measure progression in a BPM initiative (“BPM audit”)? 108 M. Rosemann and J. vom Brocke",
"title": ""
},
{
"docid": "8de530a30b8352e36b72f3436f47ffb2",
"text": "This paper presents a Bayesian optimization method with exponential convergencewithout the need of auxiliary optimization and without the δ-cover sampling. Most Bayesian optimization methods require auxiliary optimization: an additional non-convex global optimization problem, which can be time-consuming and hard to implement in practice. Also, the existing Bayesian optimization method with exponential convergence [ 1] requires access to the δ-cover sampling, which was considered to be impractical [ 1, 2]. Our approach eliminates both requirements and achieves an exponential convergence rate.",
"title": ""
},
{
"docid": "7190c91917d1e1280010c66139837568",
"text": "GPUs and accelerators have become ubiquitous in modern supercomputing systems. Scientific applications from a wide range of fields are being modified to take advantage of their compute power. However, data movement continues to be a critical bottleneck in harnessing the full potential of a GPU. Data in the GPU memory has to be moved into the host memory before it can be sent over the network. MPI libraries like MVAPICH2 have provided solutions to alleviate this bottleneck using techniques like pipelining. GPUDirect RDMA is a feature introduced in CUDA 5.0, that allows third party devices like network adapters to directly access data in GPU device memory, over the PCIe bus. NVIDIA has partnered with Mellanox to make this solution available for InfiniBand clusters. In this paper, we evaluate the first version of GPUDirect RDMA for InfiniBand and propose designs in MVAPICH2 MPI library to efficiently take advantage of this feature. We highlight the limitations posed by current generation architectures in effectively using GPUDirect RDMA and address these issues through novel designs in MVAPICH2. To the best of our knowledge, this is the first work to demonstrate a solution for internode GPU-to-GPU MPI communication using GPUDirect RDMA. Results show that the proposed designs improve the latency of internode GPU-to-GPU communication using MPI Send/MPI Recv by 69% and 32% for 4Byte and 128KByte messages, respectively. The designs boost the uni-directional bandwidth achieved using 4KByte and 64KByte messages by 2x and 35%, respectively. We demonstrate the impact of the proposed designs using two end-applications: LBMGPU and AWP-ODC. They improve the communication times in these applications by up to 35% and 40%, respectively.",
"title": ""
},
{
"docid": "5c05ad44ac2bf3fb26cea62d563435f8",
"text": "We investigate the training and performance of generative adversarial networks using the Maximum Mean Discrepancy (MMD) as critic, termed MMD GANs. As our main theoretical contribution, we clarify the situation with bias in GAN loss functions raised by recent work: we show that gradient estimators used in the optimization process for both MMD GANs and Wasserstein GANs are unbiased, but learning a discriminator based on samples leads to biased gradients for the generator parameters. We also discuss the issue of kernel choice for the MMD critic, and characterize the kernel corresponding to the energy distance used for the Cramér GAN critic. Being an integral probability metric, the MMD benefits from training strategies recently developed for Wasserstein GANs. In experiments, the MMD GAN is able to employ a smaller critic network than the Wasserstein GAN, resulting in a simpler and faster-training algorithm with matching performance. We also propose an improved measure of GAN convergence, the Kernel Inception Distance, and show how to use it to dynamically adapt learning rates during GAN training.",
"title": ""
},
{
"docid": "194bea0d713d5d167e145e43b3c8b4e2",
"text": "Users can enjoy personalized services provided by various context-aware applications that collect users' contexts through sensor-equipped smartphones. Meanwhile, serious privacy concerns arise due to the lack of privacy preservation mechanisms. Currently, most mechanisms apply passive defense policies in which the released contexts from a privacy preservation system are always real, leading to a great probability with which an adversary infers the hidden sensitive contexts about the users. In this paper, we apply a deception policy for privacy preservation and present a novel technique, FakeMask, in which fake contexts may be released to provably preserve users' privacy. The output sequence of contexts by FakeMask can be accessed by the untrusted context-aware applications or be used to answer queries from those applications. Since the output contexts may be different from the original contexts, an adversary has greater difficulty in inferring the real contexts. Therefore, FakeMask limits what adversaries can learn from the output sequence of contexts about the user being in sensitive contexts, even if the adversaries are powerful enough to have the knowledge about the system and the temporal correlations among the contexts. The essence of FakeMask is a privacy checking algorithm which decides whether to release a fake context for the current context of the user. We present a novel privacy checking algorithm and an efficient one to accelerate the privacy checking process. Extensive evaluation experiments on real smartphone context traces of users demonstrate the improved performance of FakeMask over other approaches.",
"title": ""
},
{
"docid": "b04ae3842293f5f81433afbaa441010a",
"text": "Rootkits Trojan virus, which can control attacked computers, delete import files and even steal password, are much popular now. Interrupt Descriptor Table (IDT) hook is rootkit technology in kernel level of Trojan. The paper makes deeply analysis on the IDT hooks handle procedure of rootkit Trojan according to previous other researchers methods. We compare its IDT structure and programs to find how Trojan interrupt handler code can respond the interrupt vector request in both real address mode and protected address mode. Finally, we analyze the IDT hook detection methods of rootkits Trojan by Windbg or other professional tools.",
"title": ""
},
{
"docid": "d5bf84e6b391bee0bec00924ed788bf8",
"text": "In this paper, we explore the use of the Stellar Consensus Protocol (SCP) and its Federated Byzantine Agreement (FBA) algorithm for ensuring trust and reputation between federated, cloud-based platform instances (nodes) and their participants. Our approach is grounded on federated consensus mechanisms, which promise data quality managed through computational trust and data replication, without a centralized authority. We perform our experimentation on the ground of the NIMBLE cloud manufacturing platform, which is designed to support growth of B2B digital manufacturing communities and their businesses through federated platform services, managed by peer-to-peer networks. We discuss the message exchange flow between the NIMBLE application logic and Stellar consensus logic.",
"title": ""
},
{
"docid": "e4d58b9b8775f2a30bc15fceed9cd8bf",
"text": "Latency of interactive computer systems is a product of the processing, transport and synchronisation delays inherent to the components that create them. In a virtual environment (VE) system, latency is known to be detrimental to a user's sense of immersion, physical performance and comfort level. Accurately measuring the latency of a VE system for study or optimisation, is not straightforward. A number of authors have developed techniques for characterising latency, which have become progressively more accessible and easier to use. In this paper, we characterise these techniques. We describe a simple mechanical simulator designed to simulate a VE with various amounts of latency that can be finely controlled (to within 3ms). We develop a new latency measurement technique called Automated Frame Counting to assist in assessing latency using high speed video (to within 1ms). We use the mechanical simulator to measure the accuracy of Steed's and Di Luca's measurement techniques, proposing improvements where they may be made. We use the methods to measure latency of a number of interactive systems that may be of interest to the VE engineer, with a significant level of confidence. All techniques were found to be highly capable however Steed's Method is both accurate and easy to use without requiring specialised hardware.",
"title": ""
},
{
"docid": "87c3f3ab2c5c1e9a556ed6f467f613a9",
"text": "In this study, we apply learning-to-rank algorithms to design trading strategies using relative performance of a group of stocks based on investors’ sentiment toward these stocks. We show that learning-to-rank algorithms are effective in producing reliable rankings of the best and the worst performing stocks based on investors’ sentiment. More specifically, we use the sentiment shock and trend indicators introduced in the previous studies, and we design stock selection rules of holding long positions of the top 25% stocks and short positions of the bottom 25% stocks according to rankings produced by learning-to-rank algorithms. We then apply two learning-to-rank algorithms, ListNet and RankNet, in stock selection processes and test long-only and long-short portfolio selection strategies using 10 years of market and news sentiment data. Through backtesting of these strategies from 2006 to 2014, we demonstrate that our portfolio strategies produce risk-adjusted returns superior to the S&P500 index return, the hedge fund industry average performance HFRIEMN, and some sentiment-based approaches without learning-to-rank algorithm during the same period.",
"title": ""
},
{
"docid": "895f0424cb71c79b86ecbd11a4f2eb8e",
"text": "A chronic alcoholic who had also been submitted to partial gastrectomy developed a syndrome of continuous motor unit activity responsive to phenytoin therapy. There were signs of minimal distal sensorimotor polyneuropathy. Symptoms of the syndrome of continuous motor unit activity were fasciculation, muscle stiffness, myokymia, impaired muscular relaxation and percussion myotonia. Electromyography at rest showed fasciculation, doublets, triplets, multiplets, trains of repetitive discharges and myotonic discharges. Trousseau's and Chvostek's signs were absent. No abnormality of serum potassium, calcium, magnesium, creatine kinase, alkaline phosphatase, arterial blood gases and pH were demonstrated, but the serum Vitamin B12 level was reduced. The electrophysiological findings and muscle biopsy were compatible with a mixed sensorimotor polyneuropathy. Tests of neuromuscular transmission showed a significant decrement in the amplitude of the evoked muscle action potential in the abductor digiti minimi on repetitive nerve stimulation. These findings suggest that hyperexcitability and hyperactivity of the peripheral motor axons underlie the syndrome of continuous motor unit activity in the present case. Ein chronischer Alkoholiker, mit subtotaler Gastrectomie, litt an einem Syndrom dauernder Muskelfaseraktivität, das mit Diphenylhydantoin behandelt wurde. Der Patient wies minimale Störungen im Sinne einer distalen sensori-motorischen Polyneuropathie auf. Die Symptome dieses Syndroms bestehen in: Fazikulationen, Muskelsteife, Myokymien, eine gestörte Erschlaffung nach der Willküraktivität und eine Myotonie nach Beklopfen des Muskels. Das Elektromyogramm in Ruhe zeigt: Faszikulationen, Doublets, Triplets, Multiplets, Trains repetitiver Potentiale und myotonische Entladungen. Trousseau- und Chvostek-Zeichen waren nicht nachweisbar. Gleichzeitig lagen die Kalium-, Calcium-, Magnesium-, Kreatinkinase- und Alkalinphosphatase-Werte im Serumspiegel sowie O2, CO2 und pH des arteriellen Blutes im Normbereich. Aber das Niveau des Vitamin B12 im Serumspiegel war deutlich herabgesetzt. Die muskelbioptische und elektrophysiologische Veränderungen weisen auf eine gemischte sensori-motorische Polyneuropathie hin. Die Abnahme der Amplitude der evozierten Potentiale, vom M. abductor digiti minimi abgeleitet, bei repetitiver Reizung des N. ulnaris, stellten eine Störung der neuromuskulären Überleitung dar. Aufgrund unserer klinischen und elektrophysiologischen Befunde könnten wir die Hypererregbarkeit und Hyperaktivität der peripheren motorischen Axonen als Hauptmechanismus des Syndroms dauernder motorischer Einheitsaktivität betrachten.",
"title": ""
},
{
"docid": "c47b59ea14b86fa18e69074129af72ec",
"text": "Multiple networks naturally appear in numerous high-impact applications. Network alignment (i.e., finding the node correspondence across different networks) is often the very first step for many data mining tasks. Most, if not all, of the existing alignment methods are solely based on the topology of the underlying networks. Nonetheless, many real networks often have rich attribute information on nodes and/or edges. In this paper, we propose a family of algorithms FINAL to align attributed networks. The key idea is to leverage the node/edge attribute information to guide (topology-based) alignment process. We formulate this problem from an optimization perspective based on the alignment consistency principle, and develop effective and scalable algorithms to solve it. Our experiments on real networks show that (1) by leveraging the attribute information, our algorithms can significantly improve the alignment accuracy (i.e., up to a 30% improvement over the existing methods); (2) compared with the exact solution, our proposed fast alignment algorithm leads to a more than 10 times speed-up, while preserving a 95% accuracy; and (3) our on-query alignment method scales linearly, with an around 90% ranking accuracy compared with our exact full alignment method and a near real-time response time.",
"title": ""
},
{
"docid": "8d5759855079e2ddaab2e920b93ca2a3",
"text": "In a number of information security scenarios, human beings can be better than technical security measures at detecting threats. This is particularly the case when a threat is based on deception of the user rather than exploitation of a specific technical flaw, as is the case of spear-phishing, application spoofing, multimedia masquerading and other semantic social engineering attacks. Here, we put the concept of the human-as-a-security-sensor to the test with a first case study on a small number of participants subjected to different attacks in a controlled laboratory environment and provided with a mechanism to report these attacks if they spot them. A key challenge is to estimate the reliability of each report, which we address with a machine learning approach. For comparison, we evaluate the ability of known technical security countermeasures in detecting the same threats. This initial proof of concept study shows that the concept is viable.",
"title": ""
},
{
"docid": "0df3d30837edd0e7809ed77743a848db",
"text": "Many language processing tasks can be reduced to breaking the text into segments with prescribed properties. Such tasks include sentence splitting, tokenization, named-entity extraction, and chunking. We present a new model of text segmentation based on ideas from multilabel classification. Using this model, we can naturally represent segmentation problems involving overlapping and non-contiguous segments. We evaluate the model on entity extraction and noun-phrase chunking and show that it is more accurate for overlapping and non-contiguous segments, but it still performs well on simpler data sets for which sequential tagging has been the best method.",
"title": ""
},
{
"docid": "d168bdb3f1117aac53da1fbac0906887",
"text": "Enforcing open source licenses such as the GNU General Public License (GPL), analyzing a binary for possible vulnerabilities, and code maintenance are all situations where it is useful to be able to determine the source code provenance of a binary. While previous work has either focused on computing binary-to-binary similarity or source-to-source similarity, BinPro is the first work we are aware of to tackle the problem of source-to-binary similarity. BinPro can match binaries with their source code even without knowing which compiler was used to produce the binary, or what optimization level was used with the compiler. To do this, BinPro utilizes machine learning to compute optimal code features for determining binaryto-source similarity and a static analysis pipeline to extract and compute similarity based on those features. Our experiments show that on average BinPro computes a similarity of 81% for matching binaries and source code of the same applications, and an average similarity of 25% for binaries and source code of similar but different applications. This shows that BinPro’s similarity score is useful for determining if a binary was derived from a particular source code.",
"title": ""
},
{
"docid": "f7ef3c104fe6c5f082e7dd060a82c03e",
"text": "Research about the artificial muscle made of fishing lines or sewing threads, called the twisted and coiled polymer actuator (abbreviated as TCA in this paper) has collected many interests, recently. Since TCA has a specific power surpassing the human skeletal muscle theoretically, it is expected to be a new generation of the artificial muscle actuator. In order that the TCA is utilized as a useful actuator, this paper introduces the fabrication and the modeling of the temperature-controllable TCA. With an embedded micro thermistor, the TCA is able to measure temperature directly, and feedback control is realized. The safe range of the force and temperature for the continuous use of the TCA was identified through experiments, and the closed-loop temperature control is successfully performed without the breakage of TCA.",
"title": ""
},
{
"docid": "77812e38f7250bc23e5157554bb101bc",
"text": "PinOS is an extension of the Pin dynamic instrumentation framework for whole-system instrumentation, i.e., to instrument both kernel and user-level code. It achieves this by interposing between the subject system and hardware using virtualization techniques. Specifically, PinOS is built on top of the Xen virtual machine monitor with Intel VT technology to allow instrumentation of unmodified OSes. PinOS is based on software dynamic translation and hence can perform pervasive fine-grain instrumentation. By inheriting the powerful instrumentation API from Pin, plus introducing some new API for system-level instrumentation, PinOS can be used to write system-wide instrumentation tools for tasks like program analysis and architectural studies. As of today, PinOS can boot Linux on IA-32 in uniprocessor mode, and can instrument complex applications such as database and web servers.",
"title": ""
},
{
"docid": "98110985cd175f088204db452a152853",
"text": "We propose an automatic method to infer high dynamic range illumination from a single, limited field-of-view, low dynamic range photograph of an indoor scene. In contrast to previous work that relies on specialized image capture, user input, and/or simple scene models, we train an end-to-end deep neural network that directly regresses a limited field-of-view photo to HDR illumination, without strong assumptions on scene geometry, material properties, or lighting. We show that this can be accomplished in a three step process: 1) we train a robust lighting classifier to automatically annotate the location of light sources in a large dataset of LDR environment maps, 2) we use these annotations to train a deep neural network that predicts the location of lights in a scene from a single limited field-of-view photo, and 3) we fine-tune this network using a small dataset of HDR environment maps to predict light intensities. This allows us to automatically recover high-quality HDR illumination estimates that significantly outperform previous state-of-the-art methods. Consequently, using our illumination estimates for applications like 3D object insertion, produces photo-realistic results that we validate via a perceptual user study.",
"title": ""
},
{
"docid": "0ff90bc5ff6ecbb5c9d89902fce1fa0a",
"text": "Improving a decision maker’s1 situational awareness of the cyber domain isn’t greatly different than enabling situation awareness in more traditional domains2. Situation awareness necessitates working with processes capable of identifying domain specific activities as well as processes capable of identifying activities that cross domains. These processes depend on the context of the environment, the domains, and the goals and interests of the decision maker but they can be defined to support any domain. This chapter will define situation awareness in its broadest sense, describe our situation awareness reference and process models, describe some of the applicable processes, and identify a set of metrics usable for measuring the performance of a capability supporting situation awareness. These techniques are independent of domain but this chapter will also describe how they apply to the cyber domain. 2.1 What is Situation Awareness (SA)? One of the challenges in working in this area is that there are a multitude of definitions and interpretations concerning the answer to this simple question. A keyword search (executed on 8 April 2009) of ‘situation awareness’ on Google yields over 18,000,000 links the first page of which ranged from a Wikipedia page through the importance of “SA while driving” and ends with a link to a free internet radio show. Also on this first search page are several links to publications by Dr. Mica Endsley whose work in SA is arguably providing a standard for SA definitions and George P. Tadda and John S. Salerno, Air Force Research Laboratory Rome NY 1 Decision maker is used very loosely to describe anyone who uses information to make decisions within a complex dynamic environment. This is necessary because, as will be discussed, situation awareness is unique and dependant on the environment being considered, the context of the decision to be made, and the user of the information. 2 Traditional domains could include land, air, or sea. S. Jajodia et al., (eds.), Cyber Situational Awareness, 15 Advances in Information Security 46, DOI 10.1007/978-1-4419-0140-8 2, c © Springer Science+Business Media, LLC 2010 16 George P. Tadda and John S. Salerno techniques particularly for dynamic environments. In [5], Dr. Endsley provides a general definition of SA in dynamic environments: “Situation awareness is the perception of the elements of the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future.” Also in [5], Endsley differentiates between situation awareness, “a state of knowledge”, and situation assessment, “process of achieving, acquiring, or maintaining SA.” This distinction becomes exceedingly important when trying to apply computer automation to SA. Since situation awareness is “a state of knowledge”, it resides primarily in the minds of humans (cognitive), while situation assessment as a process or set of processes lends itself to automated techniques. Endsley goes on to note that: “SA, decision making, and performance are different stages with different factors influencing them and with wholly different approaches for dealing with each of them; thus it is important to treat these constructs separately.” The “stages” that Endsley defines have a direct correlation with Boyd’s ubiquitous OODA loop with SA relating to Observe and Orient, decision making to Decide, and performance to Act. 
We’ll see these stages as well as Endsley’s three “levels” of SA (perception, comprehension, and projection) manifest themselves again throughout this discussion. As first mentioned, there are several definitions for SA. From the Army Field Manual 1-02 (September 2004), Situational Awareness is: “Knowledge and understanding of the current situation which promotes timely, relevant and accurate assessment of friendly, competitive and other operations within the battlespace in order to facilitate decision making. An informational perspective and skill that fosters an ability to determine quickly the context and relevance of events that are unfolding.”",
"title": ""
}
] | scidocsrr |
370c6f1eee3d5470541dfaf9052d800c | Regressing a 3D Face Shape from a Single Image | [
{
"docid": "9ecb74866ca42b7fd559145deaed52a4",
"text": "We present an efficient and robust method of locating a set of feature points in an object of interest. From a training set we construct a joint model of the appearance of each feature together with their relative positions. The model is fitted to an unseen image in an iterative manner by generating templates using the joint model and the current parameter estimates, correlating the templates with the target image to generate response images and optimising the shape parameters so as to maximise the sum of responses. The appearance model is similar to that used in the Active Appearance Models (AAM) [T.F. Cootes, G.J. Edwards, C.J. Taylor, Active appearance models, in: Proceedings of the 5th European Conference on Computer Vision 1998, vol. 2, Freiburg, Germany, 1998.]. However in our approach the appearance model is used to generate likely feature templates, instead of trying to approximate the image pixels directly. We show that when applied to a wide range of data sets, our Constrained Local Model (CLM) algorithm is more robust and more accurate than the AAM search method, which relies on the image reconstruction error to update the model parameters. We demonstrate improved localisation accuracy on photographs of human faces, magnetic resonance (MR) images of the brain and a set of dental panoramic tomograms. We also show improved tracking performance on a challenging set of in car video sequences. 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "1d73817f8b1b54a82308106ee526a62b",
"text": "To enable real-time, person-independent 3D registration from 2D video, we developed a 3D cascade regression approach in which facial landmarks remain invariant across pose over a range of approximately 60 degrees. From a single 2D image of a person's face, a dense 3D shape is registered in real time for each frame. The algorithm utilizes a fast cascade regression framework trained on high-resolution 3D face-scans of posed and spontaneous emotion expression. The algorithm first estimates the location of a dense set of markers and their visibility, then reconstructs face shapes by fitting a part-based 3D model. Because no assumptions are required about illumination or surface properties, the method can be applied to a wide range of imaging conditions that include 2D video and uncalibrated multi-view video. The method has been validated in a battery of experiments that evaluate its precision of 3D reconstruction and extension to multi-view reconstruction. Experimental findings strongly support the validity of real-time, 3D registration and reconstruction from 2D video. The software is available online at http://zface.org.",
"title": ""
},
{
"docid": "2f1ba4ba5cff9a6e614aa1a781bf1b13",
"text": "Face information processing relies on the quality of data resource. From the data modality point of view, a face database can be 2D or 3D, and static or dynamic. From the task point of view, the data can be used for research of computer based automatic face recognition, face expression recognition, face detection, or cognitive and psychological investigation. With the advancement of 3D imaging technologies, 3D dynamic facial sequences (called 4D data) have been used for face information analysis. In this paper, we focus on the modality of 3D dynamic data for the task of facial expression recognition. We present a newly created high-resolution 3D dynamic facial expression database, which is made available to the scientific research community. The database contains 606 3D facial expression sequences captured from 101 subjects of various ethnic backgrounds. The database has been validated through our facial expression recognition experiment using an HMM based 3D spatio-temporal facial descriptor. It is expected that such a database shall be used to facilitate the facial expression analysis from a static 3D space to a dynamic 3D space, with a goal of scrutinizing facial behavior at a higher level of detail in a real 3D spatio-temporal domain.",
"title": ""
}
] | [
{
"docid": "c4577ac95efb55a07e0748a10a9d4658",
"text": "This paper describes the design of a six-axis microelectromechanical systems (MEMS) force-torque sensor. A movable body is suspended by flexures that allow deflections and rotations along the x-, y-, and z-axes. The orientation of this movable body is sensed by seven capacitors. Transverse sensing is used for all capacitors, resulting in a high sensitivity. A batch fabrication process is described as capable of fabricating these multiaxis sensors with a high yield. The force sensor is experimentally investigated, and a multiaxis calibration method is described. Measurements show that the resolution is on the order of a micro-Newton and nano-Newtonmeter. This is the first six-axis MEMS force sensor that has been successfully developed.",
"title": ""
},
{
"docid": "f6574fbbdd53b2bc92af485d6c756df0",
"text": "A comparative analysis between Nigerian English (NE) and American English (AE) is presented in this article. The study is aimed at highlighting differences in the speech parameters, and how they influence speech processing and automatic speech recognition (ASR). The UILSpeech corpus of Nigerian-Accented English isolated word recordings, read speech utterances, and video recordings are used as a reference for Nigerian English. The corpus captures the linguistic diversity of Nigeria with data collected from native speakers of Hausa, Igbo, and Yoruba languages. The UILSpeech corpus is intended to provide a unique opportunity for application and expansion of speech processing techniques to a limited resource language dialect. The acoustic-phonetic differences between American English (AE) and Nigerian English (NE) are studied in terms of pronunciation variations, vowel locations in the formant space, mean fundamental frequency, and phone model distances in the acoustic space, as well as through visual speech analysis of the speakers’ articulators. A strong impact of the AE–NE acoustic mismatch on ASR is observed. A combination of model adaptation and extension of the AE lexicon for newly established NE pronunciation variants is shown to substantially improve performance of the AE-trained ASR system in the new NE task. This study is a part of the pioneering efforts towards incorporating speech technology in Nigerian English and is intended to provide a development basis for other low resource language dialects and languages.",
"title": ""
},
{
"docid": "27e60092f83e7572a5a7776113d8c97c",
"text": "Although cuckoo hashing has significant applications in both theoretical and practical settings, a relevant downside is that it requires lookups to multiple locations. In many settings, where lookups are expensive, cuckoo hashing becomes a less compelling alternative. One such standard setting is when memory is arranged in large pages, and a major cost is the number of page accesses. We propose the study of cuckoo hashing with pages, advocating approaches where each key has several possible locations, or cells, on a single page, and additional choices on a second backup page. We show experimentally that with k cell choices on one page and a single backup cell choice, one can achieve nearly the same loads as when each key has k+1 random cells to choose from, with most lookups requiring just one page access, even when keys are placed online using a simple algorithm. While our results are currently experimental, they suggest several interesting new open theoretical questions for cuckoo hashing with pages.",
"title": ""
},
{
"docid": "deed140862c62fa8be4a8a58ffc1d7dc",
"text": "Gender-affirmation surgery is often the final gender-confirming medical intervention sought by those patients suffering from gender dysphoria. In the male-to-female (MtF) transgendered patient, the creation of esthetic and functional external female genitalia with a functional vaginal channel is of the utmost importance. The aim of this review and meta-analysis is to evaluate the epidemiology, presentation, management, and outcomes of neovaginal complications in the MtF transgender reassignment surgery patients. PUBMED was searched in accordance with PRISMA guidelines for relevant articles (n = 125). Ineligible articles were excluded and articles meeting all inclusion criteria went on to review and analysis (n = 13). Ultimately, studies reported on 1,684 patients with an overall complication rate of 32.5% and a reoperation rate of 21.7% for non-esthetic reasons. The most common complication was stenosis of the neo-meatus (14.4%). Wound infection was associated with an increased risk of all tissue-healing complications. Use of sacrospinous ligament fixation (SSL) was associated with a significantly decreased risk of prolapse of the neovagina. Gender-affirmation surgery is important in the treatment of gender dysphoric patients, but there is a high complication rate in the reported literature. Variability in technique and complication reporting standards makes it difficult to assess the accurately the current state of MtF gender reassignment surgery. Further research and implementation of standards is necessary to improve patient outcomes. Clin. Anat. 31:191-199, 2018. © 2017 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "fb75e0c18c4852afac162b60554b67b1",
"text": "OBJECTIVE\nTo evaluate the feasibility and safety of home rehabilitation of the hand using a robotic glove, and, in addition, its effectiveness, in hemiplegic patients after stroke.\n\n\nMETHODS\nIn this non-randomized pilot study, 21 hemiplegic stroke patients (Ashworth spasticity index ≤ 3) were prescribed, after in-hospital rehabilitation, a 2-month home-program of intensive hand training using the Gloreha Lite glove that provides computer-controlled passive mobilization of the fingers. Feasibility was measured by: number of patients who completed the home-program, minutes of exercise and number of sessions/patient performed. Safety was assessed by: hand pain with a visual analog scale (VAS), Ashworth spasticity index for finger flexors, opponents of the thumb and wrist flexors, and hand edema (circumference of forearm, wrist and fingers), measured at start (T0) and end (T1) of rehabilitation. Hand motor function (Motricity Index, MI), fine manual dexterity (Nine Hole Peg Test, NHPT) and strength (Grip test) were also measured at T0 and T1.\n\n\nRESULTS\nPatients performed, over a mean period 56 (49-63) days, a total of 1699 (1353-2045) min/patient of exercise with Gloreha Lite, 5.1 (4.3-5.8) days/week. Seventeen patients (81%) completed the full program. The mean VAS score of hand pain, Ashworth spasticity index and hand edema did not change significantly at T1 compared to T0. The MI, NHPT and Grip test improved significantly (p = 0.0020, 0.0156 and 0.0024, respectively) compared to baseline.\n\n\nCONCLUSION\nGloreha Lite is feasible and safe for use in home rehabilitation. The efficacy data show a therapeutic effect which need to be confirmed by a randomized controlled study.",
"title": ""
},
{
"docid": "cebc36cd572740069ab22e8181c405c4",
"text": "Dealing with high-dimensional input spaces, like visual input, is a challenging task for reinforcement learning (RL). Neuroevolution (NE), used for continuous RL problems, has to either reduce the problem dimensionality by (1) compressing the representation of the neural network controllers or (2) employing a pre-processor (compressor) that transforms the high-dimensional raw inputs into low-dimensional features. In this paper, we are able to evolve extremely small recurrent neural network (RNN) controllers for a task that previously required networks with over a million weights. The high-dimensional visual input, which the controller would normally receive, is first transformed into a compact feature vector through a deep, max-pooling convolutional neural network (MPCNN). Both the MPCNN preprocessor and the RNN controller are evolved successfully to control a car in the TORCS racing simulator using only visual input. This is the first use of deep learning in the context evolutionary RL.",
"title": ""
},
{
"docid": "c3f7a3a4e31a610e6ecc149cede3db30",
"text": "OBJECTIVES\nCross-language qualitative research occurs when a language barrier is present between researchers and participants. The language barrier is frequently mediated through the use of a translator or interpreter. The purpose of this analysis of cross-language qualitative research was threefold: (1) review the methods literature addressing cross-language research; (2) synthesize the methodological recommendations from the literature into a list of criteria that could evaluate how researchers methodologically managed translators and interpreters in their qualitative studies; (3) test these criteria on published cross-language qualitative studies.\n\n\nDATA SOURCES\nA group of 40 purposively selected cross-language qualitative studies found in nursing and health sciences journals.\n\n\nREVIEW METHODS\nThe synthesis of the cross-language methods literature produced 14 criteria to evaluate how qualitative researchers managed the language barrier between themselves and their study participants. To test the criteria, the researcher conducted a summative content analysis framed by discourse analysis techniques of the 40 cross-language studies.\n\n\nRESULTS\nThe evaluation showed that only 6 out of 40 studies met all the criteria recommended by the cross-language methods literature for the production of trustworthy results in cross-language qualitative studies. Multiple inconsistencies, reflecting disadvantageous methodological choices by cross-language researchers, appeared in the remaining 33 studies. To name a few, these included rendering the translator or interpreter as an invisible part of the research process, failure to pilot test interview questions in the participant's language, no description of translator or interpreter credentials, failure to acknowledge translation as a limitation of the study, and inappropriate methodological frameworks for cross-language research.\n\n\nCONCLUSIONS\nThe finding about researchers making the role of the translator or interpreter invisible during the research process supports studies completed by other authors examining this issue. The analysis demonstrated that the criteria produced by this study may provide useful guidelines for evaluating cross-language research and for novice cross-language researchers designing their first studies. Finally, the study also indicates that researchers attempting cross-language studies need to address the methodological issues surrounding language barriers between researchers and participants more systematically.",
"title": ""
},
{
"docid": "9738485d5c61ac43e3a1e101b063dfd5",
"text": "Sentiment analysis is one of the most popular natural language processing techniques. It aims to identify the sentiment polarity (positive, negative, neutral or mixed) within a given text. The proper lexicon knowledge is very important for the lexicon-based sentiment analysis methods since they hinge on using the polarity of the lexical item to determine a text's sentiment polarity. However, it is quite common that some lexical items appear positive in the text of one domain but appear negative in another. In this paper, we propose an innovative knowledge building algorithm to extract sentiment lexicon knowledge through computing their polarity value based on their polarity distribution in text dataset, such as in a set of domain specific reviews. The proposed algorithm was tested by a set of domain microblogs. The results demonstrate the effectiveness of the proposed method. The proposed lexicon knowledge extraction method can enhance the performance of knowledge based sentiment analysis.",
"title": ""
},
{
"docid": "a59f82d98f978701d6a4271db1674d2a",
"text": "Hyperspectral imagery typically provides a wealth of information captured in a wide range of the electromagnetic spectrum for each pixel in the image; however, when used in statistical pattern-classification tasks, the resulting high-dimensional feature spaces often tend to result in ill-conditioned formulations. Popular dimensionality-reduction techniques such as principal component analysis, linear discriminant analysis, and their variants typically assume a Gaussian distribution. The quadratic maximum-likelihood classifier commonly employed for hyperspectral analysis also assumes single-Gaussian class-conditional distributions. Departing from this single-Gaussian assumption, a classification paradigm designed to exploit the rich statistical structure of the data is proposed. The proposed framework employs local Fisher's discriminant analysis to reduce the dimensionality of the data while preserving its multimodal structure, while a subsequent Gaussian mixture model or support vector machine provides effective classification of the reduced-dimension multimodal data. Experimental results on several different multiple-class hyperspectral-classification tasks demonstrate that the proposed approach significantly outperforms several traditional alternatives.",
"title": ""
},
{
"docid": "df70cb4b1d37680cccb7d79bdea5d13b",
"text": "In this paper, we describe a system for automatic construction of user disease progression timelines from their posts in online support groups using minimal supervision. In recent years, several online support groups have been established which has led to a huge increase in the amount of patient-authored text available. Creating systems which can automatically extract important medical events and create disease progression timelines for users from such text can help in patient health monitoring as well as studying links between medical events and users’ participation in support groups. Prior work in this domain has used manually constructed keyword sets to detect medical events. In this work, our aim is to perform medical event detection using minimal supervision in order to develop a more general timeline construction system. Our system achieves an accuracy of 55.17%, which is 92% of the performance achieved by a supervised baseline system.",
"title": ""
},
{
"docid": "b51021e995fc4be50028a0a152db7e7a",
"text": "Human pose estimation using deep neural networks aims to map input images with large variations into multiple body keypoints, which must satisfy a set of geometric constraints and interdependence imposed by the human body model. This is a very challenging nonlinear manifold learning process in a very high dimensional feature space. We believe that the deep neural network, which is inherently an algebraic computation system, is not the most efficient way to capture highly sophisticated human knowledge, for example those highly coupled geometric characteristics and interdependence between keypoints in human poses. In this work, we propose to explore how external knowledge can be effectively represented and injected into the deep neural networks to guide its training process using learned projections that impose proper prior. Specifically, we use the stacked hourglass design and inception-resnet module to construct a fractal network to regress human pose images into heatmaps with no explicit graphical modeling. We encode external knowledge with visual features, which are able to characterize the constraints of human body models and evaluate the fitness of intermediate network output. We then inject these external features into the neural network using a projection matrix learned using an auxiliary cost function. The effectiveness of the proposed inception-resnet module and the benefit in guided learning with knowledge projection is evaluated on two widely used human pose estimation benchmarks. Our approach achieves state-of-the-art performance on both datasets.",
"title": ""
},
{
"docid": "3ed5ec863971e04523a7ede434eaa80d",
"text": "This article reports on the design, implementation, and usage of the CourseMarker (formerly known as CourseMaster) courseware Computer Based Assessment (CBA) system at the University of Nottingham. Students use CourseMarker to solve (programming) exercises and to submit their solutions. CourseMarker returns immediate results and feedback to the students. Educators author a variety of exercises that benefit the students while offering practical benefits. To date, both educators and students have been hampered by CBA software that has been constructed to assess text-based or multiple-choice answers only. Although there exist a few CBA systems with some capability to automatically assess programming coursework, none assess Java programs and none are as flexible, architecture-neutral, robust, or secure as the CourseMarker CBA system.",
"title": ""
},
{
"docid": "0b231777fedf27659b4558aaabb872be",
"text": "Recognizing multiple mixed group activities from one still image is not a hard problem for humans but remains highly challenging for computer recognition systems. When modelling interactions among multiple units (i.e., more than two groups or persons), the existing approaches tend to divide them into interactions between pairwise units. However, no mathematical evidence supports this transformation. Therefore, these approaches’ performance is limited on images containing multiple activities. In this paper, we propose a generative model to provide a more reasonable interpretation for the mixed group activities contained in one image. We design a four level structure and convert the original intra-level interactions into inter-level interactions, in order to implement both interactions among multiple groups and interactions among multiple persons within a group. The proposed four-level structure makes our model more robust against the occlusion and overlap of the visible poses in images. Experimental results demonstrate that our model makes good interpretations for mixed group activities and outperforms the state-of-the-art methods on the Collective Activity Classification dataset.",
"title": ""
},
{
"docid": "eec5034991f82e0d809aba5e3eb94fe2",
"text": "This paper considers John Dewey’s dual reformist-preservationist agenda for education in the context of current debates about the role of experience in management learning. The paper argues for preserving experience-based approaches to management learning by revising the concept of experience to more clearly account for the relationship between personal and social (i.e. , tacit/explicit) knowledge. By reviewing, comparing and extending critiques of Kolb’s experiential learning theory and reconceptualizing the learning process based on post-structural analysis of psychoanalyst Jacque Lacan, the paper defines experience within the context of language and social action. This perspective is contrasted to action, cognition, critical reflection and other experience-based approaches to management learning. Implications for management theory, pedagogy and practice suggest greater emphasis on language and conversation in the learning process. Future directions for research are explored.",
"title": ""
},
{
"docid": "3900864885cf79e33683ec5c595235ad",
"text": "Digital mammogram has become the most effective technique for early breast cancer detection modality. Digital mammogram takes an electronic image of the breast and stores it directly in a computer. High quality mammogram images are high resolution and large size images. Processing these images require high computational capabilities. The transmission of these images over the net is sometimes critical especially if the diagnosis of remote radiologists is required. The aim of this study is to develop an automated system for assisting the analysis of digital mammograms. Computer image processing techniques will be applied to enhance images and this is followed by segmentation of the region of interest (ROI). Subsequently, the textural features will be extracted from the ROI. The texture features will be used to classify the ROIs as either masses or non-masses.",
"title": ""
},
{
"docid": "d0eb7de87f3d6ed3fd6c34a1f0ce47a1",
"text": "STRANGER is an automata-based string analysis tool for finding and eliminating string-related security vulnerabilities in P H applications. STRANGER uses symbolic forward and backward reachability analyses t o compute the possible values that the string expressions can take during progr am execution. STRANGER can automatically (1) prove that an application is free from specified attacks or (2) generate vulnerability signatures that c racterize all malicious inputs that can be used to generate attacks.",
"title": ""
},
{
"docid": "eacf295c0cbd52599a1567c6d4193007",
"text": "Search Ranking and Recommendations are fundamental problems of crucial interest to major Internet companies, including web search engines, content publishing websites and marketplaces. However, despite sharing some common characteristics a one-size-fits-all solution does not exist in this space. Given a large difference in content that needs to be ranked, personalized and recommended, each marketplace has a somewhat unique challenge. Correspondingly, at Airbnb, a short-term rental marketplace, search and recommendation problems are quite unique, being a two-sided marketplace in which one needs to optimize for host and guest preferences, in a world where a user rarely consumes the same item twice and one listing can accept only one guest for a certain set of dates. In this paper we describe Listing and User Embedding techniques we developed and deployed for purposes of Real-time Personalization in Search Ranking and Similar Listing Recommendations, two channels that drive 99% of conversions. The embedding models were specifically tailored for Airbnb marketplace, and are able to capture guest's short-term and long-term interests, delivering effective home listing recommendations. We conducted rigorous offline testing of the embedding models, followed by successful online tests before fully deploying them into production.",
"title": ""
},
{
"docid": "2923652ff988572a40d682e2a459707a",
"text": "Clustering analysis is a descriptive task that seeks to identify homogeneous groups of objects based on the values of their attributes. This paper proposes a new algorithm for K-medoids clustering which runs like the K-means algorithm and tests several methods for selecting initial medoids. The proposed algorithm calculates the distance matrix once and uses it for finding new medoids at every iterative step. We evaluate the proposed algorithm using real and artificial data and compare with the results of other algorithms. The proposed algorithm takes the reduced time in computation with comparable performance as compared to the Partitioning Around Medoids.",
"title": ""
},
{
"docid": "1464f9d7a60a59bfdd6399ea6cd9fd99",
"text": "Table of",
"title": ""
},
{
"docid": "34f8765ca28666cfeb94e324882a71d6",
"text": "We are living in the era of the fourth industrial revolution, namely Industry 4.0. This paper presents the main aspects related to Industry 4.0, the technologies that will enable this revolution, and the main application domains that will be affected by it. The effects that the introduction of Internet of Things (IoT), Cyber-Physical Systems (CPS), crowdsensing, crowdsourcing, cloud computing and big data will have on industrial processes will be discussed. The main objectives will be represented by improvements in: production efficiency, quality and cost-effectiveness; workplace health and safety, as well as quality of working conditions; products’ quality and availability, according to mass customisation requirements. The paper will further discuss the common denominator of these enhancements, i.e., data collection and analysis. As data and information will be crucial for Industry 4.0, crowdsensing and crowdsourcing will introduce new advantages and challenges, which will make most of the industrial processes easier with respect to traditional technologies.",
"title": ""
}
] | scidocsrr |
afb4607b5e8407b9632844376d5681f5 | Turbo and Turbo-Like Codes: Principles and Applications in Telecommunications | [
{
"docid": "48fde3a2cd8781ce675ce116ed8ee861",
"text": "DVB-S2 is the second-generation specification for satellite broad-band applications, developed by the Digital Video Broadcasting (DVB) Project in 2003. The system is structured as a toolkit to allow the implementation of the following satellite applications: TV and sound broadcasting, interactivity (i.e., Internet access), and professional services, such as TV contribution links and digital satellite news gathering. It has been specified around three concepts: best transmission performance approaching the Shannon limit, total flexibility, and reasonable receiver complexity. Channel coding and modulation are based on more recent developments by the scientific community: low density parity check codes are adopted, combined with QPSK, 8PSK, 16APSK, and 32APSK modulations for the system to work properly on the nonlinear satellite channel. The framing structure allows for maximum flexibility in a versatile system and also synchronization in worst case configurations (low signal-to-noise ratios). Adaptive coding and modulation, when used in one-to-one links, then allows optimization of the transmission parameters for each individual user,dependant on path conditions. Backward-compatible modes are also available,allowing existing DVB-S integrated receivers-decoders to continue working during the transitional period. The paper provides a tutorial overview of the DVB-S2 system, describing its main features and performance in various scenarios and applications.",
"title": ""
}
] | [
{
"docid": "901174e2dd911afada2e8ccf245d25f3",
"text": "This article presents the state of the art in passive devices for enhancing limb movement in people with neuromuscular disabilities. Both upper- and lower-limb projects and devices are described. Special emphasis is placed on a passive functional upper-limb orthosis called the Wilmington Robotic Exoskeleton (WREX). The development and testing of the WREX with children with limited arm strength are described. The exoskeleton has two links and 4 degrees of freedom. It uses linear elastic elements that balance the effects of gravity in three dimensions. The experiences of five children with arthrogryposis who used the WREX are described.",
"title": ""
},
{
"docid": "d7310e830f85541aa1d4b94606c1be0c",
"text": "We present a practical framework to automatically detect shadows in real world scenes from a single photograph. Previous works on shadow detection put a lot of effort in designing shadow variant and invariant hand-crafted features. In contrast, our framework automatically learns the most relevant features in a supervised manner using multiple convolutional deep neural networks (ConvNets). The 7-layer network architecture of each ConvNet consists of alternating convolution and sub-sampling layers. The proposed framework learns features at the super-pixel level and along the object boundaries. In both cases, features are extracted using a context aware window centered at interest points. The predicted posteriors based on the learned features are fed to a conditional random field model to generate smooth shadow contours. Our proposed framework consistently performed better than the state-of-the-art on all major shadow databases collected under a variety of conditions.",
"title": ""
},
{
"docid": "0387b6a593502a9c74ee62cd8eeec886",
"text": "Recently, very deep networks, with as many as hundreds of layers, have shown great success in image classification tasks. One key component that has enabled such deep models is the use of “skip connections”, including either residual or highway connections, to alleviate the vanishing and exploding gradient problems. While these connections have been explored for speech, they have mainly been explored for feed-forward networks. Since recurrent structures, such as LSTMs, have produced state-of-the-art results on many of our Voice Search tasks, the goal of this work is to thoroughly investigate different approaches to adding depth to recurrent structures. Specifically, we experiment with novel Highway-LSTM models with bottlenecks skip connections and show that a 10 layer model can outperform a state-of-the-art 5 layer LSTM model with the same number of parameters by 2% relative WER. In addition, we experiment with Recurrent Highway layers and find these to be on par with Highway-LSTM models, when given sufficient depth.",
"title": ""
},
{
"docid": "d05a179a28cab9cb47be0638ae7b525c",
"text": "Ionizing radiation effects on CMOS image sensors (CIS) manufactured using a 0.18 mum imaging technology are presented through the behavior analysis of elementary structures, such as field oxide FET, gated diodes, photodiodes and MOSFETs. Oxide characterizations appear necessary to understand ionizing dose effects on devices and then on image sensors. The main degradations observed are photodiode dark current increases (caused by a generation current enhancement), minimum size NMOSFET off-state current rises and minimum size PMOSFET radiation induced narrow channel effects. All these effects are attributed to the shallow trench isolation degradation which appears much more sensitive to ionizing radiation than inter layer dielectrics. Unusual post annealing effects are reported in these thick oxides. Finally, the consequences on sensor design are discussed thanks to an irradiated pixel array and a comparison with previous work is discussed.",
"title": ""
},
{
"docid": "3d846789f15f5a70cd36b45f00c6861a",
"text": "Web-based businesses succeed by cultivating consumers' trust, starting with their beliefs, attitudes, intentions, and willingness to perform transactions at Web sites and with the organizations behind them.",
"title": ""
},
{
"docid": "558abc8028d1d5b6956d2cf046efb983",
"text": "A key question concerns the extent to which sexual differentiation of human behavior is influenced by sex hormones present during sensitive periods of development (organizational effects), as occurs in other mammalian species. The most important sensitive period has been considered to be prenatal, but there is increasing attention to puberty as another organizational period, with the possibility of decreasing sensitivity to sex hormones across the pubertal transition. In this paper, we review evidence that sex hormones present during the prenatal and pubertal periods produce permanent changes to behavior. There is good evidence that exposure to high levels of androgens during prenatal development results in masculinization of activity and occupational interests, sexual orientation, and some spatial abilities; prenatal androgens have a smaller effect on gender identity, and there is insufficient information about androgen effects on sex-linked behavior problems. There is little good evidence regarding long-lasting behavioral effects of pubertal hormones, but there is some suggestion that they influence gender identity and perhaps some sex-linked forms of psychopathology, and there are many opportunities to study this issue.",
"title": ""
},
{
"docid": "0d30cfe8755f146ded936aab55cb80d3",
"text": "In this study, we investigated a pattern-recognition technique based on an artificial neural network (ANN), which is called a massive training artificial neural network (MTANN), for reduction of false positives in computerized detection of lung nodules in low-dose computed tomography (CT) images. The MTANN consists of a modified multilayer ANN, which is capable of operating on image data directly. The MTANN is trained by use of a large number of subregions extracted from input images together with the teacher images containing the distribution for the \"likelihood of being a nodule.\" The output image is obtained by scanning an input image with the MTANN. The distinction between a nodule and a non-nodule is made by use of a score which is defined from the output image of the trained MTANN. In order to eliminate various types of non-nodules, we extended the capability of a single MTANN, and developed a multiple MTANN (Multi-MTANN). The Multi-MTANN consists of plural MTANNs that are arranged in parallel. Each MTANN is trained by using the same nodules, but with a different type of non-nodule. Each MTANN acts as an expert for a specific type of non-nodule, e.g., five different MTANNs were trained to distinguish nodules from various-sized vessels; four other MTANNs were applied to eliminate some other opacities. The outputs of the MTANNs were combined by using the logical AND operation such that each of the trained MTANNs eliminated none of the nodules, but removed the specific type of non-nodule with which the MTANN was trained, and thus removed various types of non-nodules. The Multi-MTANN consisting of nine MTANNs was trained with 10 typical nodules and 10 non-nodules representing each of nine different non-nodule types (90 training non-nodules overall) in a training set. The trained Multi-MTANN was applied to the reduction of false positives reported by our current computerized scheme for lung nodule detection based on a database of 63 low-dose CT scans (1765 sections), which contained 71 confirmed nodules including 66 biopsy-confirmed primary cancers, from a lung cancer screening program. The Multi-MTANN was applied to 58 true positives (nodules from 54 patients) and 1726 false positives (non-nodules) reported by our current scheme in a validation test; these were different from the training set. The results indicated that 83% (1424/1726) of non-nodules were removed with a reduction of one true positive (nodule), i.e., a classification sensitivity of 98.3% (57 of 58 nodules). By using the Multi-MTANN, the false-positive rate of our current scheme was improved from 0.98 to 0.18 false positives per section (from 27.4 to 4.8 per patient) at an overall sensitivity of 80.3% (57/71).",
"title": ""
},
{
"docid": "19d6ad18011815602854685211847c52",
"text": "This paper presents a method for learning an And-Or model to represent context and occlusion for car detection and viewpoint estimation. The learned And-Or model represents car-to-car context and occlusion configurations at three levels: (i) spatially-aligned cars, (ii) single car under different occlusion configurations, and (iii) a small number of parts. The And-Or model embeds a grammar for representing large structural and appearance variations in a reconfigurable hierarchy. The learning process consists of two stages in a weakly supervised way (i.e., only bounding boxes of single cars are annotated). First, the structure of the And-Or model is learned with three components: (a) mining multi-car contextual patterns based on layouts of annotated single car bounding boxes, (b) mining occlusion configurations between single cars, and (c) learning different combinations of part visibility based on CAD simulations. The And-Or model is organized in a directed and acyclic graph which can be inferred by Dynamic Programming. Second, the model parameters (for appearance, deformation and bias) are jointly trained using Weak-Label Structural SVM. In experiments, we test our model on four car detection datasets-the KITTI dataset [1] , the PASCAL VOC2007 car dataset [2] , and two self-collected car datasets, namely the Street-Parking car dataset and the Parking-Lot car dataset, and three datasets for car viewpoint estimation-the PASCAL VOC2006 car dataset [2] , the 3D car dataset [3] , and the PASCAL3D+ car dataset [4] . Compared with state-of-the-art variants of deformable part-based models and other methods, our model achieves significant improvement consistently on the four detection datasets, and comparable performance on car viewpoint estimation.",
"title": ""
},
{
"docid": "2d5f6f0bd7ff91525fb130fd785ce281",
"text": "Security flaws are open doors to attack embedded systems and must be carefully assessed in order to determine threats to safety and security. Subsequently securing a system, that is, integrating security mechanisms into the system's architecture can itself impact the system's safety, for instance deadlines could be missed due to an increase in computations and communications latencies. SysML-Sec addresses these issues with a model-driven approach that promotes the collaboration between system designers and security experts at all design and development stages, e.g., requirements, attacks, partitioning, design, and validation. A central point of SysML-Sec is its partitioning stage during which safety-related and security-related functions are explored jointly and iteratively with regards to requirements and attacks. Once partitioned, the system is designed in terms of system's functions and security mechanisms, and formally verified from both the safety and the security perspectives. Our paper illustrates the whole methodology with the evaluation of a security mechanism added to an existing automotive system.",
"title": ""
},
{
"docid": "f785636331f737d8dc14b6958277553f",
"text": "This paper focuses on subword-based Neural Machine Translation (NMT). We hypothesize that in the NMT model, the appropriate subword units for the following three modules (layers) can differ: (1) the encoder embedding layer, (2) the decoder embedding layer, and (3) the decoder output layer. We find the subword based on Sennrich et al. (2016) has a feature that a large vocabulary is a superset of a small vocabulary and modify the NMT model enables the incorporation of several different subword units in a single embedding layer. We refer these small subword features as hierarchical subword features. To empirically investigate our assumption, we compare the performance of several different subword units and hierarchical subword features for both the encoder and decoder embedding layers. We confirmed that incorporating hierarchical subword features in the encoder consistently improves BLEU scores on the IWSLT evaluation datasets. Title and Abstract in Japanese 階層的部分単語特徴を用いたニューラル機械翻訳 本稿では、部分単語 (subword) を用いたニューラル機械翻訳 (Neural Machine Translation, NMT)に着目する。NMTモデルでは、エンコーダの埋め込み層、デコーダの埋め込み層お よびデコーダの出力層の 3箇所で部分単語が用いられるが、それぞれの層で適切な部分単 語単位は異なるという仮説を立てた。我々は、Sennrich et al. (2016)に基づく部分単語は、 大きな語彙集合が小さい語彙集合を必ず包含するという特徴を利用して、複数の異なる部 分単語列を同時に一つの埋め込み層として扱えるよう NMTモデルを改良する。以降、こ の小さな語彙集合特徴を階層的部分単語特徴と呼ぶ。本仮説を検証するために、様々な部 分単語単位や階層的部分単語特徴をエンコーダ・デコーダの埋め込み層に適用して、その 精度の変化を確認する。IWSLT評価セットを用いた実験により、エンコーダ側で階層的な 部分単語を用いたモデルは BLEUスコアが一貫して向上することが確認できた。",
"title": ""
},
{
"docid": "7056b8e792a2bd1535cf020b2aeab2c7",
"text": "The authors propose a theoretical model linking achievement goals and achievement emotions to academic performance. This model was tested in a prospective study with undergraduates (N 213), using exam-specific assessments of both goals and emotions as predictors of exam performance in an introductory-level psychology course. The findings were consistent with the authors’ hypotheses and supported all aspects of the proposed model. In multiple regression analysis, achievement goals (mastery, performance approach, and performance avoidance) were shown to predict discrete achievement emotions (enjoyment, boredom, anger, hope, pride, anxiety, hopelessness, and shame), achievement emotions were shown to predict performance attainment, and 7 of the 8 focal emotions were documented as mediators of the relations between achievement goals and performance attainment. All of these findings were shown to be robust when controlling for gender, social desirability, positive and negative trait affectivity, and scholastic ability. The results are discussed with regard to the underdeveloped literature on discrete achievement emotions and the need to integrate conceptual and applied work on achievement goals and achievement emotions.",
"title": ""
},
{
"docid": "3e83d63920d7d8650a2eeaa2e68ec640",
"text": "Antibiotic resistance consists of a dynamic web. In this review, we describe the path by which different antibiotic residues and antibiotic resistance genes disseminate among relevant reservoirs (human, animal, and environmental settings), evaluating how these events contribute to the current scenario of antibiotic resistance. The relationship between the spread of resistance and the contribution of different genetic elements and events is revisited, exploring examples of the processes by which successful mobile resistance genes spread across different niches. The importance of classic and next generation molecular approaches, as well as action plans and policies which might aid in the fight against antibiotic resistance, are also reviewed.",
"title": ""
},
{
"docid": "7e941f9534357fca740b97a99e86f384",
"text": "The head-direction (HD) cells found in the limbic system in freely mov ing rats represent the instantaneous head direction of the animal in the horizontal plane regardless of the location of the animal. The internal direction represented by these cells uses both self-motion information for inertially based updating and familiar visual landmarks for calibration. Here, a model of the dynamics of the HD cell ensemble is presented. The stability of a localized static activity profile in the network and a dynamic shift mechanism are explained naturally by synaptic weight distribution components with even and odd symmetry, respectively. Under symmetric weights or symmetric reciprocal connections, a stable activity profile close to the known directional tuning curves will emerge. By adding a slight asymmetry to the weights, the activity profile will shift continuously without disturbances to its shape, and the shift speed can be controlled accurately by the strength of the odd-weight component. The generic formulation of the shift mechanism is determined uniquely within the current theoretical framework. The attractor dynamics of the system ensures modality-independence of the internal representation and facilitates the correction for cumulative error by the putative local-view detectors. The model offers a specific one-dimensional example of a computational mechanism in which a truly world-centered representation can be derived from observer-centered sensory inputs by integrating self-motion information.",
"title": ""
},
{
"docid": "7f2fcc4b4af761292d3f77ffd1a2f7c3",
"text": "An artificial bee colony (ABC) is a relatively recent swarm intelligence optimization approach. In this paper, we propose the first attempt at applying ABC algorithm in analyzing a microarray gene expression profile. In addition, we propose an innovative feature selection algorithm, minimum redundancy maximum relevance (mRMR), and combine it with an ABC algorithm, mRMR-ABC, to select informative genes from microarray profile. The new approach is based on a support vector machine (SVM) algorithm to measure the classification accuracy for selected genes. We evaluate the performance of the proposed mRMR-ABC algorithm by conducting extensive experiments on six binary and multiclass gene expression microarray datasets. Furthermore, we compare our proposed mRMR-ABC algorithm with previously known techniques. We reimplemented two of these techniques for the sake of a fair comparison using the same parameters. These two techniques are mRMR when combined with a genetic algorithm (mRMR-GA) and mRMR when combined with a particle swarm optimization algorithm (mRMR-PSO). The experimental results prove that the proposed mRMR-ABC algorithm achieves accurate classification performance using small number of predictive genes when tested using both datasets and compared to previously suggested methods. This shows that mRMR-ABC is a promising approach for solving gene selection and cancer classification problems.",
"title": ""
},
{
"docid": "c7f0856c282d1039e44ba6ef50948d32",
"text": "This paper presents the analysis and operation of a three-phase pulsewidth modulation rectifier system formed by the star-connection of three single-phase boost rectifier modules (Y-rectifier) without a mains neutral point connection. The current forming operation of the Y-rectifier is analyzed and it is shown that the phase current has the same high quality and low ripple as the Vienna rectifier. The isolated star point of Y-rectifier results in a mutual coupling of the individual phase module outputs and has to be considered for control of the module dc link voltages. An analytical expression for the coupling coefficients of the Y-rectifier phase modules is derived. Based on this expression, a control concept with reduced calculation effort is designed and it provides symmetric loading of the phase modules and solves the balancing problem of the dc link voltages. The analysis also provides insight that enables the derivation of a control concept for two phase operation, such as in the case of a mains phase failure. The theoretical and simulated results are proved by experimental analysis on a fully digitally controlled, 5.4-kW prototype.",
"title": ""
},
{
"docid": "dcf231b887d7caeec341850507561197",
"text": "Convolutional neural networks (CNNs) have attracted increasing attention in the remote sensing community. Most CNNs only take the last fully-connected layers as features for the classification of remotely sensed images, discarding the other convolutional layer features which may also be helpful for classification purposes. In this paper, we propose a new adaptive deep pyramid matching (ADPM) model that takes advantage of the features from all of the convolutional layers for remote sensing image classification. To this end, the optimal fusing weights for different convolutional layers are learned from the data itself. In remotely sensed scenes, the objects of interest exhibit different scales in distinct scenes, and even a single scene may contain objects with different sizes. To address this issue, we select the CNN with spatial pyramid pooling (SPP-net) as the basic deep network, and further construct a multi-scale ADPM model to learn complementary information from multi-scale images. Our experiments have been conducted using two widely used remote sensing image databases, and the results show that the proposed method significantly improves the performance when compared to other state-of-the-art methods. Keywords—Convolutional neural network (CNN), adaptive deep pyramid matching (ADPM), convolutional features, multi-scale ensemble, remote-sensing scene classification.",
"title": ""
},
{
"docid": "5b7f20103c99a93c46efe4575f012e7d",
"text": "The availability of several Advanced Driver Assistance Systems has put a correspondingly large number of inexpensive, yet capable sensors on production vehicles. By combining this reality with expertise from the DARPA Grand and Urban Challenges in building autonomous driving platforms, we were able to design and develop an Autonomous Valet Parking (AVP) system on a 2006 Volkwagen Passat Wagon TDI using automotive grade sensors. AVP provides the driver with both convenience and safety benefits - the driver can leave the vehicle at the entrance of a parking garage, allowing the vehicle to navigate the structure, find a spot, and park itself. By leveraging existing software modules from the DARPA Urban Challenge, our efforts focused on developing a parking spot detector, a localization system that did not use GPS, and a back-in parking planner. This paper focuses on describing the design and development of the last two modules.",
"title": ""
},
{
"docid": "ba6865dc3c93ac52c9f1050f159b9e1a",
"text": "A review of various properties of ceramic-reinforced aluminium matrix composites is presented in this paper. The properties discussed include microstructural, optical, physical and mechanical behaviour of ceramic-reinforced aluminium matrix composites and effects of reinforcement fraction, particle size, heat treatment and extrusion process on these properties. The results obtained by many researchers indicated the uniform distribution of reinforced particles with localized agglomeration at some places, when the metal matrix composite was processed through stir casting method. The density, hardness, compressive strength and toughness increased with increasing reinforcement fraction; however, these properties may reduce in the presence of porosity in the composite material. The particle size of reinforcements affected the hardness adversely. Tensile strength and flexural strength were observed to be increased up to a certain reinforcement fraction in the composites, beyond which these were reduced. The mechanical properties of the composite materials were improved by either thermal treatment or extrusion process. Initiation and growth of fine microcracks leading to macroscopic failure, ductile failure of the aluminium matrix, combination of particle fracture and particle pull-out, overload failure under tension and brittle fracture were the failure mode and mechanisms, as observed by previous researchers, during fractography analysis of tensile specimens of ceramic-reinforced aluminium matrix composites.",
"title": ""
},
{
"docid": "d74874cf15642c87c7de51e54275f9be",
"text": "We used a three layer Convolutional Neural Network (CNN) to make move predictions in chess. The task was defined as a two-part classification problem: a piece-selector CNN is trained to score which white pieces should be made to move, and move-selector CNNs for each piece produce scores for where it should be moved. This approach reduced the intractable class space in chess by a square root. The networks were trained using 20,000 games consisting of 245,000 moves made by players with an ELO rating higher than 2000 from the Free Internet Chess Server. The piece-selector network was trained on all of these moves, and the move-selector networks trained on all moves made by the respective piece. Black moves were trained on by using a data augmentation to frame it as a move made by the",
"title": ""
},
{
"docid": "9c6601360694b48c137ec2a974635106",
"text": "This paper reports a novel deep architecture referred to as Maxout network In Network (MIN), which can enhance model discriminability and facilitate the process of information abstraction within the receptive field. The proposed network adopts the framework of the recently developed Network In Network structure, which slides a universal approximator, multilayer perceptron (MLP) with rectifier units, to exact features. Instead of MLP, we employ maxout MLP to learn a variety of piecewise linear activation functions and to mediate the problem of vanishing gradients that can occur when using rectifier units. Moreover, batch normalization is applied to reduce the saturation of maxout units by pre-conditioning the model and dropout is applied to prevent overfitting. Finally, average pooling is used in all pooling layers to regularize maxout MLP in order to facilitate information abstraction in every receptive field while tolerating the change of object position. Because average pooling preserves all features in the local patch, the proposed MIN model can enforce the suppression of irrelevant information during training. Our experiments demonstrated the state-of-the-art classification performance when the MIN model was applied to MNIST, CIFAR-10, and CIFAR-100 datasets and comparable performance for SVHN dataset.",
"title": ""
}
] | scidocsrr |
ccb69c95b57ab3b3a726e8ee0c27059c | Improving ChangeDistiller Improving Abstract Syntax Tree based Source Code Change Detection | [
{
"docid": "cd8eeaeb81423fcb1c383f2b60e928df",
"text": "Detecting and representing changes to data is important for active databases, data warehousing, view maintenance, and version and configuration management. Most previous work in change management has dealt with flat-file and relational data; we focus on hierarchically structured data. Since in many cases changes must be computed from old and new versions of the data, we define the hierarchical change detection problem as the problem of finding a \"minimum-cost edit script\" that transforms one data tree to another, and we present efficient algorithms for computing such an edit script. Our algorithms make use of some key domain characteristics to achieve substantially better performance than previous, general-purpose algorithms. We study the performance of our algorithms both analytically and empirically, and we describe the application of our techniques to hierarchically structured documents.",
"title": ""
}
] | [
{
"docid": "1945d4663a49a5e1249e43dc7f64d15b",
"text": "The current generation of adolescents grows up in a media-saturated world. However, it is unclear how media influences the maturational trajectories of brain regions involved in social interactions. Here we review the neural development in adolescence and show how neuroscience can provide a deeper understanding of developmental sensitivities related to adolescents’ media use. We argue that adolescents are highly sensitive to acceptance and rejection through social media, and that their heightened emotional sensitivity and protracted development of reflective processing and cognitive control may make them specifically reactive to emotion-arousing media. This review illustrates how neuroscience may help understand the mutual influence of media and peers on adolescents’ well-being and opinion formation. The current generation of adolescents grows up in a media-saturated world. Here, Crone and Konijn review the neural development in adolescence and show how neuroscience can provide a deeper understanding of developmental sensitivities related to adolescents’ media use.",
"title": ""
},
{
"docid": "963d6b615ffd025723c82c1aabdbb9c6",
"text": "A single high-directivity microstrip patch antenna (MPA) having a rectangular profile, which can substitute a linear array is proposed. It is designed by using genetic algorithms with the advantage of not requiring a feeding network. The patch fits inside an area of 2.54 x 0.25, resulting in a broadside pattern with a directivity of 12 dBi and a fractional impedance bandwidth of 4 %. The antenna is fabricated and the measurements are in good agreement with the simulated results. The genetic MPA provides a similar directivity as linear arrays using a corporate or series feeding, with the advantage that the genetic MPA results in more bandwidth.",
"title": ""
},
{
"docid": "909405e3c06f22273107cb70a40d88c6",
"text": "This paper reports a 6-bit 220-MS/s time-interleaving successive approximation register analog-to-digital converter (SAR ADC) for low-power low-cost CMOS integrated systems. The major concept of the design is based on the proposed set-and-down capacitor switching method in the DAC capacitor array. Compared to the conventional switching method, the average switching energy is reduced about 81%. At 220-MS/s sampling rate, the measured SNDR and SFDR are 32.62 dB and 48.96 dB respectively. The resultant ENOB is 5.13 bits. The total power consumption is 6.8 mW. Fabricated in TSMC 0.18-µm 1P5M Digital CMOS technology, the ADC only occupies 0.032 mm2 active area.",
"title": ""
},
{
"docid": "f8947be81285e037eef69c5d2fcb94fb",
"text": "To build a flexible and an adaptable architecture network supporting variety of services and their respective requirements, 5G NORMA introduced a network of functions based architecture breaking the major design principles followed in the current network of entities based architecture. This revolution exploits the advantages of the new technologies like Software-Defined Networking (SDN) and Network Function Virtualization (NFV) in conjunction with the network slicing and multitenancy concepts. In this paper we focus on the concept of Software Defined for Mobile Network Control (SDM-C) network: its definition, its role in controlling the intra network slices resources, its specificity to be QoE aware thanks to the QoE/QoS monitoring and modeling component and its complementarity with the orchestration component called SDM-O. To operate multiple network slices on the same infrastructure efficiently through controlling resources and network functions sharing among instantiated network slices, a common entity named SDM-X is introduced. The proposed design brings a set of new capabilities to make the network energy efficient, a feature that is discussed through some use cases.",
"title": ""
},
{
"docid": "49791684a7a455acc9daa2ca69811e74",
"text": "This paper analyzes the basic method of digital video image processing, studies the vehicle license plate recognition system based on image processing in intelligent transport system, presents a character recognition approach based on neural network perceptron to solve the vehicle license plate recognition in real-time traffic flow. Experimental results show that the approach can achieve better positioning effect, has a certain robustness and timeliness.",
"title": ""
},
{
"docid": "704f4681b724a0e4c7c10fd129f3378b",
"text": "We present an asymptotic fully polynomial approximation scheme for strip-packing, or packing rectangles into a rectangle of xed width and minimum height, a classical NP-hard cutting-stock problem. The algorithm nds a packing of n rectangles whose total height is within a factor of (1 +) of optimal (up to an additive term), and has running time polynomial both in n and in 1==. It is based on a reduction to fractional bin-packing. R esum e Nous pr esentons un sch ema totalement polynomial d'approximation pour la mise en boite de rectangles dans une boite de largeur x ee, avec hauteur mi-nimale, qui est un probleme NP-dur classique, de coupes par guillotine. L'al-gorithme donne un placement des rectangles, dont la hauteur est au plus egale a (1 +) (hauteur optimale) et a un temps d'execution polynomial en n et en 1==. Il utilise une reduction au probleme de la mise en boite fractionaire. Abstract We present an asymptotic fully polynomial approximation scheme for strip-packing, or packing rectangles into a rectangle of xed width and minimum height, a classical N P-hard cutting-stock problem. The algorithm nds a packing of n rectangles whose total height is within a factor of (1 +) of optimal (up to an additive term), and has running time polynomial both in n and in 1==. It is based on a reduction to fractional bin-packing.",
"title": ""
},
{
"docid": "6470c8a921a9095adb96afccaa0bf97b",
"text": "Complex tasks with a visually rich component, like diagnosing seizures based on patient video cases, not only require the acquisition of conceptual but also of perceptual skills. Medical education has found that besides biomedical knowledge (knowledge of scientific facts) clinical knowledge (actual experience with patients) is crucial. One important aspect of clinical knowledge that medical education has hardly focused on, yet, are perceptual skills, like visually searching, detecting, and interpreting relevant features. Research on instructional design has shown that in a visually rich, but simple classification task perceptual skills could be conveyed by means of showing the eye movements of a didactically behaving expert. The current study applied this method to medical education in a complex task. This was done by example video cases, which were verbally explained by an expert. In addition the experimental groups saw a display of the expert’s eye movements recorded, while he performed the task. Results show that blurring non-attended areas of the expert enhances diagnostic performance of epileptic seizures by medical students in contrast to displaying attended areas as a circle and to a control group without attention guidance. These findings show that attention guidance fosters learning of perceptual aspects of clinical knowledge, if implemented in a spotlight manner.",
"title": ""
},
{
"docid": "3d7e7ec8d4d0c2b3167805b2c3ad6e94",
"text": "The Electric Vehicle Routing Problem with Time Windows (EVRPTW) is an extension to the well-known Vehicle Routing Problem with Time Windows (VRPTW) where the fleet consists of electric vehicles (EVs). Since EVs have limited driving range due to their battery capacities they may need to visit recharging stations while servicing the customers along their route. The recharging may take place at any battery level and after the recharging the battery is assumed to be full. In this paper, we relax the full recharge restriction and allow partial recharging (EVRPTW-PR) which is more practical in the real world due to shorter recharging duration. We formulate this problem as 0-1 mixed integer linear program and develop an Adaptive Large Neighborhood Search (ALNS) algorithm to solve it efficiently. We apply several removal and insertion mechanisms by selecting them dynamically and adaptively based on their past performances, including new mechanisms specifically designed for EVRPTW and EVRPTWPR. We test the performance of ALNS by using benchmark instances from the recent literature. The computational results show that the proposed method is effective in finding high quality solutions and the partial recharging option may significantly improve the routing decisions.",
"title": ""
},
{
"docid": "4cd868f43a4a468791d014515800fb04",
"text": "Rescue operations play an important role in disaster management and in most of the cases rescue operation are challenged by the conditions where human intervention is highly unlikely allowed, in such cases a device which can replace human limitations with advanced technology in robotics and humanoids which can track or follow a route to find the targets. In this paper we use Cellular mobile communication technology as communication channel between the transmitter and the receiving robot device. A phone is established between the transmitter mobile phone and the one on robot with a DTMF decoder which receives the motion control commands from the keypad via mobile phone. The implemented system is built around on the ARM7 LPC2148. It processes the information came from sensors and DTMF module and send to the motor driver bridge to control the motors to change direction and position of the robot. This system is designed to use best in the conditions of accidents or incidents happened in coal mining, fire accidents, bore well incidents and so on.",
"title": ""
},
{
"docid": "56f18b39a740dd65fc2907cdef90ac99",
"text": "This paper describes a dynamic artificial neural network based mobile robot motion and path planning system. The method is able to navigate a robot car on flat surface among static and moving obstacles, from any starting point to any endpoint. The motion controlling ANN is trained online with an extended backpropagation through time algorithm, which uses potential fields for obstacle avoidance. The paths of the moving obstacles are predicted with other ANNs for better obstacle avoidance. The method is presented through the realization of the navigation system of a mobile robot.",
"title": ""
},
{
"docid": "1277b7b45f5a54eec80eb8ab47ee3fbb",
"text": "Latent variable models, and probabilistic graphical models more generally, provide a declarative language for specifying prior knowledge and structural relationships in complex datasets. They have a long and rich history in natural language processing, having contributed to fundamental advances such as statistical alignment for translation (Brown et al., 1993), topic modeling (Blei et al., 2003), unsupervised part-of-speech tagging (Brown et al., 1992), and grammar induction (Klein and Manning, 2004), among others. Deep learning, broadly construed, is a toolbox for learning rich representations (i.e., features) of data through numerical optimization. Deep learning is the current dominant paradigm in natural language processing, and some of the major successes include language modeling (Bengio et al., 2003; Mikolov et al., 2010; Zaremba et al., 2014), machine translation (Sutskever et al., 2014; Cho et al., 2014; Bahdanau et al., 2015; Vaswani et al., 2017), and natural language understanding tasks such as question answering and natural language inference.",
"title": ""
},
{
"docid": "5c47c2de88f662f8c6e735b5bb9cd37a",
"text": "Neural Machine Translation (NMT) models are often trained on heterogeneous mixtures of domains, from news to parliamentary proceedings, each with unique distributions and language. In this work we show that training NMT systems on naively mixed data can degrade performance versus models fit to each constituent domain. We demonstrate that this problem can be circumvented, and propose three models that do so by jointly learning domain discrimination and translation. We demonstrate the efficacy of these techniques by merging pairs of domains in three languages: Chinese, French, and Japanese. After training on composite data, each approach outperforms its domain-specific counterparts, with a model based on a discriminator network doing so most reliably. We obtain consistent performance improvements and an average increase of 1.1 BLEU.",
"title": ""
},
{
"docid": "448dc3c1c5207e606f1bd3b386f8bbde",
"text": "Variational autoencoders (VAE) are a powerful and widely-used class of models to learn complex data distributions in an unsupervised fashion. One important limitation of VAEs is the prior assumption that latent sample representations are independent and identically distributed. However, for many important datasets, such as time-series of images, this assumption is too strong: accounting for covariances between samples, such as those in time, can yield to a more appropriate model specification and improve performance in downstream tasks. In this work, we introduce a new model, the Gaussian Process (GP) Prior Variational Autoencoder (GPPVAE), to specifically address this issue. The GPPVAE aims to combine the power of VAEs with the ability to model correlations afforded by GP priors. To achieve efficient inference in this new class of models, we leverage structure in the covariance matrix, and introduce a new stochastic backpropagation strategy that allows for computing stochastic gradients in a distributed and low-memory fashion. We show that our method outperforms conditional VAEs (CVAEs) and an adaptation of standard VAEs in two image data applications.",
"title": ""
},
{
"docid": "b9b5c187df7a83392244d51b2b4f30a7",
"text": "OBJECTIVE\nTo compare the prevalence of anxiety, depression, and stress in medical students from all semesters of a Brazilian medical school and assess their respective associated factors.\n\n\nMETHOD\nA cross-sectional study of students from the twelve semesters of a Brazilian medical school was carried out. Students filled out a questionnaire including sociodemographics, religiosity (DUREL - Duke Religion Index), and mental health (DASS-21 - Depression, Anxiety, and Stress Scale). The students were compared for mental health variables (Chi-squared/ANOVA). Linear regression models were employed to assess factors associated with DASS-21 scores.\n\n\nRESULTS\n761 (75.4%) students answered the questionnaire; 34.6% reported depressive symptomatology, 37.2% showed anxiety symptoms, and 47.1% stress symptoms. Significant differences were found for: anxiety - ANOVA: [F = 2.536, p=0.004] between first and tenth (p=0.048) and first and eleventh (p=0.025) semesters; depression - ANOVA: [F = 2.410, p=0.006] between first and second semesters (p=0.045); and stress - ANOVA: [F = 2.968, p=0.001] between seventh and twelfth (p=0.044), tenth and twelfth (p=0.011), and eleventh and twelfth (p=0.001) semesters. The following factors were associated with (a) stress: female gender, anxiety, and depression; (b) depression: female gender, intrinsic religiosity, anxiety, and stress; and (c) anxiety: course semester, depression, and stress.\n\n\nCONCLUSION\nOur findings revealed high levels of depression, anxiety, and stress symptoms in medical students, with marked differences among course semesters. Gender and religiosity appeared to influence the mental health of the medical students.",
"title": ""
},
{
"docid": "2cd1edeccd5d8b2f8471864a938e7438",
"text": "A large body of evidence supports the hypothesis that mesolimbic dopamine (DA) mediates, in animal models, the reinforcing effects of central nervous system stimulants such as cocaine and amphetamine. The role DA plays in mediating amphetamine-type subjective effects of stimulants in humans remains to be established. Both amphetamine and cocaine increase norepinephrine (NE) via stimulation of release and inhibition of reuptake, respectively. If increases in NE mediate amphetamine-type subjective effects of stimulants in humans, then one would predict that stimulant medications that produce amphetamine-type subjective effects in humans should share the ability to increase NE. To test this hypothesis, we determined, using in vitro methods, the neurochemical mechanism of action of amphetamine, 3,4-methylenedioxymethamphetamine (MDMA), (+)-methamphetamine, ephedrine, phentermine, and aminorex. As expected, their rank order of potency for DA release was similar to their rank order of potency in published self-administration studies. Interestingly, the results demonstrated that the most potent effect of these stimulants is to release NE. Importantly, the oral dose of these stimulants, which produce amphetamine-type subjective effects in humans, correlated with the their potency in releasing NE, not DA, and did not decrease plasma prolactin, an effect mediated by DA release. These results suggest that NE may contribute to the amphetamine-type subjective effects of stimulants in humans.",
"title": ""
},
{
"docid": "ced0dfa1447b86cc5af2952012960511",
"text": "OBJECTIVE\nThe pathophysiology of peptic ulcer disease (PUD) in liver cirrhosis (LC) and chronic hepatitis has not been established. The aim of this study was to assess the role of portal hypertension from PUD in patients with LC and chronic hepatitis.\n\n\nMATERIALS AND METHODS\nWe analyzed the medical records of 455 hepatic vein pressure gradient (HVPG) and esophagogastroduodenoscopy patients who had LC or chronic hepatitis in a single tertiary hospital. The association of PUD with LC and chronic hepatitis was assessed by univariate and multivariate analysis.\n\n\nRESULTS\nA total of 72 PUD cases were detected. PUD was associated with LC more than with chronic hepatitis (odds ratio [OR]: 4.13, p = 0.03). In the univariate analysis, taking an ulcerogenic medication was associated with PUD in patients with LC (OR: 4.34, p = 0.04) and smoking was associated with PUD in patients with chronic hepatitis (OR: 3.61, p = 0.04). In the multivariate analysis, taking an ulcerogenic medication was associated with PUD in patients with LC (OR: 2.93, p = 0.04). However, HVPG was not related to PUD in patients with LC or chronic hepatitis.\n\n\nCONCLUSION\nAccording to the present study, patients with LC have a higher risk of PUD than those with chronic hepatitis. The risk factor was taking ulcerogenic medication. However, HVPG reflecting portal hypertension was not associated with PUD in LC or chronic hepatitis (Clinicaltrial number NCT01944878).",
"title": ""
},
{
"docid": "27c0c6c43012139fc3e4ee64ae043c0b",
"text": "This paper presents a method for measuring signal backscattering from RFID tags, and for calculating a tag's radar cross section (RCS). We derive a theoretical formula for the RCS of an RFID tag with a minimum-scattering antenna. We describe an experimental measurement technique, which involves using a network analyzer connected to an anechoic chamber with and without the tag. The return loss measured in this way allows us to calculate the backscattered power and to find the tag's RCS. Measurements were performed using an RFID tag operating in the UHF band. To determine whether the tag was turned on, we used an RFID tag tester. The tag's RCS was also calculated theoretically, using electromagnetic simulation software. The theoretical results were found to be in good agreement with experimental data",
"title": ""
},
{
"docid": "b37064e74a2c88507eacb9062996a911",
"text": "This article builds a theoretical framework to help explain governance patterns in global value chains. It draws on three streams of literature – transaction costs economics, production networks, and technological capability and firm-level learning – to identify three variables that play a large role in determining how global value chains are governed and change. These are: (1) the complexity of transactions, (2) the ability to codify transactions, and (3) the capabilities in the supply-base. The theory generates five types of global value chain governance – hierarchy, captive, relational, modular, and market – which range from high to low levels of explicit coordination and power asymmetry. The article highlights the dynamic and overlapping nature of global value chain governance through four brief industry case studies: bicycles, apparel, horticulture and electronics.",
"title": ""
},
{
"docid": "d77a9e08115ecda71a126819bb6012d4",
"text": "Music, an abstract stimulus, can arouse feelings of euphoria and craving, similar to tangible rewards that involve the striatal dopaminergic system. Using the neurochemical specificity of [11C]raclopride positron emission tomography scanning, combined with psychophysiological measures of autonomic nervous system activity, we found endogenous dopamine release in the striatum at peak emotional arousal during music listening. To examine the time course of dopamine release, we used functional magnetic resonance imaging with the same stimuli and listeners, and found a functional dissociation: the caudate was more involved during the anticipation and the nucleus accumbens was more involved during the experience of peak emotional responses to music. These results indicate that intense pleasure in response to music can lead to dopamine release in the striatal system. Notably, the anticipation of an abstract reward can result in dopamine release in an anatomical pathway distinct from that associated with the peak pleasure itself. Our results help to explain why music is of such high value across all human societies.",
"title": ""
}
] | scidocsrr |
99fdc4ef43c759bc406f8ab245864965 | Hate Speech Detection: A Solved Problem? The Challenging Case of Long Tail on Twitter | [
{
"docid": "522363d36c93b692265c42f9f3976461",
"text": "In this paper, we propose a novel semi-supervised approach for detecting profanity-related offensive content in Twitter. Our approach exploits linguistic regularities in profane language via statistical topic modeling on a huge Twitter corpus, and detects offensive tweets using automatically these generated features. Our approach performs competitively with a variety of machine learning (ML) algorithms. For instance, our approach achieves a true positive rate (TP) of 75.1% over 4029 testing tweets using Logistic Regression, significantly outperforming the popular keyword matching baseline, which has a TP of 69.7%, while keeping the false positive rate (FP) at the same level as the baseline at about 3.77%. Our approach provides an alternative to large scale hand annotation efforts required by fully supervised learning approaches.",
"title": ""
},
{
"docid": "9a52461cbd746e4e1df5748af37b58ed",
"text": "Irony is a pervasive aspect of many online texts, one made all the more difficult by the absence of face-to-face contact and vocal intonation. As our media increasingly become more social, the problem of irony detection will become even more pressing. We describe here a set of textual features for recognizing irony at a linguistic level, especially in short texts created via social media such as Twitter postings or ‘‘tweets’’. Our experiments concern four freely available data sets that were retrieved from Twitter using content words (e.g. ‘‘Toyota’’) and user-generated tags (e.g. ‘‘#irony’’). We construct a new model of irony detection that is assessed along two dimensions: representativeness and relevance. Initial results are largely positive, and provide valuable insights into the figurative issues facing tasks such as sentiment analysis, assessment of online reputations, or decision making.",
"title": ""
},
{
"docid": "79ece5e02742de09b01908668383e8f2",
"text": "Hate speech in the form of racist and sexist remarks are a common occurrence on social media. For that reason, many social media services address the problem of identifying hate speech, but the definition of hate speech varies markedly and is largely a manual effort (BBC, 2015; Lomas, 2015). We provide a list of criteria founded in critical race theory, and use them to annotate a publicly available corpus of more than 16k tweets. We analyze the impact of various extra-linguistic features in conjunction with character n-grams for hatespeech detection. We also present a dictionary based the most indicative words in our data.",
"title": ""
},
{
"docid": "18403ce2ebb83b9207a7cece82e91ffc",
"text": "Hate speech in the form of racism and sexism is commonplace on the internet (Waseem and Hovy, 2016). For this reason, there has been both an academic and an industry interest in detection of hate speech. The volume of data to be reviewed for creating data sets encourages a use of crowd sourcing for the annotation efforts. In this paper, we provide an examination of the influence of annotator knowledge of hate speech on classification models by comparing classification results obtained from training on expert and amateur annotations. We provide an evaluation on our own data set and run our models on the data set released by Waseem and Hovy (2016). We find that amateur annotators are more likely than expert annotators to label items as hate speech, and that systems trained on expert annotations outperform systems trained on amateur annotations.",
"title": ""
},
{
"docid": "05696249c57c4b0a52ddfd5598a34f00",
"text": "The quality of word representations is frequently assessed using correlation with human judgements of word similarity. Here, we question whether such intrinsic evaluation can predict the merits of the representations for downstream tasks. We study the correlation between results on ten word similarity benchmarks and tagger performance on three standard sequence labeling tasks using a variety of word vectors induced from an unannotated corpus of 3.8 billion words, and demonstrate that most intrinsic evaluations are poor predictors of downstream performance. We argue that this issue can be traced in part to a failure to distinguish specific similarity from relatedness in intrinsic evaluation datasets. We make our evaluation tools openly available to facilitate further study.",
"title": ""
}
] | [
{
"docid": "e882efea987b4f248c0374c1555c668a",
"text": "This paper describes the Sonic Banana, a bend-sensor based alternative MIDI controller.",
"title": ""
},
{
"docid": "759f38a59c5cd0768b3de553ec987bc0",
"text": "In this paper we describe a database of static images of human faces. Images were taken in uncontrolled indoor environment using five video surveillance cameras of various qualities. Database contains 4,160 static images (in visible and infrared spectrum) of 130 subjects. Images from different quality cameras should mimic real-world conditions and enable robust face recognition algorithms testing, emphasizing different law enforcement and surveillance use case scenarios. In addition to database description, this paper also elaborates on possible uses of the database and proposes a testing protocol. A baseline Principal Component Analysis (PCA) face recognition algorithm was tested following the proposed protocol. Other researchers can use these test results as a control algorithm performance score when testing their own algorithms on this dataset. Database is available to research community through the procedure described at www.scface.org .",
"title": ""
},
{
"docid": "869f492020b06dbd7795251858beb6f7",
"text": "Multimodal wearable sensor data classification plays an important role in ubiquitous computing and has a wide range of applications in scenarios from healthcare to entertainment. However, most existing work in this field employs domain-specific approaches and is thus ineffective in complex situations where multi-modality sensor data are collected. Moreover, the wearable sensor data are less informative than the conventional data such as texts or images. In this paper, to improve the adaptability of such classification methods across different application domains, we turn this classification task into a game and apply a deep reinforcement learning scheme to deal with complex situations dynamically. Additionally, we introduce a selective attention mechanism into the reinforcement learning scheme to focus on the crucial dimensions of the data. This mechanism helps to capture extra information from the signal and thus it is able to significantly improve the discriminative power of the classifier. We carry out several experiments on three wearable sensor datasets and demonstrate the competitive performance of the proposed approach compared to several state-of-the-art baselines.",
"title": ""
},
{
"docid": "ff0644de5cd474dbd858c96bb4c76fd9",
"text": "With the growth of the Internet of Things, many insecure embedded devices are entering into our homes and businesses. Some of these web-connected devices lack even basic security protections such as secure password authentication. As a result, thousands of IoT devices have already been infected with malware and enlisted into malicious botnets and many more are left vulnerable to exploitation. In this paper we analyze the practical security level of 16 popular IoT devices from high-end and low-end manufacturers. We present several low-cost black-box techniques for reverse engineering these devices, including software and fault injection based techniques for bypassing password protection. We use these techniques to recover device rmware and passwords. We also discover several common design aws which lead to previously unknown vulnerabilities. We demonstrate the e ectiveness of our approach by modifying a laboratory version of the Mirai botnet to automatically include these devices. We also discuss how to improve the security of IoT devices without signi cantly increasing their cost.",
"title": ""
},
{
"docid": "349773087b8d196f1e9e83463018a52b",
"text": "We introduce two appearance-based methods for clustering a set of images of 3-D objects, acquired under varying illumination conditions, into disjoint subsets corresponding to individual objects. The first algorithm is based on the concept of illumination cones. According to the theory, the clustering problem is equivalent to finding convex polyhedral cones in the high-dimensional image space. To efficiently determine the conic structures hidden in the image data, we introduce the concept of conic affinity which measures the likelihood of a pair of images belonging to the same underlying polyhedral cone. For the second method, we introduce another affinity measure based on image gradient comparisons. The algorithm operates directly on the image gradients by comparing the magnitudes and orientations of the image gradient at each pixel. Both methods have clear geometric motivations, and they operate directly on the images without the need for feature extraction or computation of pixel statistics. We demonstrate experimentally that both algorithms are surprisingly effective in clustering images acquired under varying illumination conditions with two large, well-known image data sets.",
"title": ""
},
{
"docid": "b82750baa5a775a00b72e19d3fd5d2a1",
"text": "We assessed the rate of detection rate of recurrent prostate cancer by PET/CT using anti-3-18F-FACBC, a new synthetic amino acid, in comparison to that using 11C-choline as part of an ongoing prospective single-centre study. Included in the study were 15 patients with biochemical relapse after initial radical treatment of prostate cancer. All the patients underwent anti-3-18F-FACBC PET/CT and 11C-choline PET/CT within a 7-day period. The detection rates using the two compounds were determined and the target–to-background ratios (TBR) of each lesion are reported. No adverse reactions to anti-3-18F-FACBC PET/CT were noted. On a patient basis, 11C-choline PET/CT was positive in 3 patients and negative in 12 (detection rate 20 %), and anti-3-18F-FACBC PET/CT was positive in 6 patients and negative in 9 (detection rate 40 %). On a lesion basis, 11C-choline detected 6 lesions (4 bone, 1 lymph node, 1 local relapse), and anti-3-18F-FACBC detected 11 lesions (5 bone, 5 lymph node, 1 local relapse). All 11C-choline-positive lesions were also identified by anti-3-18F-FACBC PET/CT. The TBR of anti-3-18F-FACBC was greater than that of 11C-choline in 8/11 lesions, as were image quality and contrast. Our preliminary results indicate that anti-3-18F-FACBC may be superior to 11C-choline for the identification of disease recurrence in the setting of biochemical failure. Further studies are required to assess efficacy of anti-3-18F-FACBC in a larger series of prostate cancer patients.",
"title": ""
},
{
"docid": "61f9711b65d142b5537b7d3654bbbc3c",
"text": "Now-a-days as there is prohibitive demand for agricultural industry, effective growth and improved yield of fruit is necessary and important. For this purpose farmers need manual monitoring of fruits from harvest till its progress period. But manual monitoring will not give satisfactory result all the times and they always need satisfactory advice from expert. So it requires proposing an efficient smart farming technique which will help for better yield and growth with less human efforts. We introduce a technique which will diagnose and classify external disease within fruits. Traditional system uses thousands of words which lead to boundary of language. Whereas system that we have come up with, uses image processing techniques for implementation as image is easy way for conveying. In the proposed work, OpenCV library is applied for implementation. K-means clustering method is applied for image segmentation, the images are catalogue and mapped to their respective disease categories on basis of four feature vectors color, morphology, texture and structure of hole on the fruit. The system uses two image databases, one for implementation of query images and the other for training of already stored disease images. Artificial Neural Network (ANN) concept is used for pattern matching and classification of diseases.",
"title": ""
},
{
"docid": "48ea93efe1a1219bfb1a6b48c20bab99",
"text": "Understanding the content of user's image posts is a particularly interesting problem in social networks and web settings. Current machine learning techniques focus mostly on curated training sets of image-label pairs, and perform image classification given the pixels within the image. In this work we instead leverage the wealth of information available from users: firstly, we employ user hashtags to capture the description of image content; and secondly, we make use of valuable contextual information about the user. We show how user metadata (age, gender, etc.) combined with image features derived from a convolutional neural network can be used to perform hashtag prediction. We explore two ways of combining these heterogeneous features into a learning framework: (i) simple concatenation; and (ii) a 3-way multiplicative gating, where the image model is conditioned on the user metadata. We apply these models to a large dataset of de-identified Facebook posts and demonstrate that modeling the user can significantly improve the tag prediction quality over current state-of-the-art methods.",
"title": ""
},
{
"docid": "5663c9fc6eb66c718235e51d8932dab4",
"text": "As the number of academic papers and new technologies soars, it has been increasingly difficult for researchers, especially beginners, to enter a new research field. Researchers often need to study a promising paper in depth to keep up with the forefront of technology. Traditional Query-Oriented study method is time-consuming and even tedious. For a given paper, existent academic search engines like Google Scholar tend to recommend relevant papers, failing to reveal the knowledge structure. The state-of-the-art MapOriented study methods such as AMiner and AceMap can structure scholar information, but they’re too coarse-grained to dig into the underlying principles of a specific paper. To address this problem, we propose a Study-Map Oriented method and a novel model called RIDP (Reference Injection based Double-Damping PageRank) to help researchers study a given paper more efficiently and thoroughly. RIDP integrates newly designed Reference Injection based Topic Analysis method and Double-Damping PageRank algorithm to mine a Study Map out of massive academic papers in order to guide researchers to dig into the underlying principles of a specific paper. Experiment results on real datasets and pilot user studies indicate that our method can help researchers acquire knowledge more efficiently, and grasp knowledge structure systematically.",
"title": ""
},
{
"docid": "6f66eebbe5408c3f4d5118b639fcfec0",
"text": "Various types of incidents and disasters cause huge loss to people's lives and property every year and highlight the need to improve our capabilities to handle natural, health, and manmade emergencies. How to develop emergency management systems that can provide critical decision support to emergency management personnel is considered a crucial issue by researchers and practitioners. Governments, such as the USA, the European Commission, and China, have recognized the importance of emergency management and funded national level emergency management projects during the past decade. Multi-criteria decision making (MCDM) refers to the study of methods and procedures by which concerns about multiple and often competing criteria can be formally incorporated into the management planning process. Over the years, it has evolved as an important field of Operations Research, focusing on issues as: analyzing and evaluating of incompatible criteria and alternatives; modeling decision makers' preferences; developing MCDM-based decision support systems; designing MCDM research paradigm; identifying compromising solutions of multi-criteria decision making problems. İn emergency management, MCDM can be used to evaluate decision alternatives and assist decision makers in making immediate and effective responses under pressures and uncertainties. However, although various approaches and technologies have been developed in the MCDM field to handle decision problems with conflicting criteria in some domains, effective decision support in emergency management requires in depth analysis of current MCDM methods and techniques, and adaptation of these techniques specifically for emergency management. In terms of this basic fact, the guest editors determined that the purpose of this special issue should be “to assess the current state of knowledge about MCDM in emergency management and to generate and throw open for discussion, more ideas, hypotheses and theories, the specific objective being to determine directions for further research”. For this purpose, this special issue presents some new progress about MCDM in emergency management that is expected to trigger thought and deepen further research. For this purpose, 11 papers [1–11] were selected from 41 submissions related to MCDM in emergency management from different countries and regions. All the selected papers went through a standard review process of the journal and the authors of all the papers made necessary revision in terms of reviewing comments. In the selected 11 papers, they can be divided into three categories. The first category focuses on innovative MCDM methods for logistics management, which includes 3 papers. The first paper written by Liberatore et al. [1] is to propose a hierarchical compromise model called RecHADS method for the joint optimization of recovery operations and distribution of emergency goods in humanitarian logistics. In the second paper, Peng et al. [2] applies a system dynamics disruption analysis approach for inventory and logistics planning in the post-seismic supply chain risk management. In the third paper, Rath and Gutjahr [3] present an exact solution method and a mathheuristic method to solve the warehouse location routing problem in disaster relief and obtained the good performance. In the second category, 4 papers about the MCDM-based risk assessment and risk decision-making methods in emergency response and emergency management are selected. 
In terms of the previous order, the fourth paper [4] is to integrate TODIM method and FSE method to formulate a new TODIM-FSE method for risk decision-making support in oil spill response. The fifth paper [5] is to utilize a fault tree analysis (FTA) method to give a risk decision-making solution to emergency response, especially in the case of the H1N1 infectious diseases. Similarly, the sixth paper [6] focuses on an analytic network process (ANP) method for risk assessment and decision analysis, and while the seventh paper [7] applies cumulative prospect theory (CPT) method to risk decision analysis in emergency response. The papers in the third category emphasize on the MCDM methods for disaster assessment and emergence management and four papers are included into this category. In the similar order, the eighth paper [8] is to propose a multi-event and multi-criteria method to evaluate the situation of disaster resilience. In the ninth paper, Kou et al. [9] develop an integrated expert system for fast disaster assessment and obtain the good evaluation performance. Similarly, the 10th paper [10] proposes a multi-objective programming approach to make the optimal decisions for oil-importing plan considering country risk with extreme events. Finally, the last paper [11] in this special issue is to develop a community-based collaborative information system to manage natural and manmade disasters. The guest editors hope that the papers published in this special issue would be of value to academic researchers and business practitioners and would provide a clearer sense of direction for further research, as well as facilitating use of existing methodologies in a more productive manner. The guest editors would like to place on record their sincere thanks to Prof. Stefan Nickel, the Editor-in-Chief of Computers & Operations Research, for this very special opportunity provided to us for contributing to this special issue. The guest editors have to thank all the referees for their kind support and help. Last, but not least, the guest editors would express the gratitude to all authors of submissions in this special issue for their contribution. Without the support of the authors and the referees, it would have been",
"title": ""
},
{
"docid": "be01b960154a975a36ad568cf17b5aca",
"text": "ing Interactions Based on Message Sets Svend Frr 1 and Gul Agha 2. 1 Hewlett-Packard Laboratories, 1501 Page Mill Road, Palo Alto, CA 94303 2 University of Illinois, 1304 W. Springfield Avenue, Urbana, IL 61801 Abs t rac t . An important requirement of programming languages for distributed systems is to provide abstractions for coordination. A common type of coordination requires reactivity in response to arbitrary communication patterns. We have developed a communication model in which concurrent objects can be activated by sets of messages. Specifically, our model allows direct and abstract expression of common interaction patterns found in concurrent systems. For example, the model captures multiple clients that collectively invoke shared servers as a single activation. Furthermore, it supports definition of individual clients that concurrently invoke multiple servers and wait for subsets of the returned reply messages. Message sets are dynamically defined using conjunctive and disjunctive combinators that may depend o n the patterns of messages. The model subsumes existing models for multiRPC and multi-party synchronization within a single, uniform activation framework. 1 I n t r o d u c t i o n Distributed objects are often reactive, i.e. they carry out their actions in response to received response. Tradit ional object-oriented languages require one to one correspondence between response and a receive message: i.e. each response is caused by exactly one message. However, many coordination schemes involve object behaviors whose logical cause is a set of messages rather than a single message. For example, consider a transaction manager in a distributed database system. In order to commit a distributed transaction, the manager must coordinate the action taken at each site involved in the transaction. A two-phase commit protocol is a possible implementation of this coordination pattern. In carrying out a two-phase commit protocol, the manager first sends out a status inquiry to all the sites involved. In response to a status inquiry, each site sends a positive reply if it can commit the transaction; a site sends back a negative reply if it cannot commit the transaction. After sending out inquiries, the manager becomes a reactive object waiting for sites to reply. The logical structure of the manager is to react to a set of replies rather than a single reply: if a positive reply is received from all sites, the manager decides to commit the transaction; if a negative reply is received from any site, the manager must abort the transaction. In tradit ional object-oriented languages, the programmer must implement a response to a set of messages in terms of sequences of responses to single messages. * The reported work was carried out while the first author was affiliated with the University of Illinois. The current emaJl addresses are f rolund@hpl .hp. corn and [email protected], edu",
"title": ""
},
{
"docid": "d56ff4b194c123b19a335e00b38ea761",
"text": "As the automobile industry evolves, a number of in-vehicle communication protocols are developed for different in-vehicle applications. With the emerging new applications towards Internet of Things (IoT), a more integral solution is needed to enable the pervasiveness of intra- and inter-vehicle communications. In this survey, we first introduce different classifications of automobile applications with focus on their bandwidth and latency. Then we survey different in-vehicle communication bus protocols including both legacy protocols and emerging Ethernet. In addition, we highlight our contribution in the field to employ power line as the in-vehicle communication medium. We believe power line communication will play an important part in future automobile which can potentially reduce the amount of wiring, simplify design and reduce cost. Based on these technologies, we also introduce some promising applications in future automobile enabled by the development of in-vehicle network. Finally, We will share our view on how the in-vehicle network can be merged into the future IoT.",
"title": ""
},
{
"docid": "1407b7bd4f597dd64642150629349e5e",
"text": "This paper presents a general trainable framework for object detection in static images of cluttered scenes. The detection technique we develop is based on a wavelet representation of an object class derived from a statistical analysis of the class instances. By learning an object class in terms of a subset of an overcomplete dictionary of wavelet basis functions, we derive a compact representation of an object class which is used as an input to a suppori vector machine classifier. This representation overcomes both the problem of in-class variability and provides a low false detection rate in unconstrained environments. We demonstrate the capabilities of the technique i n two domains whose inherent information content differs significantly. The first system is face detection and the second is the domain of people which, in contrast to faces, vary greatly in color, texture, and patterns. Unlike previous approaches, this system learns from examples and does not rely on any a priori (handcrafted) models or motion-based segmentation. The paper also presents a motion-based extension to enhance the performance of the detection algorithm over video sequences. The results presented here suggest that this architecture may well be quite general.",
"title": ""
},
{
"docid": "e632895c1ab1b994f64ef03260b91acb",
"text": "The modified Brostrom procedure is commonly recommended for reconstruction of the anterior talofibular ligament (ATF) and calcaneofibular ligament (CF) with an advancement of the inferior retinaculum. However, some surgeons perform the modified Bostrom procedure with an semi-single ATF ligament reconstruction and advancement of the inferior retinaculum for simplicity. This study evaluated the initial stability of the modified Brostrom procedure and compared a two ligaments (ATF + CF) reconstruction group with a semi-single ligament (ATF) reconstruction group. Sixteen paired fresh frozen cadaveric ankle joints were used in this study. The ankle joint laxity was measured on the plane radiographs with 150 N anterior drawer force and 150 N varus stress force. The anterior displacement distances and varus tilt angles were measured before and after cutting the ATF and CF ligaments. A two ligaments (ATF + CF) reconstruction with an advancement of the inferior retinaculum was performed on eight left cadaveric ankles, and an semi-single ligament (ATF) reconstruction with an advancement of the inferior retinaculum was performed on eight right cadaveric ankles. The ankle instability was rechecked after surgery. The decreases in instability of the ankle after surgery were measured and the difference in the decrease was compared using a Mann–Whitney U test. The mean decreases in anterior displacement were 3.4 and 4.0 mm in the two ligaments reconstruction and semi-single ligament reconstruction groups, respectively. There was no significant difference between the two groups (P = 0.489). The mean decreases in the varus tilt angle in the two ligaments reconstruction and semi-single ligament reconstruction groups were 12.6° and 12.2°, respectively. There was no significant difference between the two groups (P = 0.399). In this cadaveric study, a substantial level of initial stability can be obtained using an anatomical reconstruction of the anterior talofibular ligament only and reinforcement with the inferior retinaculum. The modified Brostrom procedure with a semi-single ligament (Anterior talofibular ligament) reconstruction with an advancement of the inferior retinaculum can provide as much initial stability as the two ligaments (Anterior talofibular ligament and calcaneofibular ligament) reconstruction procedure.",
"title": ""
},
{
"docid": "34fdd06eb5e5d2bf9266c6852710bed2",
"text": "If subjects are shown an angry face as a target visual stimulus for less than forty milliseconds and are then immediately shown an expressionless mask, these subjects report seeing the mask but not the target. However, an aversively conditioned masked target can elicit an emotional response from subjects without being consciously perceived,. Here we study the mechanism of this unconsciously mediated emotional learning. We measured neural activity in volunteer subjects who were presented with two angry faces, one of which, through previous classical conditioning, was associated with a burst of white noise. In half of the trials, the subjects' awareness of the angry faces was prevented by backward masking with a neutral face. A significant neural response was elicited in the right, but not left, amygdala to masked presentations of the conditioned angry face. Unmasked presentations of the same face produced enhanced neural activity in the left, but not right, amygdala. Our results indicate that, first, the human amygdala can discriminate between stimuli solely on the basis of their acquired behavioural significance, and second, this response is lateralized according to the subjects' level of awareness of the stimuli.",
"title": ""
},
{
"docid": "dcac926ace799d43fedb9c27056a7729",
"text": "Jinsight is a tool for exploring a program’s run-time behavior visually. It is helpful for performance analysis, debugging, and any task in which you need to better understand what your Java program is really doing. Jinsight is designed specifically with object-oriented and multithreaded programs in mind. It exposes many facets of program behavior that elude conventional tools. It reveals object lifetimes and communication, and attendant performance bottlenecks. It shows thread interactions, deadlocks, and garbage collector activity. It can also help you find and fix memory leaks, which remain a hazard despite garbage collection. A user explores program execution through one or more views. Jinsight offers several types of views, each geared toward distinct aspects of object-oriented and multithreaded program behavior. The user has several different perspectives from which to discern performance problems, unexpected behavior, or bugs small and large. Moreover, the views are linked to each other in many ways, allowing navigation from one view to another. Navigation makes the collection of views far more powerful than the sum of their individual strengths.",
"title": ""
},
{
"docid": "73b76fa13443a4c285dc9a97cfaa22dd",
"text": "As mobile ad hoc network applications are deployed, security emerges as a central requirement. In this paper, we introduce the wormhole attack, a severe attack in ad hoc networks that is particularly challenging to defend against. The wormhole attack is possible even if the attacker has not compromised any hosts, and even if all communication provides authenticity and confidentiality. In the wormhole attack, an attacker records packets (or bits) at one location in the network, tunnels them (possibly selectively) to another location, and retransmits them there into the network. The wormhole attack can form a serious threat in wireless networks, especially against many ad hoc network routing protocols and location-based wireless security systems. For example, most existing ad hoc network routing protocols, without some mechanism to defend against the wormhole attack, would be unable to find routes longer than one or two hops, severely disrupting communication. We present a general mechanism, called packet leashes, for detecting and, thus defending against wormhole attacks, and we present a specific protocol, called TIK, that implements leashes. We also discuss topology-based wormhole detection, and show that it is impossible for these approaches to detect some wormhole topologies.",
"title": ""
},
{
"docid": "cd8de770f7c6dbe897d308d0cec23dc0",
"text": "We present Tartanian, a game theory-based player for headsup no-limit Texas Hold’em poker. Tartanian is built from three components. First, to deal with the virtually infinite strategy space of no-limit poker, we develop a discretized betting model designed to capture the most important strategic choices in the game. Second, we employ potential-aware automated abstraction algorithms for identifying strategically similar situations in order to decrease the size of the game tree. Third, we develop a new technique for automatically generating the source code of an equilibrium-finding algorithm from an XML-based description of a game. This automatically generated program is more efficient than what would be possible with a general-purpose equilibrium-finding program. Finally, we present results from the AAAI-07 Computer Poker Competition, in which Tartanian placed second out of ten entries.",
"title": ""
},
{
"docid": "0e98010ded0712ab0e2f78af0a476c86",
"text": "This paper presents a system that uses symbolic representations of audio concepts as words for the descriptions of audio tracks, that enable it to go beyond the state of the art, which is audio event classification of a small number of audio classes in constrained settings, to large-scale classification in the wild. These audio words might be less meaningful for an annotator but they are descriptive for computer algorithms. We devise a random-forest vocabulary learning method with an audio word weighting scheme based on TF-IDF and TD-IDD, so as to combine the computational simplicity and accurate multi-class classification of the random forest with the data-driven discriminative power of the TF-IDF/TD-IDD methods. The proposed random forest clustering with text-retrieval methods significantly outperforms two state-of-the-art methods on the dry-run set and the full set of the TRECVID MED 2010 dataset.",
"title": ""
},
{
"docid": "1df27c9c3cdccd66eadb8916cb5f7283",
"text": "Network function virtualization (NFV) is a promising technique aimed at reducing capital expenditures (CAPEX) and operating expenditures (OPEX), and improving the flexibility and scalability of an entire network. In contrast to traditional dispatching, NFV can separate network functions from proprietary infrastructure and gather these functions into a resource pool that can efficiently modify and adjust service function chains (SFCs). However, this emerging technique has some challenges. A major problem is reliability, which involves ensuring the availability of deployed SFCs, namely, the probability of successfully chaining a series of virtual network functions while considering both the feasibility and the specific requirements of clients, because the substrate network remains vulnerable to earthquakes, floods, and other natural disasters. Based on the premise of users’ demands for SFC requirements, we present an ensure reliability cost saving algorithm to reduce the CAPEX and OPEX of telecommunication service providers by reducing the reliability of the SFC deployments. The results of extensive experiments indicate that the proposed algorithms perform efficiently in terms of the blocking ratio, resource consumption, time consumption, and the first block.",
"title": ""
}
] | scidocsrr |
ae969b4380a452408f920e23e7508508 | Implementing Gender-Dependent Vowel-Level Analysis for Boosting Speech-Based Depression Recognition | [
{
"docid": "b66be42a294208ec31d44e57ae434060",
"text": "Emotional speech recognition aims to automatically classify speech units (e.g., utterances) into emotional states, such as anger, happiness, neutral, sadness and surprise. The major contribution of this paper is to rate the discriminating capability of a set of features for emotional speech recognition when gender information is taken into consideration. A total of 87 features has been calculated over 500 utterances of the Danish Emotional Speech database. The Sequential Forward Selection method (SFS) has been used in order to discover the 5-10 features which are able to classify the samples in the best way for each gender. The criterion used in SFS is the crossvalidated correct classification rate of a Bayes classifier where the class probability distribution functions (pdfs) are approximated via Parzen windows or modeled as Gaussians. When a Bayes classifier with Gaussian pdfs is employed, a correct classification rate of 61.1% is obtained for male subjects and a corresponding rate of 57.1% for female ones. In the same experiment, a random Emotional speech recognition aims to automatically classify speech units (e.g., utterances) into emotional states, such as anger, happiness, neutral, sadness and surprise. The major contribution of this paper is to rate the discriminating capability of a set of features for emotional speech recognition when gender information is taken into consideration. A total of 87 features has been calculated over 500 utterances of the Danish Emotional Speech database. The Sequential Forward Selection method (SFS) has been used in order to discover the 5-10 features which are able to classify the samples in the best way for each gender. The criterion used in SFS is the crossvalidated correct classification rate of a Bayes classifier where the class probability distribution functions (pdfs) are approximated via Parzen windows or modeled as Gaussians. When a Bayes classifier with Gaussian PDFs is employed, a correct classification rate of 61.1% is obtained for male subjects and a corresponding rate of 57.1% for female ones. In the same experiment, a random classification would result in a correct classification rate of 20%. When gender information is not considered a correct classification score of 50.6% is obtained.classification would result in a correct classification rate of 20%. When gender information is not considered a correct classification score of 50.6% is obtained.",
"title": ""
},
{
"docid": "80bf80719a1751b16be2420635d34455",
"text": "Mood disorders are inherently related to emotion. In particular, the behaviour of people suffering from mood disorders such as unipolar depression shows a strong temporal correlation with the affective dimensions valence, arousal and dominance. In addition to structured self-report questionnaires, psychologists and psychiatrists use in their evaluation of a patient's level of depression the observation of facial expressions and vocal cues. It is in this context that we present the fourth Audio-Visual Emotion recognition Challenge (AVEC 2014). This edition of the challenge uses a subset of the tasks used in a previous challenge, allowing for more focussed studies. In addition, labels for a third dimension (Dominance) have been added and the number of annotators per clip has been increased to a minimum of three, with most clips annotated by 5. The challenge has two goals logically organised as sub-challenges: the first is to predict the continuous values of the affective dimensions valence, arousal and dominance at each moment in time. The second is to predict the value of a single self-reported severity of depression indicator for each recording in the dataset. This paper presents the challenge guidelines, the common data used, and the performance of the baseline system on the two tasks.",
"title": ""
}
] | [
{
"docid": "6e4d8bde993e88fa2c729d2fafb6fd90",
"text": "The plant hormones gibberellin and abscisic acid regulate gene expression, secretion and cell death in aleurone. The emerging picture is of gibberellin perception at the plasma membrane whereas abscisic acid acts at both the plasma membrane and in the cytoplasm - although gibberellin and abscisic acid receptors have yet to be identified. A range of downstream-signalling components and events has been implicated in gibberellin and abscisic acid signalling in aleurone. These include the Galpha subunit of a heterotrimeric G protein, a transient elevation in cGMP, Ca2+-dependent and Ca2+-independent events in the cytoplasm, reversible protein phosphory-lation, and several promoter cis-elements and transcription factors, including GAMYB. In parallel, molecular genetic studies on mutants of Arabidopsis that show defects in responses to these hormones have identified components of gibberellin and abscisic acid signalling. These two approaches are yielding results that raise the possibility that specific gibberellin and abscisic acid signalling components perform similar functions in aleurone and other tissues.",
"title": ""
},
{
"docid": "45d551e2d813c37e032b90799c71f4c1",
"text": "A process is described to produce single sheets of functionalized graphene through thermal exfoliation of graphite oxide. The process yields a wrinkled sheet structure resulting from reaction sites involved in oxidation and reduction processes. The topological features of single sheets, as measured by atomic force microscopy, closely match predictions of first-principles atomistic modeling. Although graphite oxide is an insulator, functionalized graphene produced by this method is electrically conducting.",
"title": ""
},
{
"docid": "6c1d3eb9d3e39b25f32b77942b04d165",
"text": "The aim of this study is to investigate the factors influencing the consumer acceptance of mobile banking in Bangladesh. The demographic, attitudinal, and behavioural characteristics of mobile bank users were examined. 292 respondents from seven major mobile financial service users of different mobile network operators participated in the consumer survey. Infrastructural facility, selfcontrol, social influence, perceived risk, ease of use, need for interaction, perceived usefulness, and customer service were found to influence consumer attitudes towards mobile banking services. The infrastructural facility of updated user friendly technology and its availability was found to be the most important factor that motivated consumers’ attitudes in Bangladesh towards mobile banking. The sample size was not necessarily representative of the Bangladeshi population as a whole as it ignored large rural population. This study identified two additional factors i.e. infrastructural facility and customer service relevant to mobile banking that were absent in previous researches. By addressing the concerns of and benefits sought by the consumers, marketers can create positive attractions and policy makers can set regulations for the expansion of mobile banking services in Bangladesh. This study offers an insight into mobile banking in Bangladesh focusing influencing factors, which has not previously been investigated.",
"title": ""
},
{
"docid": "7113e007073184671d0bf5c9bdda1f5c",
"text": "It is widely accepted that mineral flotation is a very challenging control problem due to chaotic nature of process. This paper introduces a novel approach of combining multi-camera system and expert controllers to improve flotation performance. The system has been installed into the zinc circuit of Pyhäsalmi Mine (Finland). Long-term data analysis in fact shows that the new approach has improved considerably the recovery of the zinc circuit, resulting in a substantial increase in the mill’s annual profit. r 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "7d1a7bc7809a578cd317dfb8ba5b7678",
"text": "In this paper, we introduce a new technology, which allows people to share taste and smell sensations digitally with a remote person through existing networking technologies such as the Internet. By introducing this technology, we expect people to share their smell and taste experiences with their family and friends remotely. Sharing these senses are immensely beneficial since those are strongly associated with individual memories, emotions, and everyday experiences. As the initial step, we developed a control system, an actuator, which could digitally stimulate the sense of taste remotely. The system uses two approaches to stimulate taste sensations digitally: the electrical and thermal stimulations on tongue. Primary results suggested that sourness and saltiness are the main sensations that could be evoked through this device. Furthermore, this paper focuses on future aspects of such technology for remote smell actuation followed by applications and possibilities for further developments.",
"title": ""
},
{
"docid": "e6ca4f592446163124bcf00f87ccb8df",
"text": "A full-vector beam propagation method based on a finite-element scheme for a helicoidal system is developed. The permittivity and permeability tensors of a straight waveguide are replaced with equivalent ones for a helicoidal system, obtained by transformation optics. A cylindrical, perfectly matched layer is implemented for the absorbing boundary condition. To treat wide-angle beam propagation, a second-order differentiation term with respect to the propagation direction is directly discretized without using a conventional Padé approximation. The transmission spectra of twisted photonic crystal fibers are thoroughly investigated, and it is found that the diameters of the air holes greatly affect the spectra. The calculated results are in good agreement with the recently reported measured results, showing the validity and usefulness of the method developed here.",
"title": ""
},
{
"docid": "a6a7770857964e96f98bd4021d38f59f",
"text": "During human evolutionary history, there were \"trade-offs\" between expending time and energy on child-rearing and mating, so both men and women evolved conditional mating strategies guided by cues signaling the circumstances. Many short-term matings might be successful for some men; others might try to find and keep a single mate, investing their effort in rearing her offspring. Recent evidence suggests that men with features signaling genetic benefits to offspring should be preferred by women as short-term mates, but there are trade-offs between a mate's genetic fitness and his willingness to help in child-rearing. It is these circumstances and the cues that signal them that underlie the variation in short- and long-term mating strategies between and within the sexes.",
"title": ""
},
{
"docid": "b9f7c203717e6d2b0677b51f55e614f2",
"text": "This paper demonstrates a computer-aided diagnosis (CAD) system for lung cancer classification of CT scans with unmarked nodules, a dataset from the Kaggle Data Science Bowl 2017. Thresholding was used as an initial segmentation approach to segment out lung tissue from the rest of the CT scan. Thresholding produced the next best lung segmentation. The initial approach was to directly feed the segmented CT scans into 3D CNNs for classification, but this proved to be inadequate. Instead, a modified U-Net trained on LUNA16 data (CT scans with labeled nodules) was used to first detect nodule candidates in the Kaggle CT scans. The U-Net nodule detection produced many false positives, so regions of CTs with segmented lungs where the most likely nodule candidates were located as determined by the U-Net output were fed into 3D Convolutional Neural Networks (CNNs) to ultimately classify the CT scan as positive or negative for lung cancer. The 3D CNNs produced a test set Accuracy of 86.6%. The performance of our CAD system outperforms the current CAD systems in literature which have several training and testing phases that each requires a lot of labeled data, while our CAD system has only three major phases (segmentation, nodule candidate detection, and malignancy classification), allowing more efficient training and detection and more generalizability to other cancers. Keywords—Lung Cancer; Computed Tomography; Deep Learning; Convolutional Neural Networks; Segmentation.",
"title": ""
},
{
"docid": "883a22f7036514d87ce3af86b5853de3",
"text": "A wideband integrated RF duplexer supports 3G/4G bands I, II, III, IV, and IX, and achieves a TX-to-RX isolation of more than 55dB in the transmit-band, and greater than 45dB in the corresponding receive-band across 200MHz of bandwidth. A 65nm CMOS duplexer/LNA achieves a transmit insertion loss of 2.5dB, and a cascaded receiver noise figure of 5dB with more than 27dB of gain, exceeding the commercial external duplexers performance at considerably lower cost and area.",
"title": ""
},
{
"docid": "1f8128a4a525f32099d4fefe4bea1212",
"text": "Information overload on the Web has created enormous challenges to customers selecting products for online purchases and to online businesses attempting to identify customers’ preferences efficiently. Various recommender systems employing different data representations and recommendation methods are currently used to address these challenges. In this research, we developed a graph model that provides a generic data representation and can support different recommendation methods. To demonstrate its usefulness and flexibility, we developed three recommendation methods: direct retrieval, association mining, and high-degree association retrieval. We used a data set from an online bookstore as our research test-bed. Evaluation results showed that combining product content information and historical customer transaction information achieved more accurate predictions and relevant recommendations than using only collaborative information. However, comparisons among different methods showed that high-degree association retrieval did not perform significantly better than the association mining method or the direct retrieval method in our test-bed.",
"title": ""
},
{
"docid": "16fec520bf539ab23a5164ffef5561b4",
"text": "This article traces the major trends in TESOL methods in the past 15 years. It focuses on the TESOL profession’s evolving perspectives on language teaching methods in terms of three perceptible shifts: (a) from communicative language teaching to task-based language teaching, (b) from method-based pedagogy to postmethod pedagogy, and (c) from systemic discovery to critical discourse. It is evident that during this transitional period, the profession has witnessed a heightened awareness about communicative and task-based language teaching, about the limitations of the concept of method, about possible postmethod pedagogies that seek to address some of the limitations of method, about the complexity of teacher beliefs that inform the practice of everyday teaching, and about the vitality of the macrostructures—social, cultural, political, and historical—that shape the microstructures of the language classroom. This article deals briefly with the changes and challenges the trend-setting transition seems to be bringing about in the profession’s collective thought and action.",
"title": ""
},
{
"docid": "8a22f454a657768a3d5fd6e6ec743f5f",
"text": "In recent years, deep learning techniques have been developed to improve the performance of program synthesis from input-output examples. Albeit its significant progress, the programs that can be synthesized by state-of-the-art approaches are still simple in terms of their complexity. In this work, we move a significant step forward along this direction by proposing a new class of challenging tasks in the domain of program synthesis from input-output examples: learning a context-free parser from pairs of input programs and their parse trees. We show that this class of tasks are much more challenging than previously studied tasks, and the test accuracy of existing approaches is almost 0%. We tackle the challenges by developing three novel techniques inspired by three novel observations, which reveal the key ingredients of using deep learning to synthesize a complex program. First, the use of a non-differentiable machine is the key to effectively restrict the search space. Thus our proposed approach learns a neural program operating a domain-specific non-differentiable machine. Second, recursion is the key to achieve generalizability. Thus, we bake-in the notion of recursion in the design of our non-differentiable machine. Third, reinforcement learning is the key to learn how to operate the non-differentiable machine, but it is also hard to train the model effectively with existing reinforcement learning algorithms from a cold boot. We develop a novel two-phase reinforcement learningbased search algorithm to overcome this issue. In our evaluation, we show that using our novel approach, neural parsing programs can be learned to achieve 100% test accuracy on test inputs that are 500× longer than the training samples.",
"title": ""
},
{
"docid": "6e9edeffb12cf8e50223a933885bcb7c",
"text": "Reversible data hiding in encrypted images (RDHEI) is an effective technique to embed data in the encrypted domain. An original image is encrypted with a secret key and during or after its transmission, it is possible to embed additional information in the encrypted image, without knowing the encryption key or the original content of the image. During the decoding process, the secret message can be extracted and the original image can be reconstructed. In the last few years, RDHEI has started to draw research interest. Indeed, with the development of cloud computing, data privacy has become a real issue. However, none of the existing methods allow us to hide a large amount of information in a reversible manner. In this paper, we propose a new reversible method based on MSB (most significant bit) prediction with a very high capacity. We present two approaches, these are: high capacity reversible data hiding approach with correction of prediction errors and high capacity reversible data hiding approach with embedded prediction errors. With this method, regardless of the approach used, our results are better than those obtained with current state of the art methods, both in terms of reconstructed image quality and embedding capacity.",
"title": ""
},
{
"docid": "d34be0ce0f9894d6e219d12630166308",
"text": "The need for curricular reform in K-4 mathematics is clear. Such reform must address both the content and emphasis of the curriculum as well as approaches to instruction. A longstanding preoccupation with computation and other traditional skills has dominated both what mathematics is taught and the way mathematics is taught at this level. As a result, the present K-4 curriculum is narrow in scope; fails to foster mathematical insight, reasoning, and problem solving; and emphasizes rote activities. Even more significant is that children begin to lose their belief that learning mathematics is a sense-making experience. They become passive receivers of rules and procedures rather than active participants in creating knowledge.",
"title": ""
},
{
"docid": "9ffb4220530a4758ea6272edf6e7e531",
"text": "Process mining allows analysts to exploit logs of historical executions of business processes to extract insights regarding the actual performance of these processes. One of the most widely studied process mining operations is automated process discovery. An automated process discovery method takes as input an event log, and produces as output a business process model that captures the control-flow relations between tasks that are observed in or implied by the event log. Various automated process discovery methods have been proposed in the past two decades, striking different tradeoffs between scalability, accuracy, and complexity of the resulting models. However, these methods have been evaluated in an ad-hoc manner, employing different datasets, experimental setups, evaluation measures, and baselines, often leading to incomparable conclusions and sometimes unreproducible results due to the use of closed datasets. This article provides a systematic review and comparative evaluation of automated process discovery methods, using an open-source benchmark and covering 12 publicly-available real-life event logs, 12 proprietary real-life event logs, and nine quality metrics. The results highlight gaps and unexplored tradeoffs in the field, including the lack of scalability of some methods and a strong divergence in their performance with respect to the different quality metrics used.",
"title": ""
},
{
"docid": "fe0fa94ce6f02626fca12f21b60bec46",
"text": "Solid waste management (SWM) is a major public health and environmental concern in urban areas of many developing countries. Nairobi’s solid waste situation, which could be taken to generally represent the status which is largely characterized by low coverage of solid waste collection, pollution from uncontrolled dumping of waste, inefficient public services, unregulated and uncoordinated private sector and lack of key solid waste management infrastructure. This paper recapitulates on the public-private partnership as the best system for developing countries; challenges, approaches, practices or systems of SWM, and outcomes or advantages to the approach; the literature review focuses on surveying information pertaining to existing waste management methodologies, policies, and research relevant to the SWM. Information was sourced from peer-reviewed academic literature, grey literature, publicly available waste management plans, and through consultation with waste management professionals. Literature pertaining to SWM and municipal solid waste minimization, auditing and management were searched for through online journal databases, particularly Web of Science, and Science Direct. Legislation pertaining to waste management was also researched using the different databases. Additional information was obtained from grey literature and textbooks pertaining to waste management topics. After conducting preliminary research, prevalent references of select sources were identified and scanned for additional relevant articles. Research was also expanded to include literature pertaining to recycling, composting, education, and case studies; the manuscript summarizes with future recommendationsin terms collaborations of public/ private patternships, sensitization of people, privatization is important in improving processes and modernizing urban waste management, contract private sector, integrated waste management should be encouraged, provisional government leaders need to alter their mind set, prepare a strategic, integrated SWM plan for the cities, enact strong and adequate legislation at city and national level, evaluate the real impacts of waste management systems, utilizing locally based solutions for SWM service delivery and design, location, management of the waste collection centersand recycling and compositing activities should be",
"title": ""
},
{
"docid": "945dea6576c6131fc33cd14e5a2a0be8",
"text": "■ This article recounts the development of radar signal processing at Lincoln Laboratory. The Laboratory’s significant efforts in this field were initially driven by the need to provide detected and processed signals for air and ballistic missile defense systems. The first processing work was on the Semi-Automatic Ground Environment (SAGE) air-defense system, which led to algorithms and techniques for detection of aircraft in the presence of clutter. This work was quickly followed by processing efforts in ballistic missile defense, first in surface-acoustic-wave technology, in concurrence with the initiation of radar measurements at the Kwajalein Missile Range, and then by exploitation of the newly evolving technology of digital signal processing, which led to important contributions for ballistic missile defense and Federal Aviation Administration applications. More recently, the Laboratory has pursued the computationally challenging application of adaptive processing for the suppression of jamming and clutter signals. This article discusses several important programs in these areas.",
"title": ""
},
{
"docid": "8d4fdbdd76085391f2a80022f130459e",
"text": "Recently completed whole-genome sequencing projects marked the transition from gene-based phylogenetic studies to phylogenomics analysis of entire genomes. We developed an algorithm MGRA for reconstructing ancestral genomes and used it to study the rearrangement history of seven mammalian genomes: human, chimpanzee, macaque, mouse, rat, dog, and opossum. MGRA relies on the notion of the multiple breakpoint graphs to overcome some limitations of the existing approaches to ancestral genome reconstructions. MGRA also generates the rearrangement-based characters guiding the phylogenetic tree reconstruction when the phylogeny is unknown.",
"title": ""
},
{
"docid": "456327904250958baace54bde107f0f7",
"text": "Dependability on AI models is of utmost importance to ensure full acceptance of the AI systems. One of the key aspects of the dependable AI system is to ensure that all its decisions are fair and not biased towards any individual. In this paper, we address the problem of detecting whether a model has an individual discrimination. Such a discrimination exists when two individuals who differ only in the values of their protected attributes (such as, gender/race) while the values of their non-protected ones are exactly the same, get different decisions. Measuring individual discrimination requires an exhaustive testing, which is infeasible for a nontrivial system. In this paper, we present an automated technique to generate test inputs, which is geared towards finding individual discrimination. Our technique combines the wellknown technique called symbolic execution along with the local explainability for generation of effective test cases. Our experimental results clearly demonstrate that our technique produces 3.72 times more successful test cases than the existing state-of-the-art across all our chosen benchmarks.",
"title": ""
},
{
"docid": "7a1f244aae5f28cd9fb2d5ba54113c28",
"text": "Next generation sequencing (NGS) technology has revolutionized genomic and genetic research. The pace of change in this area is rapid with three major new sequencing platforms having been released in 2011: Ion Torrent’s PGM, Pacific Biosciences’ RS and the Illumina MiSeq. Here we compare the results obtained with those platforms to the performance of the Illumina HiSeq, the current market leader. In order to compare these platforms, and get sufficient coverage depth to allow meaningful analysis, we have sequenced a set of 4 microbial genomes with mean GC content ranging from 19.3 to 67.7%. Together, these represent a comprehensive range of genome content. Here we report our analysis of that sequence data in terms of coverage distribution, bias, GC distribution, variant detection and accuracy. Sequence generated by Ion Torrent, MiSeq and Pacific Biosciences technologies displays near perfect coverage behaviour on GC-rich, neutral and moderately AT-rich genomes, but a profound bias was observed upon sequencing the extremely AT-rich genome of Plasmodium falciparum on the PGM, resulting in no coverage for approximately 30% of the genome. We analysed the ability to call variants from each platform and found that we could call slightly more variants from Ion Torrent data compared to MiSeq data, but at the expense of a higher false positive rate. Variant calling from Pacific Biosciences data was possible but higher coverage depth was required. Context specific errors were observed in both PGM and MiSeq data, but not in that from the Pacific Biosciences platform. All three fast turnaround sequencers evaluated here were able to generate usable sequence. However there are key differences between the quality of that data and the applications it will support.",
"title": ""
}
] | scidocsrr |