query_id (string, 32 chars) | query (string, 0–35.7k chars) | positive_passages (list, 1–7 items) | negative_passages (list, 22–29 items) | subset (string, 2 classes)
---|---|---|---|---
6d3e17e4b44a2cadedc8f483ab186cb2 | Add English to image Chinese captioning | [
{
"docid": "210a777341f3557081d43f2580428c32",
"text": "This paper studies the problem of associating images with descriptive sentences by embedding them in a common latent space. We are interested in learning such embeddings from hundreds of thousands or millions of examples. Unfortunately, it is prohibitively expensive to fully annotate this many training images with ground-truth sentences. Instead, we ask whether we can learn better image-sentence embeddings by augmenting small fully annotated training sets with millions of images that have weak and noisy annotations (titles, tags, or descriptions). After investigating several state-of-the-art scalable embedding methods, we introduce a new algorithm called Stacked Auxiliary Embedding that can successfully transfer knowledge from millions of weakly annotated images to improve the accuracy of retrieval-based image description.",
"title": ""
},
{
"docid": "c879ee3945592f2e39bb3306602bb46a",
"text": "This paper presents a novel approach for automatically generating image descriptions: visual detectors, language models, and multimodal similarity models learnt directly from a dataset of image captions. We use multiple instance learning to train visual detectors for words that commonly occur in captions, including many different parts of speech such as nouns, verbs, and adjectives. The word detector outputs serve as conditional inputs to a maximum-entropy language model. The language model learns from a set of over 400,000 image descriptions to capture the statistics of word usage. We capture global semantics by re-ranking caption candidates using sentence-level features and a deep multimodal similarity model. Our system is state-of-the-art on the official Microsoft COCO benchmark, producing a BLEU-4 score of 29.1%. When human judges compare the system captions to ones written by other people on our held-out test set, the system captions have equal or better quality 34% of the time.",
"title": ""
},
{
"docid": "9eaab923986bf74bdd073f6766ca45b2",
"text": "This paper introduces a novel generation system that composes humanlike descriptions of images from computer vision detections. By leveraging syntactically informed word co-occurrence statistics, the generator filters and constrains the noisy detections output from a vision system to generate syntactic trees that detail what the computer vision system sees. Results show that the generation system outperforms state-of-the-art systems, automatically generating some of the most natural image descriptions to date.",
"title": ""
}
] | [
{
"docid": "b59965c405937a096186e41b2a3877c3",
"text": "The culmination of many years of increasing research into the toxicity of tau aggregation in neurodegenerative disease has led to the consensus that soluble, oligomeric forms of tau are likely the most toxic entities in disease. While tauopathies overlap in the presence of tau pathology, each disease has a unique combination of symptoms and pathological features; however, most study into tau has grouped tau oligomers and studied them as a homogenous population. Established evidence from the prion field combined with the most recent tau and amyloidogenic protein research suggests that tau is a prion-like protein, capable of seeding the spread of pathology throughout the brain. Thus, it is likely that tau may also form prion-like strains or diverse conformational structures that may differ by disease and underlie some of the differences in symptoms and pathology in neurodegenerative tauopathies. The development of techniques and new technology for the detection of tau oligomeric strains may, therefore, lead to more efficacious diagnostic and treatment strategies for neurodegenerative disease.",
"title": ""
},
{
"docid": "2827e0d197b7f66c7f6ceb846c6aaa27",
"text": "The food industry is becoming more customer-oriented and needs faster response times to deal with food scandals and incidents. Good traceability systems help to minimize the production and distribution of unsafe or poor quality products, thereby minimizing the potential for bad publicity, liability, and recalls. The current food labelling system cannot guarantee that the food is authentic, good quality and safe. Therefore, traceability is applied as a tool to assist in the assurance of food safety and quality as well as to achieve consumer confidence. This paper presents comprehensive information about traceability with regards to safety and quality in the food supply chain.",
"title": ""
},
{
"docid": "e84ca42f96cca0fe3ed7c70d90554a8d",
"text": "While the volume of scholarly publications has increased at a frenetic pace, accessing and consuming the useful candidate papers, in very large digital libraries, is becoming an essential and challenging task for scholars. Unfortunately, because of language barrier, some scientists (especially the junior ones or graduate students who do not master other languages) cannot efficiently locate the publications hosted in a foreign language repository. In this study, we propose a novel solution, cross-language citation recommendation via Hierarchical Representation Learning on Heterogeneous Graph (HRLHG), to address this new problem. HRLHG can learn a representation function by mapping the publications, from multilingual repositories, to a low-dimensional joint embedding space from various kinds of vertexes and relations on a heterogeneous graph. By leveraging both global (task specific) plus local (task independent) information as well as a novel supervised hierarchical random walk algorithm, the proposed method can optimize the publication representations by maximizing the likelihood of locating the important cross-language neighborhoods on the graph. Experiment results show that the proposed method can not only outperform state-of-the-art baseline models, but also improve the interpretability of the representation model for cross-language citation recommendation task.",
"title": ""
},
{
"docid": "2c39430076bf63a05cde06fe57a61ff4",
"text": "With the advent of IoT-based technologies, the overall industrial sector is poised to undergo a fundamental and essential change akin to the industrial revolution. Online monitoring of environmental pollution parameters using Internet of Things (IoT) techniques helps gather parameter values such as pH, temperature, humidity and the concentration of carbon monoxide gas using sensors, and enables close control of the environmental pollution caused by industries. This paper introduces LabVIEW-based online pollution monitoring of industries for control over the pollution caused by untreated disposal of waste. This paper proposes the use of an AT-mega 2560 Arduino board which collects the temperature and humidity parameters from the DHT-11 sensor and the carbon dioxide concentration using an MG-811 sensor, and updates them in an online database using MySQL. For monitoring and controlling, a website is designed and hosted which gives a real essence of IoT. To increase reliability and flexibility, an Android application is also developed.",
"title": ""
},
{
"docid": "bfb79421ca0ddfd5a584f009f8102a2c",
"text": "In this paper, suppression of cross-polarized (XP) radiation of a circular microstrip patch antenna (CMPA) employing two new geometries of defected ground structures (DGSs), is experimentally investigated. One of the antennas employs a circular ring shaped defect in the ground plane, located bit away from the edge of the patch. This structure provides an improvement of XP level by 5 to 7 dB compared to an identical patch with normal ground plane. The second structure incorporates two arc-shaped DGSs in the H-plane of the patch. This configuration improves the XP radiation by about 7 to 12 dB over and above a normal CMPA. For demonstration of the concept, a set of prototypes have been examined at C-band. The experimental results have been presented.",
"title": ""
},
{
"docid": "7ea3d3002506e0ea6f91f4bdab09c2d5",
"text": "We propose a novel and robust computational framework for automatic detection of deformed 2D wallpaper patterns in real-world images. The theory of 2D crystallographic groups provides a sound and natural correspondence between the underlying lattice of a deformed wallpaper pattern and a degree-4 graphical model. We start the discovery process with unsupervised clustering of interest points and voting for consistent lattice unit proposals. The proposed lattice basis vectors and pattern element contribute to the pairwise compatibility and joint compatibility (observation model) functions in a Markov random field (MRF). Thus, we formulate the 2D lattice detection as a spatial, multitarget tracking problem, solved within an MRF framework using a novel and efficient mean-shift belief propagation (MSBP) method. Iterative detection and growth of the deformed lattice are interleaved with regularized thin-plate spline (TPS) warping, which rectifies the current deformed lattice into a regular one to ensure stability of the MRF model in the next round of lattice recovery. We provide quantitative comparisons of our proposed method with existing algorithms on a diverse set of 261 real-world photos to demonstrate significant advances in accuracy and speed over the state of the art in automatic discovery of regularity in real images.",
"title": ""
},
{
"docid": "8c6c8ab24394ddfde8209cd0dacc9da3",
"text": "The Intelligence in Wikipedia project at the University of Washington is combining self-supervised information extraction (IE) techniques with a mixed initiative interface designed to encourage communal content creation (CCC). Since IE and CCC are each powerful ways to produce large amounts of structured information, they have been studied extensively — but only in isolation. By combining the two methods in a virtuous feedback cycle, we aim for substantial synergy. While previous papers have described the details of individual aspects of our endeavor [25, 26, 24, 13], this report provides an overview of the project’s progress and vision.",
"title": ""
},
{
"docid": "29786d164d0d5e76ea9c098944e27266",
"text": "Future mobile communications systems are likely to be very different to those of today with new service innovations driven by increasing data traffic demand, increasing processing power of smart devices and new innovative applications. To meet these service demands the telecommunications industry is converging on a common set of 5G requirements which includes network speeds as high as 10 Gbps, cell edge rate greater than 100 Mbps, and latency of less than 1 msec. To reach these 5G requirements the industry is looking at new spectrum bands in the range up to 100 GHz where there is spectrum availability for wide bandwidth channels. For the development of new 5G systems to operate in bands up to 100 GHz there is a need for accurate radio propagation models which are not addressed by existing channel models developed for bands below 6 GHz. This paper presents a preliminary overview of the 5G channel models for bands up to 100 GHz in indoor offices and shopping malls, derived from extensive measurements across a multitude of bands. These studies have found some extensibility of the existing 3GPP models (e.g. 3GPP TR36.873) to the higher frequency bands up to 100 GHz. The measurements indicate that the smaller wavelengths introduce an increased sensitivity of the propagation models to the scale of the environment and show some frequency dependence of the path loss as well as increased occurrence of blockage. Further, the penetration loss is highly dependent on the material and tends to increase with frequency. The small-scale characteristics of the channel such as delay spread and angular spread and the multipath richness is somewhat similar over the frequency range, which is encouraging for extending the existing 3GPP models to the wider frequency range. Further work will be carried out to complete these models, but this paper presents the first steps for an initial basis for the model development.",
"title": ""
},
{
"docid": "16f2811b6052a1a9e527d61b2ff6509b",
"text": "Corneal topography is a non-invasive medical imaging technique to assess the shape of the cornea in ophthalmology. In this paper we demonstrate that in addition to its health care use, corneal topography could provide valuable biometric measurements for person authentication. To extract a feature vector from these images (topographies), we propose to fit the geometry of the corneal surface with Zernike polynomials, followed by a linear discriminant analysis (LDA) of the Zernike coefficients to select the most discriminating features. The results show that the proposed method reduced the typical d-dimensional Zernike feature vector (d=36) to a much lower r-dimensional feature vector (r=3), and improved the Equal Error Rate from 2.88% to 0.96%, with the added benefit of faster computation time.",
"title": ""
},
{
"docid": "f9cc9e1ddc0d1db56f362a1ef409274d",
"text": "Phishing is increasing dramatically with the development of modern technologies and global computer networks. This results in the loss of customers' confidence in e-commerce and online banking, financial damages, and identity theft. Phishing is a fraudulent effort that aims to acquire sensitive information from users such as credit card credentials and social security numbers. In this article, we propose a model for predicting phishing attacks based on an Artificial Neural Network (ANN). A feed-forward neural network trained by the back-propagation algorithm is developed to classify websites as phishing or legitimate. The suggested model shows high tolerance for noisy data, fault tolerance and high prediction accuracy with respect to false positive and false negative rates.",
"title": ""
},
{
"docid": "1d724b07c232098e2a5e5af2bb1e7c83",
"text": "[2] Brown SJ, McLean WH. One remarkable molecule: filaggrin. J Invest Dermatol 2012;132:751–62. [3] Sandilands A, Terron-Kwiatkowski A, Hull PR, O’Regan GM, Clayton TH, Watson RM, et al. Comprehensive analysis of the gene encoding filaggrin uncovers prevalent and rare mutations in ichthyosis vulgaris and atopic eczema. Nat Genet 2007;39:650–4. [4] Margolis DJ, Apter AJ, Gupta J, Hoffstad O, Papadopoulos M, Campbell LE, et al. The persistence of atopic dermatitis and Filaggrin mutations in a US longitudinal cohort. J Allergy Clin Immunol 2012;130(4):912–7. [5] Smith FJ, Irvine AD, Terron-Kwiatkowski A, Sandilands A, Campbell LE, Zhao Y, et al. Loss-of-function mutations in the gene encoding filaggrin cause ichthyosis vulgaris. Nat Genet 2006;38:337–42. [6] Paternoster L, Standl M, Chen CM, Ramasamy A, Bonnelykke K, Duijts L, et al. Meta-analysis of genome-wide association studies identifies three new risk Table 1 Reliability and validity comparisons for FLG null mutations as assayed by TaqMan and beadchip methods.",
"title": ""
},
{
"docid": "012f30fbeed17fcfd098e5362bd95ee8",
"text": "We prove that binary orthogonal arrays of strength 8, length 12 and cardinality 1536 do not exist. This implies the nonexistence of arrays of parameters (strength, length, cardinality) = (n, n + 4, 6·2^n) for every integer n ≥ 8.",
"title": ""
},
{
"docid": "a50b7ab02d2fe934f5fb5bed14fcdad9",
"text": "An empirical study has been conducted investigating the relationship between the performance of an aspect based language model in terms of perplexity and the corresponding information retrieval performance obtained. It is observed, on the corpora considered, that the perplexity of the language model has a systematic relationship with the achievable precision recall performance though it is not statistically significant.",
"title": ""
},
{
"docid": "37a6f3773aebf46cc40266b8bb5692af",
"text": "The theory of myofascial pain syndrome (MPS) caused by trigger points (TrPs) seeks to explain the phenomena of muscle pain and tenderness in the absence of evidence for local nociception. Although it lacks external validity, many practitioners have uncritically accepted the diagnosis of MPS and its system of treatment. Furthermore, rheumatologists have implicated TrPs in the pathogenesis of chronic widespread pain (FM syndrome). We have critically examined the evidence for the existence of myofascial TrPs as putative pathological entities and for the vicious cycles that are said to maintain them. We find that both are inventions that have no scientific basis, whether from experimental approaches that interrogate the suspect tissue or empirical approaches that assess the outcome of treatments predicated on presumed pathology. Therefore, the theory of MPS caused by TrPs has been refuted. This is not to deny the existence of the clinical phenomena themselves, for which scientifically sound and logically plausible explanations based on known neurophysiological phenomena can be advanced.",
"title": ""
},
{
"docid": "60eff31e8f742873cec993f1499385b5",
"text": "There is an increasing interest in employing multiple sensors for surveillance and communications. Some of the motivating factors are reliability, survivability, increase in the number of targets under consideration, and increase in required coverage. Tenney and Sandell have recently treated the Bayesian detection problem with distributed sensors. They did not consider the design of data fusion algorithms. We present an optimum data fusion structure given the detectors. Individual decisions are weighted according to the reliability of the detector and then a threshold comparison is performed to obtain the global decision.",
"title": ""
},
{
"docid": "a9d22e2568bcae7a98af7811546c7853",
"text": "This thesis addresses the challenges of building a software system for general-purpose runtime code manipulation. Modern applications, with dynamically-loaded modules and dynamicallygenerated code, are assembled at runtime. While it was once feasible at compile time to observe and manipulate every instruction — which is critical for program analysis, instrumentation, trace gathering, optimization, and similar tools — it can now only be done at runtime. Existing runtime tools are successful at inserting instrumentation calls, but no general framework has been developed for fine-grained and comprehensive code observation and modification without high overheads. This thesis demonstrates the feasibility of building such a system in software. We present DynamoRIO, a fully-implemented runtime code manipulation system that supports code transformations on any part of a program, while it executes. DynamoRIO uses code caching technology to provide efficient, transparent, and comprehensive manipulation of an unmodified application running on a stock operating system and commodity hardware. DynamoRIO executes large, complex, modern applications with dynamically-loaded, generated, or even modified code. Despite the formidable obstacles inherent in the IA-32 architecture, DynamoRIO provides these capabilities efficiently, with zero to thirty percent time and memory overhead on both Windows and Linux. DynamoRIO exports an interface for building custom runtime code manipulation tools of all types. It has been used by many researchers, with several hundred downloads of our public release, and is being commercialized in a product for protection against remote security exploits, one of numerous applications of runtime code manipulation. Thesis Supervisor: Saman Amarasinghe Title: Associate Professor of Electrical Engineering and Computer Science",
"title": ""
},
{
"docid": "d5b004af32bd747c2b5ad175975f8c06",
"text": "This paper presents a design of a quasi-millimeter wave wideband antenna array consisting of a leaf-shaped bowtie antenna (LSBA) and series-parallel feed networks in which parallel strip and microstrip lines are employed. A 16-element LSBA array is designed such that the antenna array operates over the frequency band of 22-30GHz. In order to demonstrate the effective performance of the presented configuration, characteristics of the designed LSBA array are evaluated by the finite-difference time domain (FDTD) analysis and measurements. Over the frequency range from 22GHz to 30GHz, the simulated reflection coefficient is observed to be less than -8dB, and the actual gain of 12.3-19.4dBi is obtained.",
"title": ""
},
{
"docid": "95037e7dc3ae042d64a4b343ad4efd39",
"text": "We classify human actions occurring in depth image sequences using features based on skeletal joint positions. The action classes are represented by a multi-level Hierarchical Dirichlet Process – Hidden Markov Model (HDP-HMM). The non-parametric HDP-HMM allows the inference of hidden states automatically from training data. The model parameters of each class are formulated as transformations from a shared base distribution, thus promoting the use of unlabelled examples during training and borrowing information across action classes. Further, the parameters are learnt in a discriminative way. We use a normalized gamma process representation of HDP and margin based likelihood functions for this purpose. We sample parameters from the complex posterior distribution induced by our discriminative likelihood function using elliptical slice sampling. Experiments with two different datasets show that action class models learnt using our technique produce good classification results.",
"title": ""
},
{
"docid": "118526b566b800d9dea30d2e4c904feb",
"text": "With the growth of web resources and the huge amount of information available, the need for automatic summarization systems has emerged. Summarization is needed most when searching for information on the web, where the user targets a certain domain of interest with a query; in this case, domain-based summaries serve best. Despite plenty of research on domain-based summarization in English, there is a lack of such work in Arabic due to the shortage of existing knowledge bases. In this paper we introduce query-based, single-document summarization of Arabic text using an existing Arabic language thesaurus and an extracted knowledge base. We use an Arabic corpus to extract domain knowledge represented by topic-related concepts/keywords and the lexical relations among them. The user's query is expanded once by using the Arabic WordNet thesaurus and then by adding the domain-specific knowledge base to the expansion. For the summarization dataset, the Essex Arabic Summaries Corpus was used. It has many topic-based articles with multiple human summaries. Performance was enhanced when using our extracted knowledge base compared to using WordNet alone.",
"title": ""
},
{
"docid": "3aaffdda034c762ad36954386d796fb9",
"text": "KNTU CDRPM is a cable-driven redundant parallel manipulator, which is under investigation for possible high-speed and large-workspace applications. These newly developed mechanisms have several advantages compared to conventional parallel mechanisms: the rotational motion range is relatively large, redundancy improves safety against cable failure, and the design is suitable for long-duration, high-acceleration motions. In this paper, the collision-free workspace of the manipulator is derived by applying a fast geometrical intersection detection method, which can be used for any fully parallel manipulator. Implementation of the algorithm on the Neuron design of the KNTU CDRPM leads to significant results, which introduce a new style of design of spatial cable-driven parallel manipulators. The results are elaborated in three presentations: constant-orientation workspace, total orientation workspace and orientation workspace.",
"title": ""
}
] | scidocsrr |
c184aa2b1b955610fe4340347cfe7c8a | Botnet Research Survey | [
{
"docid": "b3b27246ed1ef97fb1994b8dbaf023f3",
"text": "Malicious botnets are networks of compromised computers that are controlled remotely to perform large-scale distributed denial-of-service (DDoS) attacks, send spam, trojan and phishing emails, distribute pirated media or conduct other usually illegitimate activities. This paper describes a methodology to detect, track and characterize botnets on a large Tier-1 ISP network. The approach presented here differs from previous attempts to detect botnets by employing scalable non-intrusive algorithms that analyze vast amounts of summary traffic data collected on selected network links. Our botnet analysis is performed mostly on transport layer data and thus does not depend on particular application layer information. Our algorithms produce alerts with information about controllers. Alerts are followed up with analysis of application layer data, that indicates less than 2% false positive rates.",
"title": ""
}
] | [
{
"docid": "270319820586068f09954ec9c358232f",
"text": "Recent years have seen exciting developments in join algorithms. In 2008, Atserias, Grohe and Marx (henceforth AGM) proved a tight bound on the maximum result size of a full conjunctive query, given constraints on the input relation sizes. In 2012, Ngo, Porat, Ré and Rudra (henceforth NPRR) devised a join algorithm with worst-case running time proportional to the AGM bound [8]. Our commercial database system LogicBlox employs a novel join algorithm, leapfrog triejoin, which compared conspicuously well to the NPRR algorithm in preliminary benchmarks. This spurred us to analyze the complexity of leapfrog triejoin. In this paper we establish that leapfrog triejoin is also worst-case optimal, up to a log factor, in the sense of NPRR. We improve on the results of NPRR by proving that leapfrog triejoin achieves worst-case optimality for finer-grained classes of database instances, such as those defined by constraints on projection cardinalities. We show that NPRR is not worst-case optimal for such classes, giving a counterexample where leapfrog triejoin runs in O(n log n) time and NPRR runs in Θ(n^1.375) time. On a practical note, leapfrog triejoin can be implemented using conventional data structures such as B-trees, and extends naturally to ∃1 queries. We believe our algorithm offers a useful addition to the existing toolbox of join algorithms, being easy to absorb, simple to implement, and having a concise optimality proof.",
"title": ""
},
{
"docid": "636076c522ea4ac91afbdc93d58fa287",
"text": "Aspect-based opinion mining has attracted lots of attention today. In this thesis, we address the problem of product aspect rating prediction, where we would like to extract the product aspects, and predict aspect ratings simultaneously. Topic models have been widely adapted to jointly model aspects and sentiments, but existing models may not do the prediction task well due to their weakness in sentiment extraction. The sentiment topics usually do not have clear correspondence to commonly used ratings, and the model may fail to extract certain kinds of sentiments due to skewed data. To tackle this problem, we propose a sentiment-aligned topic model(SATM), where we incorporate two types of external knowledge: product-level overall rating distribution and word-level sentiment lexicon. Experiments on real dataset demonstrate that SATM is effective on product aspect rating prediction, and it achieves better performance compared to the existing approaches.",
"title": ""
},
{
"docid": "43e8f35e57149d1441d8e75fa754549d",
"text": "Software teams should follow a well defined goal and keep their work focused. Work fragmentation is bad for efficiency and quality. In this paper we empirically investigate the relationship between the fragmentation of developer contributions and the number of post-release failures. Our approach is to represent developer contributions with a developer-module network that we call contribution network. We use network centrality measures to measure the degree of fragmentation of developer contributions. Fragmentation is determined by the centrality of software modules in the contribution network. Our claim is that central software modules are more likely to be failure-prone than modules located in surrounding areas of the network. We analyze this hypothesis by exploring the network centrality of Microsoft Windows Vista binaries using several network centrality measures as well as linear and logistic regression analysis. In particular, we investigate which centrality measures are significant to predict the probability and number of post-release failures. Results of our experiments show that central modules are more failure-prone than modules located in surrounding areas of the network. Results further confirm that number of authors and number of commits are significant predictors for the probability of post-release failures. For predicting the number of post-release failures the closeness centrality measure is most significant.",
"title": ""
},
{
"docid": "50283f1442d6e50ac6f8334ab992cbc6",
"text": "The objective of entity identification is to determine the correspondence between object instances from more than one database. This paper examines the problem at the instance level, assuming that schema level heterogeneity has been resolved a priori. Soundness and completeness are defined as the desired properties of any entity identification technique. To achieve soundness, a set of identity and distinctness rules are established for entities in the integrated world. We propose the use of the extended key, which is the union of keys (and possibly other attributes) from the relations to be matched, and its corresponding identity rule, to determine the equivalence between tuples from relations which may not share any common key. Instance level functional dependencies (ILFD), a form of semantic constraint information about the real-world entities, are used to derive the missing extended key attribute values of a tuple.",
"title": ""
},
{
"docid": "8b3ad3d48da22c529e65c26447265372",
"text": "It is demonstrated that neural networks can be used effectively for the identification and control of nonlinear dynamical systems. The emphasis is on models for both identification and control. Static and dynamic backpropagation methods for the adjustment of parameters are discussed. In the models that are introduced, multilayer and recurrent networks are interconnected in novel configurations, and hence there is a real need to study them in a unified fashion. Simulation results reveal that the identification and adaptive control schemes suggested are practically feasible. Basic concepts and definitions are introduced throughout, and theoretical questions that have to be addressed are also described.",
"title": ""
},
{
"docid": "d83aa51df8fa3cc03e3ee8d5ed01851e",
"text": "Because the World Wide Web consists primarily of text, information extraction is central to any e ort that would use the Web as a resource for knowledge discovery. We show how information extraction can be cast as a standard machine learning problem, and argue for the suitability of relational learning in solving it. The implementation of a general-purpose relational learner for information extraction, SRV, is described. In contrast with earlier learning systems for information extraction, SRV makes no assumptions about document structure and the kinds of information available for use in learning extraction patterns. Instead, structural and other information is supplied as input in the form of an extensible token-oriented feature set. We demonstrate the e ectiveness of this approach by adapting SRV for use in learning extraction rules for a domain consisting of university course and research project pages sampled from the Web. Making SRV Web-ready only involves adding several simple HTML-speci c features to its basic feature set. The World Wide Web, with its explosive growth and ever-broadening reach, is swiftly becoming the default knowledge resource for many areas of endeavor. Unfortunately, although any one of over 200,000,000 Web pages is readily accessible to an Internet-connected workstation, the information content of these pages is, without human interpretation, largely inaccessible. Systems have been developed which can make sense of highly regularWeb pages, such as those generated automatically from internal databases in response to user queries (Doorenbos, Etzioni, & Weld 1997) (Kushmerick 1997). A surprising number of Web sites have pages amenable to the techniques used by these systems. Still, most Web pages do not exhibit the regularity required by they require. There is a larger class of pages, however, which are regular in a more abstract sense. 
Many Web pages come from collections in which each page describes a single entity or event (e.g., home pages in a CS department; each describes its owner). The purpose of such a page is often to convey essential facts about the entity it describes. It is often reasonable to approach such a page with a set of standard questions, and to expect that the answers to these questions will be available as succinct text fragments in the page. A home page, for example, frequently lists the owner's name, affiliations, email address, etc. The problem of identifying the text fragments that answer standard questions defined for a document collection is called information extraction (IE) (Def 1995). Our interest in IE concerns the development of machine learning methods to solve it. We regard IE as a kind of text classification, which has strong affinities with the well-investigated problem of document classification, but also presents unique challenges. We share this focus with a number of other recent systems (Soderland 1996) (Califf & Mooney 1997), including a system designed to learn how to extract from HTML (Soderland 1997). In this paper we describe SRV, a top-down relational algorithm for information extraction. Central to the design of SRV is its reliance on a set of token-oriented features, which are easy to implement and add to the system. Since domain-specific information is contained within these features, which are separate from the core algorithm, SRV is better poised than similar systems for targeting to new domains. We have used it to perform extraction from electronic seminar announcements, medical abstracts, and newswire articles on corporate acquisitions. The experiments reported here show that targeting the system to HTML involves nothing more than the addition of HTML-specific features to its basic feature set. 
Learning for Information Extraction Consider a collection of Web pages describing university computer science courses. Given a page, a likely task for an information extraction system is to find the title of the course the page describes. We call the title a field and any literal title taken from an actual page, such as \"Introduction to Artificial Intelligence,\" an instantiation or instance of the title field. Note that the typical information extraction problem involves multiple fields, some of which may have multiple instantiations in a given file. For example, a course page might From: AAAI-98 Proceedings. Copyright © 1998, AAAI (www.aaai.org). All rights reserved.",
"title": ""
},
{
"docid": "d07a10da23e0fc18b473f8a30adaebfb",
"text": "DATA FLOW IS A POPULAR COMPUTATIONAL MODEL for visual programming languages. Data flow provides a view of computation which shows the data flowing from one filter function to another, being transformed as it goes. In addition, the data flow model easily accomodates the insertion of viewing monitors at various points to show the data to the user. Consequently, many recent visual programming languages are based on the data flow model. This paper describes many of the data flow visual programming languages. The languages are grouped according to their application domain. For each language, pertinent aspects of its appearance, and the particular design alternatives it uses, are discussed. Next, some strengths of data flow visual programming languages are mentioned. Finally, unsolved problems in the design of such languages are discussed.",
"title": ""
},
{
"docid": "1db72cafa214f41b5b6faa3a3c0c8be0",
"text": "Multiple-antenna receivers offer numerous advantages over single-antenna receivers, including sensitivity improvement, ability to reject interferers spatially and enhancement of data-rate or link reliability via MIMO. In the recent past, RF/analog phased-array receivers have been investigated [1-4]. On the other hand, digital beamforming offers far greater flexibility, including ability to form multiple simultaneous beams, ease of digital array calibration and support for MIMO. However, ADC dynamic range is challenged due to the absence of spatial interference rejection at RF/analog.",
"title": ""
},
{
"docid": "3ebc26643334c88ccc44fb01f60d600f",
"text": "Skin whitening products are commercially available for cosmetic purposes in order to obtain a lighter skin appearance. They are also utilized for clinical treatment of pigmentary disorders such as melasma or postinflammatory hyperpigmentation. Whitening agents act at various levels of melanin production in the skin. Many of them are known as competitive inhibitors of tyrosinase, the key enzyme in melanogenesis. Others inhibit the maturation of this enzyme or the transport of pigment granules (melanosomes) from melanocytes to surrounding keratinocytes. In this review we present an overview of (natural) whitening products that may decrease skin pigmentation by their interference with the pigmentary processes.",
"title": ""
},
{
"docid": "ac94c03a72607f76e53ae0143349fff3",
"text": "Abrlracr-A h u l a for the cppecity et arbitrary sbgle-wer chrurwla without feedback (mot neccgdueily Wium\" stable, stationary, etc.) is proved. Capacity ie shown to e i p l the supremum, over all input processts, & the input-outpat infiqjknda QBnd as the llnainl ia praabiutJr d the normalized information density. The key to thir zbllljt is a ntw a\"c sppmrh bosed 811 a Ampie II(A Lenar trwrd eu the pralwbility of m-4v hgpothesb t#tcl UIOlls eq*rdIaN <hypotheses. A neassruy and d c i e n t coadition Eor the validity of the strong comeme is given, as well as g\"l expressions for eeapacity.",
"title": ""
},
{
"docid": "666d52dd68c088f7274a3789f8b78b78",
"text": "Intermediate and higher vision processes require selection of a subset of the available sensory information before further processing. Usually, this selection is implemented in the form of a spatially circumscribed region of the visual field, the so-called \"focus of attention\" which scans the visual scene dependent on the input and on the attentional state of the subject. We here present a model for the control of the focus of attention in primates, based on a saliency map. This mechanism is not only expected to model the functionality of biological vision but also to be essential for the understanding of complex scenes in machine vision.",
"title": ""
},
{
"docid": "5944791613da6b94a09560dbf8f54c38",
"text": "In this paper we introduce a friction observer for robots with joint torque sensing (in particular for the DLR medical robot) in order to increase the positioning accuracy and the performance of torque control. The observer output corresponds to the low-pass filtered friction torque. It is used for friction compensation in conjunction with a MIMO controller designed for flexible joint arms. A passivity analysis is done for this friction compensation, allowing a Lyapunov based convergence analysis in the context of the nonlinear robot dynamics. For the complete controlled system, global asymptotic stability can be shown. Experimental results validate the practical efficiency of the approach.",
"title": ""
},
{
"docid": "61126d2dc5dd6e8130dd0d6a0dc45774",
"text": "Over the last decade or so, it has become increasingly clear to many cognitive scientists that research into human language (and cognition in general, for that matter) has largely neglected how language and thought are embedded in the body and the world. As argued by, for instance, Clark (1997), cognition is fundamentally embodied, that is, it can only be studied in relation to human action, perception, thought, and experience. As Feldman puts it: \" Human language and thought are crucially shaped by the properties of our bodies and the structure of our physical and social environment. Language and thought are not best studied as formal mathematics and logic, but as adaptations that enable creatures like us to thrive in a wide range of situations \" (p. 7). Although it may seem paradoxical to try formalizing this view in a computational theory of language comprehension, this is exactly what From Molecule to Metaphor does. Starting from the assumption that human thought is neural computation, Feldman develops a computational theory that takes the embodied nature of language into account: the neural theory of language. The book comprises 27 short chapters, distributed over nine parts. Part I presents the basic ideas behind embodied language and cognition and explains how the embodiment of language is apparent in the brain: The neural circuits involved in a particular experience or action are, for a large part, the same circuits involved in processing language about this experience or action. Part II discusses neural computation, starting from the molecules that take part in information processing by neurons. This detailed exposition is followed by a description of neuronal networks in the human body, in particular in the brain. The description of the neural theory of language begins in Part III, where it is explained how localist neural networks, often used as psycholinguistic models, can represent the meaning of concepts. 
This is done by introducing triangle nodes into the network. Each triangle node connects the nodes representing a concept, a role, and a filler—for example, \" pea, \" \" has-color, \" and \" green. \" Such networks are trained by a process called recruitment learning, which is described only very informally. This is certainly an interesting idea for combining propositional and connectionist models, but it does leave the reader with a number of questions. For instance, how is the concept distinguished from the filler when they can be interchanged, as …",
"title": ""
},
{
"docid": "2172e78731ee63be5c15549e38c4babb",
"text": "The design of a low-cost low-power ring oscillator-based truly random number generator (TRNG) macrocell, which is suitable to be integrated in smart cards, is presented. The oscillator sampling technique is exploited, and a tetrahedral oscillator with large jitter has been employed to realize the TRNG. Techniques to improve the statistical quality of the ring oscillatorbased TRNGs' bit sequences have been presented and verified by simulation and measurement. A postdigital processor is added to further enhance the randomness of the output bits. Fabricated in the HHNEC 0.13-μm standard CMOS process, the proposed TRNG has an area as low as 0.005 mm2. Powered by a single 1.8-V supply voltage, the TRNG has a power consumption of 40 μW. The bit rate of the TRNG after postprocessing is 100 kb/s. The proposed TRNG has been made into an IP and successfully applied in an SD card for encryption application. The proposed TRNG has passed the National Institute of Standards and Technology tests and Diehard tests.",
"title": ""
},
{
"docid": "89a73876c24508d92050f2055292d641",
"text": "We study the fundamental problem of computing distances between nodes in large graphs such as the web graph and social networks. Our objective is to be able to answer distance queries between pairs of nodes in real time. Since the standard shortest path algorithms are expensive, our approach moves the time-consuming shortest-path computation offline, and at query time only looks up precomputed values and performs simple and fast computations on these precomputed values. More specifically, during the offline phase we compute and store a small \"sketch\" for each node in the graph, and at query-time we look up the sketches of the source and destination nodes and perform a simple computation using these two sketches to estimate the distance.",
"title": ""
},
{
"docid": "7e7ba0025d19a0eb73c22ceb1eaddcee",
"text": "This is a landmark book. For anyone interested in language, in dictionaries and thesauri, or natural language processing, the introduction, Chapters 14, and Chapter 16 are must reading. (Select other chapters according to your special interests; see the chapter-by-chapter review). These chapters provide a thorough introduction to the preeminent electronic lexical database of today in terms of accessibility and usage in a wide range of applications. But what does that have to do with digital libraries? Natural language processing is essential for dealing efficiently with the large quantities of text now available online: fact extraction and summarization, automated indexing and text categorization, and machine translation. Another essential function is helping the user with query formulation through synonym relationships between words and hierarchical and other relationships between concepts. WordNet supports both of these functions and thus deserves careful study by the digital library community.",
"title": ""
},
{
"docid": "d50d3997572847200f12d69f61224760",
"text": "The main function of a network layer is to route packets from the source machine to the destination machine. Algorithms that are used for route selection and data structure are the main parts for the network layer. In this paper we examine the network performance when using three routing protocols, RIP, OSPF and EIGRP. Video, HTTP and Voice application where configured for network transfer. We also examine the behaviour when using link failure/recovery controller between network nodes. The simulation results are analyzed, with a comparison between these protocols on the effectiveness and performance in network implemented.",
"title": ""
},
{
"docid": "bba4d637cf40e81ea89e61e875d3c425",
"text": "Recent years have witnessed the fast development of UAVs (unmanned aerial vehicles). As an alternative to traditional image acquisition methods, UAVs bridge the gap between terrestrial and airborne photogrammetry and enable flexible acquisition of high resolution images. However, the georeferencing accuracy of UAVs is still limited by the low-performance on-board GNSS and INS. This paper investigates automatic geo-registration of an individual UAV image or UAV image blocks by matching the UAV image(s) with a previously taken georeferenced image, such as an individual aerial or satellite image with a height map attached or an aerial orthophoto with a DSM (digital surface model) attached. As the biggest challenge for matching UAV and aerial images is in the large differences in scale and rotation, we propose a novel feature matching method for nadir or slightly tilted images. The method is comprised of a dense feature detection scheme, a one-to-many matching strategy and a global geometric verification scheme. The proposed method is able to find thousands of valid matches in cases where SIFT and ASIFT fail. Those matches can be used to geo-register the whole UAV image block towards the reference image data. When the reference images offer high georeferencing accuracy, the UAV images can also be geolocalized in a global coordinate system. A series of experiments involving different scenarios was conducted to validate the proposed method. The results demonstrate that our approach achieves not only decimeter-level registration accuracy, but also comparable global accuracy as the reference images.",
"title": ""
},
{
"docid": "7a1a9ed8e9a6206c3eaf20da0c156c14",
"text": "Formal modeling rules can be used to ensure that an enterprise architecture is correct. Despite their apparent utility and despite mature tool support, formal modelling rules are rarely, if ever, used in practice in enterprise architecture in industry. In this paper we propose a rule authoring method that we believe aligns with actual modelling practice, at least as witnessed in enterprise architecture projects at the Swedish Defence Materiel Administration. The proposed method follows the business rules approach: the rules are specified in a (controlled) natural language which makes them accessible to all stakeholders and easy to modify as the meta-model matures and evolves over time. The method was put to test during 2014 in two large scale enterprise architecture projects, and we report on the experiences from that. To the best of our knowledge, this is the first time extensive formal modelling rules for enterprise architecture has been tested in industry and reported in the",
"title": ""
},
{
"docid": "88dd795c6d1fa37c13fbf086c0eb0e37",
"text": "We propose a method to obtain a complete and accurate 3D model from multiview images captured under a variety of unknown illuminations. Based on recent results showing that for Lambertian objects, general illumination can be approximated well using low-order spherical harmonics, we develop a robust alternating approach to recover surface normals. Surface normals are initialized using a multi-illumination multiview stereo algorithm, then refined using a robust alternating optimization method based on the ℓ1 metric. Erroneous normal estimates are detected using a shape prior. Finally, the computed normals are used to improve the preliminary 3D model. The reconstruction system achieves watertight and robust 3D reconstruction while neither requiring manual interactions nor imposing any constraints on the illumination. Experimental results on both real world and synthetic data show that the technique can acquire accurate 3D models for Lambertian surfaces, and even tolerates small violations of the Lambertian assumption.",
"title": ""
}
] | scidocsrr |
5cd0be106ac0782e02e2f3d5c5653f28 | Beyond Trending Topics: Real-World Event Identification on Twitter | [
{
"docid": "b134824f6c135a331e503b77d17380c0",
"text": "Social media sites (e.g., Flickr, YouTube, and Facebook) are a popular distribution outlet for users looking to share their experiences and interests on the Web. These sites host substantial amounts of user-contributed materials (e.g., photographs, videos, and textual content) for a wide variety of real-world events of different type and scale. By automatically identifying these events and their associated user-contributed social media documents, which is the focus of this paper, we can enable event browsing and search in state-of-the-art search engines. To address this problem, we exploit the rich \"context\" associated with social media content, including user-provided annotations (e.g., title, tags) and automatically generated information (e.g., content creation time). Using this rich context, which includes both textual and non-textual features, we can define appropriate document similarity metrics to enable online clustering of media to events. As a key contribution of this paper, we explore a variety of techniques for learning multi-feature similarity metrics for social media documents in a principled manner. We evaluate our techniques on large-scale, real-world datasets of event images from Flickr. Our evaluation results suggest that our approach identifies events, and their associated social media documents, more effectively than the state-of-the-art strategies on which we build.",
"title": ""
},
{
"docid": "3e63c8a5499966f30bd3e6b73494ff82",
"text": "Events can be understood in terms of their temporal structure. The authors first draw on several bodies of research to construct an analysis of how people use event structure in perception, understanding, planning, and action. Philosophy provides a grounding for the basic units of events and actions. Perceptual psychology provides an analogy to object perception: Like objects, events belong to categories, and, like objects, events have parts. These relationships generate 2 hierarchical organizations for events: taxonomies and partonomies. Event partonomies have been studied by looking at how people segment activity as it happens. Structured representations of events can relate partonomy to goal relationships and causal structure; such representations have been shown to drive narrative comprehension, memory, and planning. Computational models provide insight into how mental representations might be organized and transformed. These different approaches to event structure converge on an explanation of how multiple sources of information interact in event perception and conception.",
"title": ""
}
] | [
{
"docid": "83ad15e2ffeebb21705b617646dc4ed7",
"text": "As Twitter becomes a more common means for officials to communicate with their constituents, it becomes more important that we understand how officials use these communication tools. Using data from 380 members of Congress' Twitter activity during the winter of 2012, we find that officials frequently use Twitter to advertise their political positions and to provide information but rarely to request political action from their constituents or to recognize the good work of others. We highlight a number of differences in communication frequency between men and women, Senators and Representatives, Republicans and Democrats. We provide groundwork for future research examining the behavior of public officials online and testing the predictive power of officials' social media behavior.",
"title": ""
},
{
"docid": "405cd35764b8ae0b380e85a58a9714bf",
"text": "This work is aimed at modeling, designing and developing an egg incubator system that is able to incubate various types of egg within the temperature range of 35 – 40 0 C. This system uses temperature and humidity sensors that can measure the condition of the incubator and automatically change to the suitable condition for the egg. Extreme variations in incubation temperature affect the embryo and ultimately, post hatch performance. In this work, electric bulbs were used to give the suitable temperature to the egg whereas water and controlling fan were used to ensure that humidity and ventilation were in good condition. LCD is used to display status condition of the incubator and an interface (Keypad) is provided to key in the appropriate temperature range for the egg. To ensure that all part of the eggs was heated by the lamp, DC motor was used to rotate iron rod at the bottom side and automatically change position of the egg. The entire element is controlled using AT89C52 Microcontroller. The temperature of the incubator is maintained at the normal temperature using PID controller implemented in microcontroller. Mathematical model of the incubator, actuator and PID controller were developed. Controller design based on the models was developed using Matlab Simulink. The models were validated through simulation and the Zeigler-Nichol tuning method was adopted as the tuning technique for varying the temperature control parameters of the PID controller in order to achieve a desirable transient response of the system when subjected to a unit step input. After several assumptions and simulations, a set of optimal parameters were obtained at the result of the third test that exhibited a commendable improvement in the overshoot, rise time, peak time and settling time thus improving the robustness and stability of the system. Keyword: Egg Incubator System, AT89C52 Microcontroller, PID Controller, Temperature Sensor.",
"title": ""
},
{
"docid": "f4859226e52f7c9d2b2dc4ac8a0255de",
"text": "Imbalanced data learning is one of the challenging problems in data mining; among this matter, founding the right model assessment measures is almost a primary research issue. Skewed class distribution causes a misreading of common evaluation measures as well it lead a biased classification. This article presents a set of alternative for imbalanced data learning assessment, using a combined measures (G-means, likelihood ratios, Discriminant power, F-Measure Balanced Accuracy, Youden index, Matthews correlation coefficient), and graphical performance assessment (ROC curve, Area Under Curve, Partial AUC, Weighted AUC, Cumulative Gains Curve and lift chart, Area Under Lift AUL), that aim to provide a more credible evaluation. We analyze the applications of these measures in churn prediction models evaluation, a well known application of imbalanced data",
"title": ""
},
{
"docid": "6f304f0dd414a1ed61ecca15dd3bc924",
"text": "Given a matrix A ∈ R, we present a simple, element-wise sparsification algorithm that zeroes out all sufficiently small elements of A and then retains some of the remaining elements with probabilities proportional to the square of their magnitudes. We analyze the approximation accuracy of the proposed algorithm using a recent, elegant non-commutative Bernstein inequality, and compare our bounds with all existing (to the best of our knowledge) elementwise matrix sparsification algorithms.",
"title": ""
},
{
"docid": "db70302a3d7e7e7e5974dd013e587b12",
"text": "In recent years, the emerging Internet-of-Things (IoT) has led to rising concerns about the security of networked embedded devices. In this work, we propose the SIPHON architecture---a Scalable high-Interaction Honeypot platform for IoT devices. Our architecture leverages IoT devices that are physically at one location and are connected to the Internet through so-called \\emph{wormholes} distributed around the world. The resulting architecture allows exposing few physical devices over a large number of geographically distributed IP addresses. We demonstrate the proposed architecture in a large scale experiment with 39 wormhole instances in 16 cities in 9 countries. Based on this setup, five physical IP cameras, one NVR and one IP printer are presented as 85 real IoT devices on the Internet, attracting a daily traffic of 700MB for a period of two months. A preliminary analysis of the collected traffic indicates that devices in some cities attracted significantly more traffic than others (ranging from 600 000 incoming TCP connections for the most popular destination to less than 50 000 for the least popular). We recorded over 400 brute-force login attempts to the web-interface of our devices using a total of 1826 distinct credentials, from which 11 attempts were successful. Moreover, we noted login attempts to Telnet and SSH ports some of which used credentials found in the recently disclosed Mirai malware.",
"title": ""
},
{
"docid": "68fe4f62d48270395ca3f257bbf8a18a",
"text": "Adjectives like warm, hot, and scalding all describe temperature but differ in intensity. Understanding these differences between adjectives is a necessary part of reasoning about natural language. We propose a new paraphrasebased method to automatically learn the relative intensity relation that holds between a pair of scalar adjectives. Our approach analyzes over 36k adjectival pairs from the Paraphrase Database under the assumption that, for example, paraphrase pair really hot↔ scalding suggests that hot < scalding. We show that combining this paraphrase evidence with existing, complementary patternand lexicon-based approaches improves the quality of systems for automatically ordering sets of scalar adjectives and inferring the polarity of indirect answers to yes/no questions.",
"title": ""
},
{
"docid": "8fac18c1285875aee8e7a366555a4ca3",
"text": "Automatic speech recognition (ASR) has been under the scrutiny of researchers for many years. Speech Recognition System is the ability to listen what we speak, interpreter and perform actions according to spoken information. After so many detailed study and optimization of ASR and various techniques of features extraction, accuracy of the system is still a big challenge. The selection of feature extraction techniques is completely based on the area of study. In this paper, a detailed theory about features extraction techniques like LPC and LPCC is examined. The goal of this paper is to study the comparative analysis of features extraction techniques like LPC and LPCC.",
"title": ""
},
{
"docid": "98f814584c555baa05a1292e7e14f45a",
"text": "This paper presents two types of dual band (2.4 and 5.8 GHz) wearable planar dipole antennas, one printed on a conventional substrate and the other on a two-dimensional metamaterial surface (Electromagnetic Bandgap (EBG) structure). The operation of both antennas is investigated and compared under different bending conditions (in E and H-planes) around human arm and leg of different radii. A dual band, Electromagnetic Band Gap (EBG) structure on a wearable substrate is used as a high impedance surface to control the Specific Absorption Rate (SAR) as well as to improve the antenna gain up to 4.45 dBi. The EBG inspired antenna has reduced the SAR effects on human body to a safe level (< 2W/Kg). I.e. the SAR is reduced by 83.3% for lower band and 92.8% for higher band as compared to the conventional antenna. The proposed antenna can be used for wearable applications with least health hazard to human body in Industrial, Scientific and Medical (ISM) band (2.4 GHz, 5.2 GHz) applications. The antennas on human body are simulated and analyzed in CST Microwave Studio (CST MWS).",
"title": ""
},
{
"docid": "f8435db6c6ea75944d1c6b521e0f3dd3",
"text": "We present the design, fabrication process, and characterization of a multimodal tactile sensor made of polymer materials and metal thin film sensors. The multimodal sensor can detect the hardness, thermal conductivity, temperature, and surface contour of a contact object for comprehensive evaluation of contact objects and events. Polymer materials reduce the cost and the fabrication complexity for the sensor skin, while increasing mechanical flexibility and robustness. Experimental tests show the skin is able to differentiate between objects using measured properties. © 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "266f89564a34239cf419ed9e83a2c988",
"text": "The potential of high-resolution IKONOS and QuickBird satellite imagery for mapping and analysis of land and water resources at local scales in Minnesota is assessed in a series of three applications. The applications and accuracies evaluated include: (1) classification of lake water clarity (r = 0.89), (2) mapping of urban impervious surface area (r = 0.98), and (3) aquatic vegetation surveys of emergent and submergent plant groups (80% accuracy). There were several notable findings from these applications. For example, modeling and estimation approaches developed for Landsat TM data for continuous variables such as lake water clarity and impervious surface area can be applied to high-resolution satellite data. The rapid delivery of spatial data can be coupled with current GPS and field computer technologies to bring the imagery into the field for cover type validation. We also found several limitations in working with this data type. For example, shadows can influence feature classification and their effects need to be evaluated. Nevertheless, high-resolution satellite data has excellent potential to extend satellite remote sensing beyond what has been possible with aerial photography and Landsat data, and should be of interest to resource managers as a way to create timely and reliable assessments of land and water resources at a local scale. D 2003 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "6fca80896fe3493072a1bc360cd680a7",
"text": "The physical formats used to represent linguistic data and its annotations have evolved over the past four decades, accommodating different needs and perspectives as well as incorporating advances in data representation generally. This chapter provides an overview of representation formats with the aim of surveying the relevant issues for representing different data types together with current stateof-the-art solutions, in order to provide sufficient information to guide others in the choice of a representation format or formats.",
"title": ""
},
{
"docid": "db9ab8624cdf9b6fdfc91a5d72b76694",
"text": "In this paper, a low profile LLC resonant converter with two transformers using a planar core is proposed for a slim switching mode power supply (SMPS). Design procedures, magnetic modeling and voltage gain characteristics on the proposed planar transformer and converter are described in detail. LLC resonant converter including two transformers using a planar core is connected in series at primary and in parallel by the center-tap winding at secondary. Based on the theoretical analysis and simulation results of the voltage gain characteristics, a 300W LLC resonant converter is designed and tested.",
"title": ""
},
{
"docid": "77985effa998d08e75eaa117e07fc7a9",
"text": "After two successful years of Event Nugget evaluation in the TAC KBP workshop, the third Event Nugget evaluation track for Knowledge Base Population(KBP) still attracts a lot of attention from the field. In addition to the traditional event nugget and coreference tasks, we introduce a new event sequencing task in English. The new task has brought more complex event relation reasoning to the current evaluations. In this paper we try to provide an overview on the task definition, data annotation, evaluation and trending research methods. We further discuss our efforts in creating the new event sequencing task and interesting research problems related to it.",
"title": ""
},
{
"docid": "748d71e6832288cd0120400d6069bf50",
"text": "This paper introduces the matrix formalism of optics as a useful approach to the area of “light fields”. It is capable of reproducing old results in Integral Photography, as well as generating new ones. Furthermore, we point out the equivalence between radiance density in optical phase space and the light field. We also show that linear transforms in matrix optics are applicable to light field rendering, and we extend them to affine transforms, which are of special importance to designing integral view cameras. Our main goal is to provide solutions to the problem of capturing the 4D light field with a 2D image sensor. From this perspective we present a unified affine optics view on all existing integral / light field cameras. Using this framework, different camera designs can be produced. Three new cameras are proposed. Figure 1: Integral view of a seagull",
"title": ""
},
{
"docid": "b44f24b54e45974421f799527391a9db",
"text": "Dengue fever is a noncontagious infectious disease caused by dengue virus (DENV). DENV belongs to the family Flaviviridae, genus Flavivirus, and is classified into four antigenically distinct serotypes: DENV-1, DENV-2, DENV-3, and DENV-4. The number of nations and people affected has increased steadily and today is considered the most widely spread arbovirus (arthropod-borne viral disease) in the world. The absence of an appropriate animal model for studying the disease has hindered the understanding of dengue pathogenesis. In our study, we have found that immunocompetent C57BL/6 mice infected intraperitoneally with DENV-1 presented some signs of dengue disease such as thrombocytopenia, spleen hemorrhage, liver damage, and increase in production of IFNγ and TNFα cytokines. Moreover, the animals became viremic and the virus was detected in several organs by real-time RT-PCR. Thus, this animal model could be used to study mechanism of dengue virus infection, to test antiviral drugs, as well as to evaluate candidate vaccines.",
"title": ""
},
{
"docid": "f35e22d5ee51d8e83836337b3ab51754",
"text": "SaaS companies generate revenues by charging recurring subscription fees for using their software services. The fast growth of SaaS companies is usually accompanied with huge upfront costs in marketing expenses targeted at their potential customers. Customer retention is a critical issue for SaaS companies because it takes twelve months on average to break-even with the expenses for a single customer. This study describes a methodology for helping SaaS companies manage their customer relationships. We investigated the time-dependent software feature usage data, for example, login numbers and comment numbers, to predict whether a customer would churn within the next three months. Our study compared model performance across four classification algorithms. The XGBoost model yielded the best results for identifying the most important software usage features and for classifying customers as either churn type or non-risky type. Our model achieved a 10-fold cross-validated mean AUC score of 0.7941. Companies can choose to move along the ROC curve to accommodate to their marketing capability. The feature importance output from the XGBoost model can facilitate SaaS companies in identifying the most significant software features to launch more effective marketing campaigns when facing prospective customers.",
"title": ""
},
{
"docid": "96669cea810d2918f2d35875f87d45f2",
"text": "In this paper, a new probabilistic tagging method is presented which avoids problems that Markov Model based taggers face, when they have to estimate transition probabilities from sparse data. In this tagging method, transition probabilities are estimated using a decision tree. Based on this method, a part-of-speech tagger (called TreeTagger) has been implemented which achieves 96.36 % accuracy on Penn-Treebank data which is better than that of a trigram tagger (96.06 %) on the same data.",
"title": ""
},
{
"docid": "172567417be706a47c94d35d90c24400",
"text": "This work presents a novel semi-supervised learning approach for data-driven modeling of asset failures when health status is only partially known in historical data. We combine a generative model parameterized by deep neural networks with non-linear embedding technique. It allows us to build prognostic models with the limited amount of health status information for the precise prediction of future asset reliability. The proposed method is evaluated on a publicly available dataset for remaining useful life (RUL) estimation, which shows significant improvement even when a fraction of the data with known health status is as sparse as 1% of the total. Our study suggests that the non-linear embedding based on a deep generative model can efficiently regularize a complex model with deep architectures while achieving high prediction accuracy that is far less sensitive to the availability of health status information.",
"title": ""
},
{
"docid": "0ccbc904dd7623c9ef537e41ac888dd0",
"text": "Big Data architectures allow to flexibly store and process heterogeneous data, from multiple sources, in its original format. The structure of those data, commonly supplied by means of REST APIs, is continuously evolving, forcing data analysts using it need to adapt their analytical processes after each release. This gets more challenging when aiming to perform an integrated or historical analysis of multiple sources. To cope with such complexity, in this paper we present the Big Data Integration ontology, the core construct for a data governance protocol that systematically annotates and integrates data from multiple sources in its original format. To cope with syntactic evolution in the sources, we present an algorithm that semi-automatically adapts the ontology upon new releases. A functional evaluation on realworld APIs is performed in order to validate our approach.",
"title": ""
},
{
"docid": "1e5ebd122bee855d7e8113d5fe71202d",
"text": "We derive the general expression of the anisotropic magnetoresistance (AMR) ratio of ferromagnets for a relative angle between the magnetization direction and the current direction. We here use the two-current model for a system consisting of a spin-polarized conduction state (s) and localized d states (d) with spin-orbit interaction. Using the expression, we analyze the AMR ratios of Ni and a half-metallic ferromagnet. These results correspond well to the respective experimental results. In addition, we give an intuitive explanation about a relation between the sign of the AMR ratio and the s-d scattering process. Introduction The anisotropic magnetoresistance (AMR) effect, in which the electrical resistivity depends on a relative angle θ between the magnetization (Mex) direction and the electric current (I) direction, has been studied extensively both experimentally [1-5] and theoretically [1,6]. The AMR ratio is often defined by ( ) ( ) ρ θ ρ θ ρ ρ ρ ⊥",
"title": ""
}
] | scidocsrr |
db86988618b0f2e30c4f824784eba8ff | A phase space model of Fourier ptychographic microscopy. | [
{
"docid": "0cce6366df945f079dbb0b90d79b790e",
"text": "Fourier ptychographic microscopy (FPM) is a recently developed imaging modality that uses angularly varying illumination to extend a system's performance beyond the limit defined by its optical components. The FPM technique applies a novel phase-retrieval procedure to achieve resolution enhancement and complex image recovery. In this Letter, we compare FPM data to theoretical prediction and phase-shifting digital holography measurement to show that its acquired phase maps are quantitative and artifact-free. We additionally explore the relationship between the achievable spatial and optical thickness resolution offered by a reconstructed FPM phase image. We conclude by demonstrating enhanced visualization and the collection of otherwise unobservable sample information using FPM's quantitative phase.",
"title": ""
}
] | [
{
"docid": "6d728174d576ac785ff093f4cdc16e1b",
"text": "The stress-inducible protein heme oxygenase-1 provides protection against oxidative stress. The anti-inflammatory properties of heme oxygenase-1 may serve as a basis for this cytoprotection. We demonstrate here that carbon monoxide, a by-product of heme catabolism by heme oxygenase, mediates potent anti-inflammatory effects. Both in vivo and in vitro, carbon monoxide at low concentrations differentially and selectively inhibited the expression of lipopolysaccharide-induced pro-inflammatory cytokines tumor necrosis factor-α, interleukin-1β, and macrophage inflammatory protein-1β and increased the lipopolysaccharide-induced expression of the anti-inflammatory cytokine interleukin-10. Carbon monoxide mediated these anti-inflammatory effects not through a guanylyl cyclase–cGMP or nitric oxide pathway, but instead through a pathway involving the mitogen-activated protein kinases. These data indicate the possibility that carbon monoxide may have an important protective function in inflammatory disease states and thus has potential therapeutic uses.",
"title": ""
},
{
"docid": "b06a3c929a934633e174bfe1adab21f1",
"text": "In this paper, we analyze the radio channel characteristics at mmWave frequencies for 5G cellular communications in urban scenarios. 3D-ray tracing simulations in the downtown areas of Ottawa and Chicago are conducted in both the 2 GHz and 28 GHz bands. Each area has two different deployment scenarios, with different transmitter height and different density of buildings. Based on the observations of the ray-tracing experiments, important parameters of the radio channel model, such as path loss exponent, shadowing variance, delay spread and angle spread, are provided, forming the basis of a mmWave channel model. Based on the analysis and the 3GPP 3D-Spatial Channel Model (SCM) framework, we introduce a a preliminary mmWave channel model at 28 GHz.",
"title": ""
},
{
"docid": "89b17ff10887b84270c1d627231a0721",
"text": "A novel robust adaptive beamforming method for conformal array is proposed. By using interpolation technique, the cylindrical conformal array with directional antenna elements is transformed to a virtual uniform linear array with omni-directional elements. This method can compensate the amplitude and mutual coupling errors as well as desired signal point errors of the conformal array efficiently. It is a universal method and can be applied to other curved conformal arrays. After the transformation, most of the existing adaptive beamforming algorithms can be applied to conformal array directly. The efficiency of the proposed scheme is assessed through numerical simulations.",
"title": ""
},
{
"docid": "1389323613225897330d250e9349867b",
"text": "Description: The field of data mining lies at the confluence of predictive analytics, statistical analysis, and business intelligence. Due to the ever–increasing complexity and size of data sets and the wide range of applications in computer science, business, and health care, the process of discovering knowledge in data is more relevant than ever before. This book provides the tools needed to thrive in today s big data world. The author demonstrates how to leverage a company s existing databases to increase profits and market share, and carefully explains the most current data science methods and techniques. The reader will learn data mining by doing data mining . By adding chapters on data modelling preparation, imputation of missing data, and multivariate statistical analysis, Discovering Knowledge in Data, Second Edition remains the eminent reference on data mining .",
"title": ""
},
{
"docid": "554d0255aef7ffac9e923da5d93b97e3",
"text": "In this demo paper, we present a text simplification approach that is directed at improving the performance of state-of-the-art Open Relation Extraction (RE) systems. As syntactically complex sentences often pose a challenge for current Open RE approaches, we have developed a simplification framework that performs a pre-processing step by taking a single sentence as input and using a set of syntactic-based transformation rules to create a textual input that is easier to process for subsequently applied Open RE systems.",
"title": ""
},
{
"docid": "b623437391b298c2e618b0f42d3e19a9",
"text": "In the era of the Social Web, crowdfunding has become an increasingly more important channel for entrepreneurs to raise funds from the crowd to support their startup projects. Previous studies examined various factors such as project goals, project durations, and categories of projects that might influence the outcomes of the fund raising campaigns. However, textual information of projects has rarely been studied for analyzing crowdfunding successes. The main contribution of our research work is the design of a novel text analytics-based framework that can extract latent semantics from the textual descriptions of projects to predict the fund raising outcomes of these projects. More specifically, we develop the Domain-Constraint Latent Dirichlet Allocation (DC-LDA) topic model for effective extraction of topical features from texts. Based on two real-world crowdfunding datasets, our experimental results reveal that the proposed framework outperforms a classical LDA-based method in predicting fund raising success by an average of 11% in terms of F1 score. The managerial implication of our research is that entrepreneurs can apply the proposed methodology to identify the most influential topical features embedded in project descriptions, Corresponding author at: School of Information, Renmin University of China, Beijing, 100872, P.R. China. Email address: [email protected] (H. Yuan), [email protected] (R.Y.K. Lau), [email protected] (W. Xu) AC C EP TE D M AN U SC R IP T ACCEPTED MANUSCRIPT 2 and hence to better promote their projects and improving the chance of raising sufficient funds for their projects.",
"title": ""
},
{
"docid": "07c185c21c9ce3be5754294a73ab5e3c",
"text": "In order to support efficient workflow design, recent commercial workflow systems are providing templates of common business processes. These templates, called cases, can be modified individually or collectively into a new workflow to meet the business specification. However, little research has been done on how to manage workflow models, including issues such as model storage, model retrieval, model reuse and assembly. In this paper, we propose a novel framework to support workflow modeling and design by adapting workflow cases from a repository of process models. Our approach to workflow model management is based on a structured workflow lifecycle and leverages recent advances in model management and case-based reasoning techniques. Our contributions include a conceptual model of workflow cases, a similarity flooding algorithm for workflow case retrieval, and a domain-independent AI planning approach to workflow case composition. We illustrate the workflow model management framework with a prototype system called Case-Oriented Design Assistant for Workflow Modeling (CODAW). 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "11d1978a3405f63829e02ccb73dcd75f",
"text": "The performance of two commercial simulation codes, Ansys Fluent and Comsol Multiphysics, is thoroughly examined for a recently established two-phase flow benchmark test case. In addition, the commercial codes are directly compared with the newly developed academic code, FeatFlow TP2D. The results from this study show that the commercial codes fail to converge and produce accurate results, and leave much to be desired with respect to direct numerical simulation of flows with free interfaces. The academic code on the other hand was shown to be computationally efficient, produced very accurate results, and outperformed the commercial codes by a magnitude or more.",
"title": ""
},
{
"docid": "a488a74817a8401eff1373d4e21f060f",
"text": "We propose a neural machine translation architecture that models the surrounding text in addition to the source sentence. These models lead to better performance, both in terms of general translation quality and pronoun prediction, when trained on small corpora, although this improvement largely disappears when trained with a larger corpus. We also discover that attention-based neural machine translation is well suited for pronoun prediction and compares favorably with other approaches that were specifically designed for this task.",
"title": ""
},
{
"docid": "3111ef9867be7cf58be9694cbe2a14d9",
"text": "Grammatical Error Diagnosis for Chinese has always been a challenge for both foreign learners and NLP researchers, for the variousity of grammar and the flexibility of expression. In this paper, we present a model based on Bidirectional Long Short-Term Memory(Bi-LSTM) neural networks, which treats the task as a sequence labeling problem, so as to detect Chinese grammatical errors, to identify the error types and to locate the error positions. In the corpora of this year’s shared task, there can be multiple errors in a single offset of a sentence, to address which, we simutaneously train three Bi-LSTM models sharing word embeddings which label Missing, Redundant and Selection errors respectively. We regard word ordering error as a special kind of word selection error which is longer during training phase, and then separate them by length during testing phase. In NLP-TEA 3 shared task for Chinese Grammatical Error Diagnosis(CGED), Our system achieved relatively high F1 for all the three levels in the traditional Chinese track and for the detection level in the Simpified Chinese track.",
"title": ""
},
{
"docid": "40413aa7fd92e042b8c359b2cf6d2d23",
"text": "Text summarization is the process of creating a short description of a specified text while preserving its information context. This paper tackles Arabic text summarization problem. The semantic redundancy and insignificance will be removed from the summarized text. This can be achieved by checking the text entailment relation, and lexical cohesion. Accordingly, a text summarization approach (called LCEAS) based on lexical cohesion and text entailment relation is developed. In LCEAS, text entailment approach is enhanced to suit Arabic language. Roots and semantic-relations are used between the senses of the words to extract the common words. New threshold values are specified to suit entailment based segmentation for Arabic text. LCEAS is a single document summarization, which is constructed using extraction technique. To evaluate LCEAS, its performance is compared with previous Arabic text summarization systems. Each system output is compared against Essex Arabic Summaries Corpus (EASC) corpus (the model summaries), using Recall-Oriented Understudy for Gisting Evaluation (ROUGE) and Automatic Summarization Engineering (AutoSummEng) metrics. The outcome of LCEAS indicates that the developed approach outperforms the previous Arabic text summarization systems. KeywordsText Summarization; Text Segmentation; Lexical Cohesion; Text Entailment; Natural Language Processing.",
"title": ""
},
{
"docid": "e587b5954c957f268d21878ede3359f8",
"text": "ing audit logs",
"title": ""
},
{
"docid": "b31244421f89b32704509dfeb80702a0",
"text": "Robust and fast solutions for anatomical object detection and segmentation support the entire clinical workflow from diagnosis, patient stratification, therapy planning, intervention and follow-up. Current state-of-the-art techniques for parsing volumetric medical image data are typically based on machine learning methods that exploit large annotated image databases. Two main challenges need to be addressed, these are the efficiency in scanning high-dimensional parametric spaces and the need for representative image features which require significant efforts of manual engineering. We propose a pipeline for object detection and segmentation in the context of volumetric image parsing, solving a two-step learning problem: anatomical pose estimation and boundary delineation. For this task we introduce Marginal Space Deep Learning (MSDL), a novel framework exploiting both the strengths of efficient object parametrization in hierarchical marginal spaces and the automated feature design of Deep Learning (DL) network architectures. In the 3D context, the application of deep learning systems is limited by the very high complexity of the parametrization. More specifically 9 parameters are necessary to describe a restricted affine transformation in 3D, resulting in a prohibitive amount of billions of scanning hypotheses. The mechanism of marginal space learning provides excellent run-time performance by learning classifiers in clustered, high-probability regions in spaces of gradually increasing dimensionality. To further increase computational efficiency and robustness, in our system we learn sparse adaptive data sampling patterns that automatically capture the structure of the input. Given the object localization, we propose a DL-based active shape model to estimate the non-rigid object boundary. 
Experimental results are presented on the aortic valve in ultrasound using an extensive dataset of 2891 volumes from 869 patients, showing significant improvements of up to 45.2% over the state-of-the-art. To our knowledge, this is the first successful demonstration of the DL potential to detection and segmentation in full 3D data with parametrized representations.",
"title": ""
},
{
"docid": "9664431f0cfc22567e1e5c945f898595",
"text": "Anomaly detection aims to detect abnormal events by a model of normality. It plays an important role in many domains such as network intrusion detection, criminal activity identity and so on. With the rapidly growing size of accessible training data and high computation capacities, deep learning based anomaly detection has become more and more popular. In this paper, a new domain-based anomaly detection method based on generative adversarial networks (GAN) is proposed. Minimum likelihood regularization is proposed to make the generator produce more anomalies and prevent it from converging to normal data distribution. Proper ensemble of anomaly scores is shown to improve the stability of discriminator effectively. The proposed method has achieved significant improvement than other anomaly detection methods on Cifar10 and UCI datasets.",
"title": ""
},
{
"docid": "b79bf80221c893f40abd7fd6b8a7145a",
"text": "Attention is typically used to select informative sub-phrases that are used for prediction. This paper investigates the novel use of attention as a form of feature augmentation, i.e, casted attention. We propose Multi-Cast Attention Networks (MCAN), a new attention mechanism and general model architecture for a potpourri of ranking tasks in the conversational modeling and question answering domains. Our approach performs a series of soft attention operations, each time casting a scalar feature upon the inner word embeddings. The key idea is to provide a real-valued hint (feature) to a subsequent encoder layer and is targeted at improving the representation learning process. There are several advantages to this design, e.g., it allows an arbitrary number of attention mechanisms to be casted, allowing for multiple attention types (e.g., co-attention, intra-attention) and attention variants (e.g., alignment-pooling, max-pooling, mean-pooling) to be executed simultaneously. This not only eliminates the costly need to tune the nature of the co-attention layer, but also provides greater extents of explainability to practitioners. Via extensive experiments on four well-known benchmark datasets, we show that MCAN achieves state-of-the-art performance. On the Ubuntu Dialogue Corpus, MCAN outperforms existing state-of-the-art models by 9%. MCAN also achieves the best performing score to date on the well-studied TrecQA dataset.",
"title": ""
},
{
"docid": "486e3f5614f69f60d8703d8641c73416",
"text": "The Great East Japan Earthquake and Tsunami drastically changed Japanese society, and the requirements for ICT was completely redefined. After the disaster, it was impossible for disaster victims to utilize their communication devices, such as cellular phones, tablet computers, or laptop computers, to notify their families and friends of their safety and confirm the safety of their loved ones since the communication infrastructures were physically damaged or lacked the energy necessary to operate. Due to this drastic event, we have come to realize the importance of device-to-device communications. With the recent increase in popularity of D2D communications, many research works are focusing their attention on a centralized network operated by network operators and neglect the importance of decentralized infrastructureless multihop communication, which is essential for disaster relief applications. In this article, we propose the concept of multihop D2D communication network systems that are applicable to many different wireless technologies, and clarify requirements along with introducing open issues in such systems. The first generation prototype of relay by smartphone can deliver messages using only users' mobile devices, allowing us to send out emergency messages from disconnected areas as well as information sharing among people gathered in evacuation centers. The success of field experiments demonstrates steady advancement toward realizing user-driven networking powered by communication devices independent of operator networks.",
"title": ""
},
{
"docid": "4331057bb0a3f3add576513fa71791a8",
"text": "The category theoretic structures of monads and comonads can be used as an abstraction mechanism for simplifying both language semantics and programs. Monads have been used to structure impure computations, whilst comonads have been used to structure context-dependent computations. Interestingly, the class of computations structured by monads and the class of computations structured by comonads are not mutually exclusive. This paper formalises and explores the conditions under which a monad and a comonad can both structure the same notion of computation: when a comonad is left adjoint to a monad. Furthermore, we examine situations where a particular monad/comonad model of computation is deficient in capturing the essence of a computational pattern and provide a technique for calculating an alternative monad or comonad structure which fully captures the essence of the computation. Included is some discussion on how to choose between a monad or comonad structure in the case where either can be used to capture a particular notion of computation.",
"title": ""
},
{
"docid": "70bed43cdfd50586e803bf1a9c8b3c0a",
"text": "We design a way to model apps as vectors, inspired by the recent deep learning approach to vectorization of words called word2vec. Our method relies on how users use apps. In particular, we visualize the time series of how each user uses mobile apps as a “document”, and apply the recent word2vec modeling on these documents, but the novelty is that the training context is carefully weighted by the time interval between the usage of successive apps. This gives us the app2vec vectorization of apps. We apply this to industrial scale data from Yahoo! and (a) show examples that app2vec captures semantic relationships between apps, much as word2vec does with words, (b) show using Yahoo!'s extensive human evaluation system that 82% of the retrieved top similar apps are semantically relevant, achieving 37% lift over bag-of-word approach and 140% lift over matrix factorization approach to vectorizing apps, and (c) finally, we use app2vec to predict app-install conversion and improve ad conversion prediction accuracy by almost 5%. This is the first industry scale design, training and use of app vectorization.",
"title": ""
},
{
"docid": "6cf9456d2fe55d2115fd40efbb1a8f96",
"text": "We propose a new approach to the problem of optimizing autoencoders for lossy image compression. New media formats, changing hardware technology, as well as diverse requirements and content types create a need for compression algorithms which are more flexible than existing codecs. Autoencoders have the potential to address this need, but are difficult to optimize directly due to the inherent non-differentiabilty of the compression loss. We here show that minimal changes to the loss are sufficient to train deep autoencoders competitive with JPEG 2000 and outperforming recently proposed approaches based on RNNs. Our network is furthermore computationally efficient thanks to a sub-pixel architecture, which makes it suitable for high-resolution images. This is in contrast to previous work on autoencoders for compression using coarser approximations, shallower architectures, computationally expensive methods, or focusing on small images.",
"title": ""
},
{
"docid": "595a31e82d857cedecd098bf4c910e99",
"text": "Human actions in video sequences are three-dimensional (3D) spatio-temporal signals characterizing both the visual appearance and motion dynamics of the involved humans and objects. Inspired by the success of convolutional neural networks (CNN) for image classification, recent attempts have been made to learn 3D CNNs for recognizing human actions in videos. However, partly due to the high complexity of training 3D convolution kernels and the need for large quantities of training videos, only limited success has been reported. This has triggered us to investigate in this paper a new deep architecture which can handle 3D signals more effectively. Specifically, we propose factorized spatio-temporal convolutional networks (FstCN) that factorize the original 3D convolution kernel learning as a sequential process of learning 2D spatial kernels in the lower layers (called spatial convolutional layers), followed by learning 1D temporal kernels in the upper layers (called temporal convolutional layers). We introduce a novel transformation and permutation operator to make factorization in FstCN possible. Moreover, to address the issue of sequence alignment, we propose an effective training and inference strategy based on sampling multiple video clips from a given action video sequence. We have tested FstCN on two commonly used benchmark datasets (UCF-101 and HMDB-51). Without using auxiliary training videos to boost the performance, FstCN outperforms existing CNN based methods and achieves comparable performance with a recent method that benefits from using auxiliary training videos.",
"title": ""
}
] | scidocsrr |
00657ff4d15c007f5eb6e7c38849996f | Developing a Teacher Dashboard For Use with Intelligent Tutoring Systems | [
{
"docid": "26e24e4a59943f9b80d6bf307680b70c",
"text": "We present a machine-learned model that can automatically detect when a student using an intelligent tutoring system is off-task, i.e., engaged in behavior which does not involve the system or a learning task. This model was developed using only log files of system usage (i.e. no screen capture or audio/video data). We show that this model can both accurately identify each student's prevalence of off-task behavior and can distinguish off-task behavior from when the student is talking to the teacher or another student about the subject matter. We use this model in combination with motivational and attitudinal instruments, developing a profile of the attitudes and motivations associated with off-task behavior, and compare this profile to the attitudes and motivations associated with other behaviors in intelligent tutoring systems. We discuss how the model of off-task behavior can be used within interactive learning environments which respond to when students are off-task.",
"title": ""
},
{
"docid": "2adcf4db59bb321132a10445292d7fe9",
"text": "In this paper, we present work on learning analytics that aims to support learners and teachers through dashboard applications, ranging from small mobile applications to learnscapes on large public displays. Dashboards typically capture and visualize traces of learning activities, in order to promote awareness, reflection, and sense-making, and to enable learners to define goals and track progress toward these goals. Based on an analysis of our own work and a broad range of similar learning dashboards, we identify HCI issues for this exciting research area.",
"title": ""
},
{
"docid": "bed92439d0a455eb57d992728ef7deb5",
"text": "Although learning with Intelligent Tutoring Systems (ITS) has been well studied, little research has investigated what role teachers can play, if empowered with data. Many ITSs provide student performance reports, but they may not be designed to serve teachers’ needs well, which is important for a well-designed dashboard. We investigated what student data is most helpful to teachers and how they use data to adjust and individualize instruction. Specifically, we conducted Contextual Inquiry interviews with teachers and used Interpretation Sessions and Affinity Diagramming to analyze the data. We found that teachers generate data on students’ concept mastery, misconceptions and errors, and utilize data provided by ITSs and other software. Teachers use this data to drive instruction and remediate issues on an individual and class level. Our study uncovers how data can support teachers in helping students learn and provides a solid foundation and recommendations for designing a teacher’s dashboard.",
"title": ""
},
{
"docid": "273153d0cf32162acb48ed989fa6d713",
"text": "This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.",
"title": ""
}
] | [
{
"docid": "0bbdefaf90329b45993608128ccd233c",
"text": "Eye gaze tracking system has been widely researched for the replacement of the conventional computer interfaces such as the mouse and keyboard. In this paper, we propose the long range binocular eye gaze tracking system that works from 1.5 m to 2.5 m with allowing a head displacement in depth. The 3D position of the user's eye is obtained from the two wide angle cameras. A high resolution image of the eye is captured using the pan, tilt, and focus controlled narrow angle camera. The angles for maneuvering the pan and tilt motor are calculated by the proposed calibration method based on virtual camera model. The performance of the proposed calibration method is verified in terms of speed and convenience through the experiment. The narrow angle camera keeps tracking the eye while the user moves his head freely. The point-of-gaze (POG) of each eye onto the screen is calculated by using a 2D mapping based gaze estimation technique and the pupil center corneal reflection (PCCR) vector. PCCR vector modification method is applied to overcome the degradation in accuracy with displacements of the head in depth. The final POG is obtained by the average of the two POGs. Experimental results show that the proposed system robustly works for a large screen TV from 1.5 m to 2.5 m distance with displacements of the head in depth (+20 cm) and the average angular error is 0.69°.",
"title": ""
},
{
"docid": "450808fb3512ffd3bac692523e785c73",
"text": "This paper focuses on approaches to building a text automatic summarization model for news articles, generating a one-sentence summarization that mimics the style of a news title given some paragraphs. We managed to build and train two relatively complex deep learning models that outperformed our baseline model, which is a simple feed forward neural network. We explored Recurrent Neural Network models with encoder-decoder using LSTM and GRU cells, and with/without attention. We obtained some results that we then measured by calculating their respective ROUGE scores with respect to the actual references. For future work, we believe abstractive method of text summarization is a power way of summarizing texts, and we will continue with this approach. We think that the deficiencies currently embedded in our language model can be improved by better fine-tuning the model, more deep-learning method exploration, as well as larger training dataset.",
"title": ""
},
{
"docid": "de6348bb8e3b4c1cfd1fa83557ae50c9",
"text": "Cerebellar lesions can cause motor deficits and/or the cerebellar cognitive affective syndrome (CCAS; Schmahmann's syndrome). We used voxel-based lesion-symptom mapping to test the hypothesis that the cerebellar motor syndrome results from anterior lobe damage whereas lesions in the posterolateral cerebellum produce the CCAS. Eighteen patients with isolated cerebellar stroke (13 males, 5 females; 20-66 years old) were evaluated using measures of ataxia and neurocognitive ability. Patients showed a wide range of motor and cognitive performance, from normal to severely impaired; individual deficits varied according to lesion location within the cerebellum. Patients with damage to cerebellar lobules III-VI had worse ataxia scores: as predicted, the cerebellar motor syndrome resulted from lesions involving the anterior cerebellum. Poorer performance on fine motor tasks was associated primarily with strokes affecting the anterior lobe extending into lobule VI, with right-handed finger tapping and peg-placement associated with damage to the right cerebellum, and left-handed finger tapping associated with left cerebellar damage. Patients with the CCAS in the absence of cerebellar motor syndrome had damage to posterior lobe regions, with lesions leading to significantly poorer scores on language (e.g. right Crus I and II extending through IX), spatial (bilateral Crus I, Crus II, and right lobule VIII), and executive function measures (lobules VII-VIII). These data reveal clinically significant functional regions underpinning movement and cognition in the cerebellum, with a broad anterior-posterior distinction. Motor and cognitive outcomes following cerebellar damage appear to reflect the disruption of different cerebro-cerebellar motor and cognitive loops.",
"title": ""
},
{
"docid": "f4166e4121dbd6f6ab209e6d99aac63f",
"text": "In this paper, we propose several novel deep learning methods for object saliency detection based on the powerful convolutional neural networks. In our approach, we use a gradient descent method to iteratively modify an input image based on the pixel-wise gradients to reduce a cost function measuring the class-specific objectness of the image. The pixel-wise gradients can be efficiently computed using the back-propagation algorithm. The discrepancy between the modified image and the original one may be used as a saliency map for the image. Moreover, we have further proposed several new training methods to learn saliency-specific convolutional nets for object saliency detection, in order to leverage the available pixel-wise segmentation information. Our methods are extremely computationally efficient (processing 20-40 images per second in one GPU). In this work, we use the computed saliency maps for image segmentation. Experimental results on two benchmark tasks, namely Microsoft COCO and Pascal VOC 2012, have shown that our proposed methods can generate high-quality salience maps, clearly outperforming many existing methods. In particular, our approaches excel in handling many difficult images, which contain complex background, highly-variable salient objects, multiple objects, and/or very small salient objects.",
"title": ""
},
{
"docid": "e118177a0fc9fad704b2be958b01a873",
"text": "Safety stories specify safety requirements, using the EARS (Easy Requirements Specification) format. Software practitioners can use them in agile projects at lower levels of safety criticality to deal effectively with safety concerns.",
"title": ""
},
{
"docid": "fe6f81141e58bf5cf13bec80e033e197",
"text": "Recommender systems represent user preferences for the purpose of suggesting items to purchase or examine. They have become fundamental applications in electronic commerce and information access, providing suggestions that effectively prune large information spaces so that users are directed toward those items that best meet their needs and preferences. A variety of techniques have been proposed for performing recommendation, including content-based, collaborative, knowledge-based and other techniques. To improve performance, these methods have sometimes been combined in hybrid recommenders. This paper surveys the landscape of actual and possible hybrid recommenders, and introduces a novel hybrid, system that combines content-based recommendation and collaborative filtering to recommend restaurants.",
"title": ""
},
{
"docid": "6b252d02e013519d1bd12dfcb3641013",
"text": "BACKGROUND\nDuplex ultrasound investigation has become the reference standard in assessing the morphology and haemodynamics of the lower limb veins. The project described in this paper was an initiative of the Union Internationale de Phlébologie (UIP). The aim was to obtain a consensus of international experts on the methodology to be used for assessment of anatomy of superficial and perforating veins in the lower limb by ultrasound imaging.\n\n\nMETHODS\nThe authors performed a systematic review of the published literature on duplex anatomy of the superficial and perforating veins of the lower limbs; afterwards they invited a group of experts from a wide range of countries to participate in this project. Electronic submissions from the authors and the experts (text and images) were made available to all participants via the UIP website. The authors prepared a draft document for discussion at the UIP Chapter meeting held in San Diego, USA in August 2003. Following this meeting a revised manuscript was circulated to all participants and further comments were received by the authors and included in subsequent versions of the manuscript. Eventually, all participants agreed the final version of the paper.\n\n\nRESULTS\nThe experts have made detailed recommendations concerning the methods to be used for duplex ultrasound examination as well as the interpretation of images and measurements obtained. This document provides a detailed methodology for complete ultrasound assessment of the anatomy of the superficial and perforating veins in the lower limbs.\n\n\nCONCLUSIONS\nThe authors and a large group of experts have agreed a methodology for the investigation of the lower limbs venous system by duplex ultrasonography, with specific reference to the anatomy of the main superficial veins and perforators of the lower limbs in healthy and varicose subjects.",
"title": ""
},
{
"docid": "ff272e6b59a3069372a694f99963929d",
"text": "Nowadays, Information Technology (IT) plays an important role in efficiency and effectiveness of the organizational performance. As an IT application, Enterprise Resource Planning (ERP) systems is considered one of the most important IT applications because it enables the organizations to connect and interact with its administrative units in order to manage data and organize internal procedures. Many institutions use ERP systems, most notably Higher Education Institutions (HEIs). However, many projects fail or exceed scheduling and budget constraints; the rate of failure in HEIs sector is higher than in other sectors. With HEIs’ recent movement to implement ERP systems and the lack of research studies examining successful implementation in HEIs, this paper provides a critical literature review with a special focus on Saudi Arabia. Further, it defines Critical Success Factors (CSFs) contributing to the success of ERP implementation in HEIs. This paper is part of a larger research effort aiming to provide guidelines and useful findings that help HEIs to manage the challenges for ERP systems and define CSFs that will help practitioners to implement them in the Saudi context.",
"title": ""
},
{
"docid": "8f9bf08bb52e5c192512f7b43ed50ba7",
"text": "Finding the sparse solution of an underdetermined system of linear equations (the so called sparse recovery problem) has been extensively studied in the last decade because of its applications in many different areas. So, there are now many sparse recovery algorithms (and program codes) available. However, most of these algorithms have been developed for real-valued systems. This paper discusses an approach for using available real-valued algorithms (or program codes) to solve complex-valued problems, too. The basic idea is to convert the complex-valued problem to an equivalent real-valued problem and solve this new real-valued problem using any real-valued sparse recovery algorithm. Theoretical guarantees for the success of this approach will be discussed, too. On the other hand, a widely used sparse recovery idea is finding the minimum ℓ1 norm solution. For real-valued systems, this idea requires to solve a linear programming (LP) problem, but for complex-valued systems it needs to solve a second-order cone programming (SOCP) problem, which demands more computational load. However, based on the approach of this paper, the complex case can also be solved by linear programming, although the theoretical guarantee for finding the sparse solution is more limited.",
"title": ""
},
{
"docid": "72147e489de9053bf1a4844c2f0de717",
"text": "Video Question Answering is a challenging problem in visual information retrieval, which provides the answer to the referenced video content according to the question. However, the existing visual question answering approaches mainly tackle the problem of static image question, which may be ineffectively for video question answering due to the insufficiency of modeling the temporal dynamics of video contents. In this paper, we study the problem of video question answering by modeling its temporal dynamics with frame-level attention mechanism. We propose the attribute-augmented attention network learning framework that enables the joint frame-level attribute detection and unified video representation learning for video question answering. We then incorporate the multi-step reasoning process for our proposed attention network to further improve the performance. We construct a large-scale video question answering dataset. We conduct the experiments on both multiple-choice and open-ended video question answering tasks to show the effectiveness of the proposed method.",
"title": ""
},
{
"docid": "bfdbc3814d517df9859294bd53885aa2",
"text": "The Internet of Things (IoT) is the next big wave in computing characterized by large scale open ended heterogeneous network of things, with varying sensing, actuating, computing and communication capabilities. Compared to the traditional field of autonomic computing, the IoT is characterized by an open ended and highly dynamic ecosystem with variable workload and resource availability. These characteristics make it difficult to implement self-awareness capabilities for IoT to manage and optimize itself. In this work, we introduce a methodology to explore and learn the trade-offs of different deployment configurations to autonomously optimize the QoS and other quality attributes of IoT applications. Our experiments demonstrate that our proposed methodology can automate the efficient deployment of IoT applications in the presence of multiple optimization objectives and variable operational circumstances.",
"title": ""
},
{
"docid": "3a6c58a05427392750d15307fda4faec",
"text": "In this paper, we present the design of a low voltage bandgap reference (LVBGR) circuit for supply voltage of 1.2V which can generate an output reference voltage of 0.363V. Traditional BJT based bandgap reference circuits give very precise output reference but power and area consumed by these BJT devices is larger so for low supply bandgap reference we chose MOSFETs operating in subthreshold region based reference circuits. LVBGR circuits with less sensitivity to supply voltage and temperature is used in both analog and digital circuits like high precise comparators used in data converter, phase-locked loop, ring oscillator, memory systems, implantable biomedical product etc. In the proposed circuit subthreshold MOSFETs temperature characteristics are used to achieve temperature compensation of output voltage reference and it can work under very low supply voltage. A PMOS structure 2stage opamp which will be operating in subthreshold region is designed for the proposed LVBGR circuit whose gain is 89.6dB and phase margin is 74 °. Finally a LVBGR circuit is designed which generates output voltage reference of 0.364V given with supply voltage of 1.2 V with 10 % variation and temperature coefficient of 240ppm/ °C is obtained for output reference voltage variation with respect to temperature over a range of 0 to 100°C. The output reference voltage exhibits a variation of 230μV with a supply range of 1.08V to 1.32V at typical process corner. The proposed LVBGR circuit for 1.2V supply is designed with the Mentor Graphics Pyxis tool using 130nm technology with EldoSpice simulator. Overall current consumed by the circuit is 900nA and also the power consumed by the entire LVBGR circuit is 0.9μW and the PSRR of the LVBGR circuit is -70dB.",
"title": ""
},
{
"docid": "daef1d0005da14d3a5717bf400cd69e7",
"text": "Deep learning methods have typically been trained on large datasets in which many training examples are available. However, many real-world product datasets have only a small number of images available for each product. We explore the use of deep learning methods for recognizing object instances when we have only a single training example per class. We show that feedforward neural networks outperform state-of-the-art methods for recognizing objects from novel viewpoints even when trained from just a single image per object. To further improve our performance on this task, we propose to take advantage of a supplementary dataset in which we observe a separate set of objects from multiple viewpoints. We introduce a new approach for training deep learning methods for instance recognition with limited training data, in which we use an auxiliary multi-view dataset to train our network to be robust to viewpoint changes. We find that this approach leads to a more robust classifier for recognizing objects from novel viewpoints, outperforming previous state-of-the-art approaches including keypoint-matching, template-based techniques, and sparse coding.",
"title": ""
},
{
"docid": "6960f780dfc491c6cdcbb6c53fd32363",
"text": "We learn to compute optical flow by combining a classical spatial-pyramid formulation with deep learning. This estimates large motions in a coarse-to-fine approach by warping one image of a pair at each pyramid level by the current flow estimate and computing an update to the flow. Instead of the standard minimization of an objective function at each pyramid level, we train one deep network per level to compute the flow update. Unlike the recent FlowNet approach, the networks do not need to deal with large motions, these are dealt with by the pyramid. This has several advantages. First, our Spatial Pyramid Network (SPyNet) is much simpler and 96% smaller than FlowNet in terms of model parameters. This makes it more efficient and appropriate for embedded applications. Second, since the flow at each pyramid level is small (",
"title": ""
},
{
"docid": "b2853b59ffb0cb70bd2f4a3cb0c03e1d",
"text": "This paper presents a waveform modeling and generation method for speech bandwidth extension (BWE) using stacked dilated convolutional neural networks (CNNs) with causal or non-causal convolutional layers. Such dilated CNNs describe the predictive distribution for each wideband or high-frequency speech sample conditioned on the input narrowband speech samples. Distinguished from conventional frame-based BWE approaches, the proposed methods can model the speech waveforms directly and therefore avert the spectral conversion and phase estimation problems. Experimental results prove that the BWE methods proposed in this paper can achieve better performance than the state-of-the-art frame-based approach utilizing recurrent neural networks (RNNs) incorporating long shortterm memory (LSTM) cells in subjective preference tests.",
"title": ""
},
{
"docid": "b049e5249d3c0fc52706a54ee767480e",
"text": "In dialogical argumentation, it is often assumed that the involved parties will always correctly identify the intended statements posited by each other and realize all of the associated relations, conform to the three acceptability states (accepted, rejected, undecided), adjust their views whenever new and correct information comes in, and that a framework handling only attack relations is sufficient to represent their opinions. Although it is natural to make these assumptions as a starting point for further research, dropping some of them has become quite challenging. Probabilistic argumentation is one of the approaches that can be harnessed for more accurate user modelling. The epistemic approach allows us to represent how much a given argument is believed or disbelieved by a given person, offering us the possibility to express more than just three agreement states. It comes equipped with a wide range of postulates, including those that do not make any restrictions concerning how initial arguments should be viewed. Thus, this approach is potentially more suitable for handling beliefs of the people that have not fully disclosed their opinions or counterarguments with respect to standard Dung’s semantics. The constellation approach can be used to represent the views of different people concerning the structure of the framework we are dealing with, including situations in which not all relations are acknowledged or when they are seen differently than intended. Finally, bipolar argumentation frameworks can be used to express both positive and negative relations between arguments. In this paper we will describe the results of an experiment in which participants were asked to judge dialogues in terms of agreement and structure. We will compare our findings with the aforementioned assumptions as well as with the constellation and epistemic approaches to probabilistic argumentation and bipolar argumentation. 
Keywords— Dialogical argumentation, probabilistic argumentation, abstract argumentation ∗This research is funded by EPSRC Project EP/N008294/1 “Framework for Computational Persuasion”.We thank the reviewers for their valuable comments that helped us to improve this paper.",
"title": ""
},
{
"docid": "6e8a9c37672ec575821da5c9c3145500",
"text": "As video games become increasingly popular pastimes, it becomes more important to understand how different individuals behave when they play these games. Previous research has focused mainly on behavior in massively multiplayer online role-playing games; therefore, in the current study we sought to extend on this research by examining the connections between personality traits and behaviors in video games more generally. Two hundred and nineteen university students completed measures of personality traits, psychopathic traits, and a questionnaire regarding frequency of different behaviors during video game play. A principal components analysis of the video game behavior questionnaire revealed four factors: Aggressing, Winning, Creating, and Helping. Each behavior subscale was significantly correlated with at least one personality trait. Men reported significantly more Aggressing, Winning, and Helping behavior than women. Controlling for participant sex, Aggressing was negatively correlated with Honesty–Humility, Helping was positively correlated with Agreeableness, and Creating was negatively correlated with Conscientiousness. Aggressing was also positively correlated with all psychopathic traits, while Winning and Creating were correlated with one psychopathic trait each. Frequency of playing video games online was positively correlated with the Aggressing, Winning, and Helping scales, but not with the Creating scale. The results of the current study provide support for previous research on personality and behavior in massively multiplayer online role-playing games. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "604b46c973be0a277faa96a407dc845f",
"text": "A nonlinear dynamic model for a quadrotor unmanned aerial vehicle is presented with a new vision of state parameter control which is based on Euler angles and open loop positions state observer. This method emphasizes on the control of roll, pitch and yaw angle rather than the translational motions of the UAV. For this reason the system has been presented into two cascade partial parts, the first one relates the rotational motion whose the control law is applied in a closed loop form and the other one reflects the translational motion. A dynamic feedback controller is developed to transform the closed loop part of the system into linear, controllable and decoupled subsystem. The wind parameters estimation of the quadrotor is used to avoid more sensors. Hence an estimator of resulting aerodynamic moments via Lyapunov function is developed. Performance and robustness of the proposed controller are tested in simulation.",
"title": ""
},
{
"docid": "5f49c93d7007f0f14f1410ce7805b29a",
"text": "Die Psychoedukation im Sinne eines biopsychosozialen Schmerzmodells zielt auf das Erkennen und Verändern individueller schmerzauslösender und -aufrechterhaltender Faktoren ab. Der Einfluss kognitiver Bewertungen, emotionaler Verarbeitungsprozesse und schmerzbezogener Verhaltensweisen steht dabei im Mittelpunkt. Die Anregung und Anleitung zu einer verbesserten Selbstbeobachtung stellt die Voraussetzung zum Einsatz aktiver Selbstkontrollstrategien und zur Erhöhung der Selbstwirksamkeitserwartung dar. Dazu zählt die Entwicklung und Erarbeitung von Schmerzbewältigungsstrategien wie z. B. Aufmerksamkeitslenkung und Genusstraining. Eine besondere Bedeutung kommt dem Aufbau einer Aktivitätenregulation zur Strukturierung eines angemessenen Verhältnisses von Erholungs- und Anforderungsphasen zu. Interventionsmöglichkeiten stellen hier die Vermittlung von Entspannungstechniken, Problemlösetraining, spezifisches Kompetenztraining sowie Elemente der kognitiven Therapie dar. Der Aufbau alternativer kognitiver und handlungsbezogener Lösungsansätze dient einer verbesserten Bewältigung internaler und externaler Stressoren. Genutzt werden die förderlichen Bedingungen gruppendynamischer Prozesse. Einzeltherapeutische Interventionen dienen der Bearbeitung spezifischer psychischer Komorbiditäten und der individuellen Unterstützung bei der beruflichen und sozialen Wiedereingliederung. Providing the patient with a pain model based on the biopsychosocial approach is one of the most important issues in psychological intervention. Illness behaviour is influenced by pain-eliciting and pain-aggravating thoughts. Identification and modification of these thoughts is essential and aims to change cognitive evaluations, emotional processing, and pain-referred behaviour. Improved self-monitoring concerning maladaptive thoughts, feelings, and behaviour enables functional coping strategies (e.g. attention diversion and learning to enjoy things) and enhances self-efficacy expectancies. 
Of special importance is the establishment of an appropriate balance between stress and recreation. Intervention options include teaching relaxation techniques, problem-solving strategies, and specific skills as well as applying appropriate elements of cognitive therapy. The development of alternative cognitive and action-based strategies improves the patient’s ability to cope with internal and external stressors. All of the psychological elements are carried out in a group setting. Additionally, individual therapy is offered to treat comorbidities or to support reintegration into the patient’s job.",
"title": ""
}
] | scidocsrr |
41cf9a3cade6991077fdfdff28417747 | Data Mining Techniques for Detecting Household Characteristics Based on Smart Meter Data | [
{
"docid": "8e4eb520c80dfa8d39c69b1273ea89c8",
"text": "This paper examines the potential impact of automatic meter reading (AMR) on short-term load forecasting for a residential customer. Real-time measurement data from customers' smart meters provided by a utility company is modeled as the sum of a deterministic component and a Gaussian noise signal. The shaping filter for the Gaussian noise is calculated using spectral analysis. Kalman filtering is then used for load prediction. The accuracy of the proposed method is evaluated for different sampling periods and planning horizons. The results show that the availability of more real-time measurement data improves the accuracy of the load forecast significantly. However, the improved prediction accuracy can come at a high computational cost. Our results qualitatively demonstrate that achieving the desired prediction accuracy while avoiding a high computational load requires limiting the volume of data used for prediction. Consequently, the measurement sampling rate must be carefully selected as a compromise between these two conflicting requirements.",
"title": ""
},
{
"docid": "841f2ab48d111a6b70b2a3171c155f44",
"text": "In this paper we present SPADE, a new algorithm for fast discovery of Sequential Patterns. The existing solutions to this problem make repeated database scans, and use complex hash structures which have poor locality. SPADE utilizes combinatorial properties to decompose the original problem into smaller sub-problems, that can be independently solved in main-memory using efficient lattice search techniques, and using simple join operations. All sequences are discovered in only three database scans. Experiments show that SPADE outperforms the best previous algorithm by a factor of two, and by an order of magnitude with some pre-processed data. It also has linear scalability with respect to the number of input-sequences, and a number of other database parameters. Finally, we discuss how the results of sequence mining can be applied in a real application domain.",
"title": ""
}
] | [
{
"docid": "62ca2853492b017a052b9bf5e9b955ff",
"text": "This paper describes our attempt to build a sentiment analysis system for Indonesian tweets. With this system, we can study and identify sentiments and opinions in a text or document computationally. We used four thousand manually labeled tweets collected in February and March 2016 to build the model. Because of the variety of content in tweets, we analyze tweets into eight groups in total, including pos(itive), neg(ative), and neu(tral). Finally, we obtained 73.2% accuracy with Long Short Term Memory (LSTM) without normalizer.",
"title": ""
},
{
"docid": "3223563162967868075a43ca86c1d31a",
"text": "Deep learning research aims at discovering learning algorithms that discover multiple levels of distributed representations, with higher levels representing more abstract concepts. Although the study of deep learning has already led to impressive theoretical results, learning algorithms and breakthrough experiments, several challenges lie ahead. This paper proposes to examine some of these challenges, centering on the questions of scaling deep learning algorithms to much larger models and datasets, reducing optimization difficulties due to ill-conditioning or local minima, designing more efficient and powerful inference and sampling procedures, and learning to disentangle the factors of variation underlying the observed data. It also proposes a few forward-looking research directions aimed at overcoming these",
"title": ""
},
{
"docid": "559a4175347e5fea57911d9b8c5080e6",
"text": "Online social networks offering various services have become ubiquitous in our daily life. Meanwhile, users nowadays are usually involved in multiple online social networks simultaneously to enjoy specific services provided by different networks. Formally, social networks that share some common users are named as partially aligned networks. In this paper, we want to predict the formation of social links in multiple partially aligned social networks at the same time, which is formally defined as the multi-network link (formation) prediction problem. In multiple partially aligned social networks, users can be extensively correlated with each other by various connections. To categorize these diverse connections among users, 7 \"intra-network social meta paths\" and 4 categories of \"inter-network social meta paths\" are proposed in this paper. These \"social meta paths\" can cover a wide variety of connection information in the network, some of which can be helpful for solving the multi-network link prediction problem but some can be not. To utilize useful connection, a subset of the most informative \"social meta paths\" are picked, the process of which is formally defined as \"social meta path selection\" in this paper. An effective general link formation prediction framework, Mli (Multi-network Link Identifier), is proposed in this paper to solve the multi-network link (formation) prediction problem. Built with heterogenous topological features extracted based on the selected \"social meta paths\" in the multiple partially aligned social networks, Mli can help refine and disambiguate the prediction results reciprocally in all aligned networks. Extensive experiments conducted on real-world partially aligned heterogeneous networks, Foursquare and Twitter, demonstrate that Mli can solve the multi-network link prediction problem very well.",
"title": ""
},
{
"docid": "1b0a8696b0bf79c118c5b02a7a2f4d7c",
"text": "Mechanical properties of living cells are commonly described in terms of the laws of continuum mechanics. The purpose of this report is to consider the implications of an alternative approach that emphasizes the discrete nature of stress bearing elements in the cell and is based on the known structural properties of the cytoskeleton. We have noted previously that tensegrity architecture seems to capture essential qualitative features of cytoskeletal shape distortion in adherent cells (Ingber, 1993a; Wang et al., 1993). Here we extend those qualitative notions into a formal microstructural analysis. On the basis of that analysis we attempt to identify unifying principles that might underlie the shape stability of the cytoskeleton. For simplicity, we focus on a tensegrity structure containing six rigid struts interconnected by 24 linearly elastic cables. Cables carry initial tension (‘‘prestress’’) counterbalanced by compression of struts. Two cases of interconnectedness between cables and struts are considered: one where they are connected by pin-joints, and the other where the cables run through frictionless loops at the junctions. At the molecular level, the pinned structure may represent the case in which different cytoskeletal filaments are cross-linked whereas the looped structure represents the case where they are free to slip past one another. The system is then subjected to uniaxial stretching. Using the principal of virtual work, stretching force vs. extension and structural stiffness vs. stretching force relationships are calculated for different prestresses. The stiffness is found to increase with increasing prestress and, at a given prestress, to increase approximately linearly with increasing stretching force. This behavior is consistent with observations in living endothelial cells exposed to shear stresses (Wang & Ingber, 1994). 
At a given prestress, the pinned structure is found to be stiffer than the looped one, a result consistent with data on mechanical behavior of isolated, cross-linked and uncross-linked actin networks (Wachsstock et al., 1993). On the basis of our analysis we concluded that architecture and the prestress of the cytoskeleton might be key features that underlie a cell’s ability to regulate its shape. 7 1996 Academic Press Limited",
"title": ""
},
{
"docid": "75189509743ba4f329b5ea5877f0e8ad",
"text": "The psychology of conspiracy theory beliefs is not yet well understood, although research indicates that there are stable individual differences in conspiracist ideation - individuals' general tendency to engage with conspiracy theories. Researchers have created several short self-report measures of conspiracist ideation. These measures largely consist of items referring to an assortment of prominent conspiracy theories regarding specific real-world events. However, these instruments have not been psychometrically validated, and this assessment approach suffers from practical and theoretical limitations. Therefore, we present the Generic Conspiracist Beliefs (GCB) scale: a novel measure of individual differences in generic conspiracist ideation. The scale was developed and validated across four studies. In Study 1, exploratory factor analysis of a novel 75-item measure of non-event-based conspiracist beliefs identified five conspiracist facets. The 15-item GCB scale was developed to sample from each of these themes. Studies 2, 3, and 4 examined the structure and validity of the GCB, demonstrating internal reliability, content, criterion-related, convergent and discriminant validity, and good test-retest reliability. In sum, this research indicates that the GCB is a psychometrically sound and practically useful measure of conspiracist ideation, and the findings add to our theoretical understanding of conspiracist ideation as a monological belief system unpinned by a relatively small number of generic assumptions about the typicality of conspiratorial activity in the world.",
"title": ""
},
{
"docid": "2271347e3b04eb5a73466aecbac4e849",
"text": "[1] Robin Jia, Percy Liang. “Adversarial examples for evaluating reading comprehension systems.” In EMNLP 2017. [2] Caiming Xiong, Victor Zhong, Richard Socher. “DCN+ Mixed objective and deep residual coattention for question answering.” In ICLR 2018. [3] Danqi Chen, Adam Fisch, Jason Weston, Antoine Bordes. “Reading wikipedia to answer open-domain questions.” In ACL 2017. Check out more of our work at https://einstein.ai/research Method",
"title": ""
},
{
"docid": "65d3d020ee63cdeb74cb3da159999635",
"text": "We investigated the effects of format of an initial test and whether or not students received corrective feedback on that test on a final test of retention 3 days later. In Experiment 1, subjects studied four short journal papers. Immediately after reading each paper, they received either a multiple choice (MC) test, a short answer (SA) test, a list of statements to read, or a filler task. The MC test, SA test, and list of statements tapped identical facts from the studied material. No feedback was provided during the initial tests. On a final test 3 days later (consisting of MC and SA questions), having had an intervening MC test led to better performance than an intervening SA test, but the intervening MC condition did not differ significantly from the read statements condition. To better equate exposure to test-relevant information, corrective feedback during the initial tests was introduced in Experiment 2. With feedback provided, having had an intervening SA test led to the best performance on the final test, suggesting that the more demanding the retrieval processes engendered by the intervening test, the greater the benefit to final retention. The practical application of these findings is that regular SA quizzes with feedback may be more effective in enhancing student learning than repeated presentation of target facts or taking an MC quiz.",
"title": ""
},
{
"docid": "dd9b6b67f19622bfffbad427b93a1829",
"text": "Low-resolution face recognition (LRFR) has received increasing attention over the past few years. Its applications lie widely in the real-world environment when highresolution or high-quality images are hard to capture. One of the biggest demands for LRFR technologies is video surveillance. As the the number of surveillance cameras in the city increases, the videos that captured will need to be processed automatically. However, those videos or images are usually captured with large standoffs, arbitrary illumination condition, and diverse angles of view. Faces in these images are generally small in size. Several studies addressed this problem employed techniques like super resolution, deblurring, or learning a relationship between different resolution domains. In this paper, we provide a comprehensive review of approaches to low-resolution face recognition in the past five years. First, a general problem definition is given. Later, systematically analysis of the works on this topic is presented by catogory. In addition to describing the methods, we also focus on datasets and experiment settings. We further address the related works on unconstrained lowresolution face recognition and compare them with the result that use synthetic low-resolution data. Finally, we summarized the general limitations and speculate a priorities for the future effort.",
"title": ""
},
{
"docid": "6d594c21ff1632b780b510620484eb62",
"text": "The last several years have seen intensive interest in exploring neural-networkbased models for machine comprehension (MC) and question answering (QA). In this paper, we approach the problems by closely modelling questions in a neural network framework. We first introduce syntactic information to help encode questions. We then view and model different types of questions and the information shared among them as an adaptation task and proposed adaptation models for them. On the Stanford Question Answering Dataset (SQuAD), we show that these approaches can help attain better results over a competitive baseline.",
"title": ""
},
{
"docid": "e02707b857a51a5f4b98de1b592f5cc3",
"text": "This paper presents a formal analysis of the train to trackside communication protocols used in the European Railway Tra c Management System (ERTMS) standard, and in particular the EuroRadio protocol. This protocol is used to secure important commands sent between train and trackside, such as movement authority and emergency stop messages. We perform our analysis using the applied pi-calculus and the ProVerif tool. This provides a powerful and expressive framework for protocol analysis and allows to check a wide range of security properties based on checking correspondence assertions. We show how it is possible to model the protocol’s counter-style timestamps in this framework. We define ProVerif assertions that allow us to check for secrecy of long and short term keys, authenticity of entities, message insertion, deletion, replay and reordering. We find that the protocol provides most of these security features, however it allows undetectable message deletion and the forging of emergency messages. We discuss the relevance of these results and make recommendations to further enhance the security of ERTMS.",
"title": ""
},
{
"docid": "25b183ce7ecc4b9203686c7ea68aacea",
"text": "A fundamental, and still largely unanswered, question in the context of Generative Adversarial Networks (GANs) is whether GANs are actually able to capture the key characteristics of the datasets they are trained on. The current approaches to examining this issue require significant human supervision, such as visual inspection of sampled images, and often offer only fairly limited scalability. In this paper, we propose new techniques that employ a classification–based perspective to evaluate synthetic GAN distributions and their capability to accurately reflect the essential properties of the training data. These techniques require only minimal human supervision and can easily be scaled and adapted to evaluate a variety of state-of-the-art GANs on large, popular datasets. Our analysis indicates that GANs have significant problems in reproducing the more distributional properties of the training dataset. In particular, when seen through the lens of classification, the diversity of GAN data is orders of magnitude less than that of the original data.",
"title": ""
},
{
"docid": "2c73318b59e5d7101884f2563dd700b5",
"text": "BACKGROUND\nEffective control of (upright) body posture requires a proper representation of body orientation. Stroke patients with pusher syndrome were shown to suffer from severely disturbed perception of own body orientation. They experience their body as oriented 'upright' when actually tilted by nearly 20 degrees to the ipsilesional side. Thus, it can be expected that postural control mechanisms are impaired accordingly in these patients. Our aim was to investigate pusher patients' spontaneous postural responses of the non-paretic leg and of the head during passive body tilt.\n\n\nMETHODS\nA sideways tilting motion was applied to the trunk of the subject in the roll plane. Stroke patients with pusher syndrome were compared to stroke patients not showing pushing behaviour, patients with acute unilateral vestibular loss, and non brain damaged subjects.\n\n\nRESULTS\nCompared to all groups without pushing behaviour, the non-paretic leg of the pusher patients showed a constant ipsiversive tilt across the whole tilt range for an amount which was observed in the non-pusher subjects when they were tilted for about 15 degrees into the ipsiversive direction.\n\n\nCONCLUSION\nThe observation that patients with acute unilateral vestibular loss showed no alterations of leg posture indicates that disturbed vestibular afferences alone are not responsible for the disordered leg responses seen in pusher patients. Our results may suggest that in pusher patients a representation of body orientation is disturbed that drives both conscious perception of body orientation and spontaneous postural adjustment of the non-paretic leg in the roll plane. The investigation of the pusher patients' leg-to-trunk orientation thus could serve as an additional bedside tool to detect pusher syndrome in acute stroke patients.",
"title": ""
},
{
"docid": "0950052c92b4526c253acc0d4f0f45a0",
"text": "Pictogram communication is successful when participants at both end of the communication channel share a common pictogram interpretation. Not all pictograms carry universal interpretation, however; the issue of ambiguous pictogram interpretation must be addressed to assist pictogram communication. To unveil the ambiguity possible in pictogram interpretation, we conduct a human subject experiment to identify culture-specific criteria employed by humans by detecting cultural differences in pictogram interpretations. Based on the findings, we propose a categorical semantic relevance measure which calculates how relevant a pictogram is to a given interpretation in terms of a given pictogram category. The proposed measure is applied to categorized pictogram interpretations to enhance pictogram retrieval performance. The WordNet, the ChaSen, and the EDR Electronic Dictionary registered to the Language Grid are utilized to merge synonymous pictogram interpretations and to categorize pictogram interpretations into super-concept categories. We show how the Language Grid can assist the crosscultural research process.",
"title": ""
},
{
"docid": "b1cabb319ce759343ad3f043c7d86b14",
"text": "We consider the problem of scheduling tasks requiring certain processing times on one machine so that the busy time of the machine is maximized. The problem is to find a probabilistic online algorithm with reasonable worst case performance ratio. We answer an open problem of Lipton and Tompkins concerning the best possible ratio that can be achieved. Furthermore, we extend their results to anm-machine analogue. Finally, a variant of the problem is analyzed, in which the machine is provided with a buffer to store one job. Wir betrachten das Problem der Zuteilung von Aufgaben bestimmter Rechenzeit auf einem Rechner, um so seine Auslastung zu maximieren. Die Aufgabe besteht darin, einen probabilistischen Online-Algorithmus mit vernünftigem worst-case Performance-Verhältnis zu finden. Wir geben die Antwort auf ein offenes Problem von Lipton und Tompkins, das das bestmögliche Verhältnis betrifft. Weiter verallgemeinern wir ihre Ergebnisse auf einm-Maschinen-Analogon. Schließlich wird eine Variante des Problems analysiert, in dem der Rechner mit einem Zwischenspeicher für einen Job versehen ist.",
"title": ""
},
{
"docid": "5063adc5020cacddb5a4c6fd192fc17e",
"text": "In this paper, A Novel 1 to 4 modified Wilkinson power divider operating over the frequency range of (3 GHz to 8 GHz) is proposed. The design perception of the proposed divider based on two different stages and printed on FR4 (Epoxy laminate material) with the thickness of 1.57mm and єr =4.3 respectively. The modified design of this power divider including curved corners instead of the sharp edges and some modification in the length of matching stubs. In addition, this paper contain the power divider with equal power split at all ports, reasonable insertion loss, acceptable return loss below −10 dB, good impedance matching at all ports and satisfactory isolation performance has been obtained over the mentioned frequency range. The design concept and optimization development is practicable through CST simulation software.",
"title": ""
},
{
"docid": "66af4d496e98e4b407922fbe9970a582",
"text": "Automatic summarization of open-domain spoken dialogues is a relatively new research area. This article introduces the task and the challenges involved and motivates and presents an approach for obtaining automatic-extract summaries for human transcripts of multiparty dialogues of four different genres, without any restriction on domain. We address the following issues, which are intrinsic to spoken-dialogue summarization and typically can be ignored when summarizing written text such as news wire data: (1) detection and removal of speech disfluencies; (2) detection and insertion of sentence boundaries; and (3) detection and linking of cross-speaker information units (question-answer pairs). A system evaluation is performed using a corpus of 23 dialogue excerpts with an average duration of about 10 minutes, comprising 80 topical segments and about 47,000 words total. The corpus was manually annotated for relevant text spans by six human annotators. The global evaluation shows that for the two more informal genres, our summarization system using dialogue-specific components significantly outperforms two baselines: (1) a maximum-marginal-relevance ranking algorithm using TFIDF term weighting, and (2) a LEAD baseline that extracts the first n words from a text.",
"title": ""
},
{
"docid": "12cde236faadf6be0edf7b3699fc7a6c",
"text": "for 4F2 DRAM Cell Array with sub 40 nm Technology Jae-Man Yoon, Kangyoon Lee, Seung-Bae Park, Seong-Goo Kim, Hyoung-Won Seo, Young-Woong Son, Bong-Soo Kim, Hyun-Woo Chung, Choong-Ho Lee*, Won-Sok Lee* *, Dong-Chan Kim* * *, Donggun Park*, Wonshik Lee and Byung-Il Ryu ATD Team, Device Research Team*, CAEP*, PD Team***, Semiconductor R&D Division, Samsung Electronics Co., San #24, Nongseo-Dong, Kiheung-Gu, Yongin-City, Kyunggi-Do, 449-711, Korea Tel) 82-31-209-4741, Fax) 82-31-209-3274, E-mail)",
"title": ""
},
{
"docid": "12d565f0aaa6960e793b96f1c26cb103",
"text": "The new western Mode 5 IFF (Identification Foe or Friend) system is introduced. Based on analysis of signal features and format characteristics of Mode 5, a new signal detection method using phase and Amplitude correlation is put forward. This method utilizes odd and even channels to separate the signal, and then the separated signals are performed correlation with predefined mask. Through detecting preamble, the detection of Mode 5 signal is implemented. Finally, simulation results show the validity of the proposed method.",
"title": ""
},
{
"docid": "e5bf42029c05ceadebd9fc4205446192",
"text": "To demonstrate generality and to illustrate some additional properties of the method, we also apply the explanation method to a second domain: classifying news stories. The 20 newsgroups data set is a benchmark data set used in document classification research. It contains about 20,000 news items from 20 newsgroups representing different topics, and has a vocabulary of 26,214 different words (after stemming) (Lang 1995). The 20 topics can be categorized into seven top-level usenet categories with related news items: alternative (alt), computers (comp), miscellaneous (misc), recreation (rec), science (sci), society (soc), and talk (talk). One typical problem studied with this data set is to build classifiers to identify stories from these seven high-level news categories, which for our purposes gives a wide variety of different topics across which to provide document classification explanations. Looking at the seven high-level categories also provides realistic richness to the task: in many real document classification tasks, the class of interest is actually a collection (disjunction) of related concepts (consider, for example, “hate speech” in the safe-advertising domain).",
"title": ""
},
{
"docid": "733dc724bd0abf127c05a7717476a542",
"text": "By analogy with Internet of things, Internet of vehicles (IoV) that enables ubiquitous information exchange and content sharing among vehicles with little or no human intervention is a key enabler for the intelligent transportation industry. In this paper, we study how to combine both the physical and social layer information for realizing rapid content dissemination in device-to-device vehicle-to-vehicle (D2D-V2V)-based IoV networks. In the physical layer, headway distance of vehicles is modeled as a Wiener process, and the connection probability of D2D-V2V links is estimated by employing the Kolmogorov equation. In the social layer, the social relationship tightness that represents content selection similarities is obtained by Bayesian nonparametric learning based on real-world social big data, which are collected from the largest Chinese microblogging service Sina Weibo and the largest Chinese video-sharing site Youku. Then, a price-rising-based iterative matching algorithm is proposed to solve the formulated joint peer discovery, power control, and channel selection problem under various quality-of-service requirements. Finally, numerical results demonstrate the effectiveness and superiority of the proposed algorithm from the perspectives of weighted sum rate and matching satisfaction gains.",
"title": ""
}
] | scidocsrr |
cf927ae8013d8b1a54636d89d12a9e48 | Stochastic Gradient Methods for Distributionally Robust Optimization with f-divergences | [
{
"docid": "0e6b54a70a1604caf7449c8eb1286d5e",
"text": "Machine learning is one of the fastest growing areas of computer science, with far-reaching applications. The aim of this textbook is to introduce machine learning, and the algorithmic paradigms it offers, in a principled way. The book provides an extensive theoretical account of the fundamental ideas underlying machine learning and the mathematical derivations that transform these principles into practical algorithms. Following a presentation of the basics of the field, the book covers a wide array of central topics that have not been addressed by previous textbooks. These include a discussion of the computational complexity of learning and the concepts of convexity and stability; important algorithmic paradigms including stochastic gradient descent, neural networks, and structured output learning; and emerging theoretical concepts such as the PAC-Bayes approach and compression-based bounds. Designed for an advanced undergraduate or beginning graduate course, the text makes the fundamentals and algorithms of machine learning accessible to students and nonexpert readers in statistics, computer science, mathematics, and engineering.",
"title": ""
}
] | [
{
"docid": "1bf69a2bffe2652e11ff8ec7f61b7c0d",
"text": "This research proposes and validates a design theory for digital platforms that support online communities (DPsOC). It addresses ways in which digital platforms can effectively support social interactions in online communities. Drawing upon prior literature on IS design theory, online communities, and platforms, we derive an initial set of propositions for designing effective DPsOC. Our overarching proposition is that three components of digital platform architecture (core, interface, and complements) should collectively support the mix of the three distinct types of social interaction structures of online community (information sharing, collaboration, and collective action). We validate the initial propositions and generate additional insights by conducting an in-depth analysis of an European digital platform for elderly care assistance. We further validate the propositions by analyzing three widely used digital platforms, including Twitter, Wikipedia, and Liquidfeedback, and we derive additional propositions and insights that can guide DPsOC design. We discuss the implications of this research for research and practice. Journal of Information Technology advance online publication, 10 February 2015; doi:10.1057/jit.2014.37",
"title": ""
},
{
"docid": "b31bae9e7c95e070318df8279cdd18d5",
"text": "This article focuses on the ethical analysis of cyber warfare, the warfare characterised by the deployment of information and communication technologies. It addresses the vacuum of ethical principles surrounding this phenomenon by providing an ethical framework for the definition of such principles. The article is divided in three parts. The first one considers cyber warfare in relation to the so-called information revolution and provides a conceptual analysis of this kind of warfare. The second part focuses on the ethical problems posed by cyber warfare and describes the issues that arise when Just War Theory is endorsed to address them. The final part introduces Information Ethics as a suitable ethical framework for the analysis of cyber warfare, and argues that the vacuum of ethical principles for this kind warfare is overcome when Just War Theory and Information Ethics are merged together.",
"title": ""
},
{
"docid": "485f7998056ef7a30551861fad33bef4",
"text": "Research has shown close connections between personality and subjective well-being (SWB), suggesting that personality traits predispose individuals to experience different levels of SWB. Moreover, numerous studies have shown that self-efficacy is related to both personality factors and SWB. Extending previous research, we show that general self-efficacy functionally connects personality factors and two components of SWB (life satisfaction and subjective happiness). Our results demonstrate the mediating role of self-efficacy in linking personality factors and SWB. Consistent with our expectations, the influence of neuroticism, extraversion, openness, and conscientiousness on life satisfaction was mediated by self-efficacy. Furthermore, self-efficacy mediated the influence of openness and conscientiousness, but not that of neuroticism and extraversion, on subjective happiness. Results highlight the importance of cognitive beliefs in functionally linking personality traits and SWB.",
"title": ""
},
{
"docid": "2fb4fbd96c4da572ae008419b57458dd",
"text": "A main puzzle of deep networks revolves around the apparent absence of overfitting intended as robustness of the expected error against overparametrization, despite the large capacity demonstrated by zero training error on randomly labeled data. In this note, we show that the dynamics associated to gradient descent minimization of nonlinear networks is topologically equivalent, near the asymptotically stable minima of the empirical error, to a gradient system in a quadratic potential with a degenerate (for square loss) or almost degenerate (for logistic or crossentropy loss) Hessian. The proposition depends on the qualitative theory of dynamical systems and is supported by numerical results. The result extends to deep nonlinear networks two key properties of gradient descent for linear networks, that have been recently recognized (1) to provide a form of implicit regularization: 1. For classification, which is the main application of today’s deep networks, there is asymptotic convergence to the maximum margin solution by minimization of loss functions such as the logistic, the cross entropy and the exp-loss . The maximum margin solution guarantees good classification error for “low noise” datasets. Importantly, this property holds independently of the initial conditions. Because of this property, our proposition guarantees a maximum margin solution also for deep nonlinear networks. 2. Gradient descent enforces a form of implicit regularization controlled by the number of iterations, and asymptotically converges to the minimum norm solution for appropriate initial conditions of gradient descent. This implies that there is usually an optimum early stopping that avoids overfitting of the expected risk. This property, valid for the square loss and many other loss functions, is relevant especially for regression. In the case of deep nonlinear networks the solution however is not expected to be strictly minimum norm, unlike the linear case. 
The robustness to overparametrization has suggestive implications for the robustness of the architecture of deep convolutional networks with respect to the curse of dimensionality.",
"title": ""
},
{
"docid": "c4badf270d5f5c056aa869af51aeb043",
"text": "This paper deals with a creation of the RGB-D database by using Microsoft Kinect device. One of the main uses of Kinect is measurement and subsequent creation the so-called depth maps of the 3D scenes. The maps obtained by Kinect can be improved. Existence of databases suitable for the experiment is very important for research. One of the possible research directions is use of infrared version of the investigated scene for improvement of the depth map. However, the databases of the Kinect data which would contain the corresponding infrared images do not exist. Therefore, our aim was to create such database. We want to increase the usability of the database by adding stereo images. Moreover, the same scenes were captured by Kinect v2. It was also investigated the impact of simultaneous use Kinect v1 and Kinect v2 to improve depth map investigated the scene. The database contains sequences of objects on turntable and simple scenes containing several objects.",
"title": ""
},
{
"docid": "16e1174454d62c69d831effce532bcad",
"text": "We report on the quantitative determination of acetaminophen (paracetamol; NAPAP-d(0)) in human plasma and urine by GC-MS and GC-MS/MS in the electron-capture negative-ion chemical ionization (ECNICI) mode after derivatization with pentafluorobenzyl (PFB) bromide (PFB-Br). Commercially available tetradeuterated acetaminophen (NAPAP-d(4)) was used as the internal standard. NAPAP-d(0) and NAPAP-d(4) were extracted from 100-μL aliquots of plasma and urine with 300 μL ethyl acetate (EA) by vortexing (60s). After centrifugation the EA phase was collected, the solvent was removed under a stream of nitrogen gas, and the residue was reconstituted in acetonitrile (MeCN, 100 μL). PFB-Br (10 μL, 30 vol% in MeCN) and N,N-diisopropylethylamine (10 μL) were added and the mixture was incubated for 60 min at 30 °C. Then, solvents and reagents were removed under nitrogen and the residue was taken up with 1000 μL of toluene, from which 1-μL aliquots were injected in the splitless mode. GC-MS quantification was performed by selected-ion monitoring ions due to [M-PFB](-) and [M-PFB-H](-), m/z 150 and m/z 149 for NAPAP-d(0) and m/z 154 and m/z 153 for NAPAP-d(4), respectively. GC-MS/MS quantification was performed by selected-reaction monitoring the transition m/z 150 → m/z 107 and m/z 149 → m/z 134 for NAPAP-d(0) and m/z 154 → m/z 111 and m/z 153 → m/z 138 for NAPAP-d(4). The method was validated for human plasma (range, 0-130 μM NAPAP-d(0)) and urine (range, 0-1300 μM NAPAP-d(0)). Accuracy (recovery, %) ranged between 89 and 119%, and imprecision (RSD, %) was below 19% in these matrices and ranges. A close correlation (r>0.999) was found between the concentrations measured by GC-MS and GC-MS/MS. By this method, acetaminophen can be reliably quantified in small plasma and urine sample volumes (e.g., 10 μL). The analytical performance of the method makes it especially useful in pediatrics.",
"title": ""
},
{
"docid": "874dd5c2b3b3edc0d13aac33b60da21f",
"text": "Firefighters suffer a variety of life-threatening risks, including line-of-duty deaths, injuries, and exposures to hazardous substances. Support for reducing these risks is important. We built a partially occluded object reconstruction method on augmented reality glasses for first responders. We used a deep learning based on conditional generative adversarial networks to train associations between the various images of flammable and hazardous objects and their partially occluded counterparts. Our system then reconstructed an image of a new flammable object. Finally, the reconstructed image was superimposed on the input image to provide \"transparency\". The system imitates human learning about the laws of physics through experience by learning the shape of flammable objects and the flame characteristics.",
"title": ""
},
{
"docid": "177d8229b6e4649de33699d1bfcb1be8",
"text": "A novel transition from rectangular waveguide to differential microstrip lines is illustrated in this paper. It transfers the dominant TE10 mode signal in a rectangular waveguide to a differential mode signal in the coupled microstrip lines. The common mode signal in the coupled microstrip lines is highly rejected. The transition was designed at 75 GHz, which is the center frequency of E band and simulated by a 3D EM simulator. It has a wide bandwidth of 19 GHz for -15 dB return loss of the waveguide port. Several prototypes of the transitions were fabricated and measured. The measurement results agree very well with the simulation. The compact size and the simple fabrication enable the transition to be employed in a number of millimeter-wave applications.",
"title": ""
},
{
"docid": "3df95e4b2b1bb3dc80785b25c289da92",
"text": "The problem of efficiently locating previously known patterns in a time series database (i.e., query by content) has received much attention and may now largely be regarded as a solved problem. However, from a knowledge discovery viewpoint, a more interesting problem is the enumeration of previously unknown, frequently occurring patterns. We call such patterns “motifs”, because of their close analogy to their discrete counterparts in computation biology. An efficient motif discovery algorithm for time series would be useful as a tool for summarizing and visualizing massive time series databases. In addition it could be used as a subroutine in various other data mining tasks, including the discovery of association rules, clustering and classification. In this work we carefully motivate, then introduce, a nontrivial definition of time series motifs. We propose an efficient algorithm to discover them, and we demonstrate the utility and efficiency of our approach on several real world datasets.",
"title": ""
},
{
"docid": "b5e603ef5cae02919f7574d07347db38",
"text": "In this paper, we propose a novel approach for traffic accident anticipation through (i) Adaptive Loss for Early Anticipation (AdaLEA) and (ii) a large-scale self-annotated incident database for anticipation. The proposed AdaLEA allows a model to gradually learn an earlier anticipation as training progresses. The loss function adaptively assigns penalty weights depending on how early the model can anticipate a traffic accident at each epoch. Additionally, we construct a Near-miss Incident DataBase for anticipation. This database contains an enormous number of traffic near-miss incident videos and annotations for detail evaluation of two tasks, risk anticipation and risk-factor anticipation. In our experimental results, we found our proposal achieved the highest scores for risk anticipation (+6.6% better on mean average precision (mAP) and 2.36 sec earlier than previous work on the average time-to-collision (ATTC)) and risk-factor anticipation (+4.3% better on mAP and 0.70 sec earlier than previous work on ATTC).",
"title": ""
},
{
"docid": "7f253cfcfc6f2ef662782ce7a8bb7e9e",
"text": "In order to cope with real-world problems more effectively, we tend to design a decision support system for tuberculosis bacterium class identification. In this paper, we are concerned to propose a fuzzy diagnosability approach, which takes value between {0, 1} and based on observability of events, we formalized the construction of diagnoses that are used to perform diagnosis. In particular, we present a framework of the fuzzy expert system; discuss the suitability of artificial intelligence as a novel soft paradigm and reviews work from the literature for the development of a medical diagnostic system. The newly proposed approach allows us to deal with problems of diagnosability for both crisp and fuzzy value of input data. Accuracy analysis of designed decision support system based on demographic data was done by comparing expert knowledge and system generated response. This basic emblematic approach using fuzzy inference system is presented that describes a technique to forecast the existence of bacterium and provides support platform to pulmonary researchers in identifying the ailment effectively.",
"title": ""
},
{
"docid": "cd16a2df18ca2667da9b05b3417ecbc4",
"text": "Social network sites (SNS) have attracted considerable attention among teens and young adults who tend to connect and share common interest. Despite this popularity, the issue of students’ adoption of social network sites is still being unexplored fully in Malaysia. Driven by this factor, this study was designed to analyze the impact of social network sites on students’ academic performance in Malaysia. Using a conceptual approach, the study gathered that more students prefer the use of Facebook and Twitter in academic related discussions in complementingconventional classroom teaching and learning process. Thus, it is imperative that lecturers and academic institutions should implement the use of these applications in promoting academic excellence. As for profit oriented organizations such as bookshops, computer and smartphoneone vendors, they can promote their products through these applications and engage students to make purchases via them having understood that many students prefer and use Facebook, Twitter and Google+. The discussion from this study however does not represent the general sampling of Malaysian university students.",
"title": ""
},
{
"docid": "560b1d80377210ae6f60d375fa97560e",
"text": "We present the design and evaluation of a multi-articular soft exosuit that is portable, fully autonomous, and provides assistive torques to the wearer at the ankle and hip during walking. Traditional rigid exoskeletons can be challenging to perfectly align with a wearer’s biological joints and can have large inertias, which can lead to the wearer altering their natural motion patterns. Exosuits, in comparison, use textiles to create tensile forces over the body in parallel with the muscles, enabling them to be light and not restrict the wearer’s kinematics. We describe the biologically inspired design and function of our exosuit, including a simplified model of the suit’s architecture and its interaction with the body. A key feature of the exosuit is that it can generate forces passively due to the body’s motion, similar to the body’s ligaments and tendons. These passively-generated forces can be supplemented by actively contracting Bowden cables using geared electric motors, to create peak forces in the suit of up to 200N. We define the suit-human series stiffness as an important parameter in the design of the exosuit and measure it on several subjects, and we perform human subjects testing to determine the biomechanical and physiological effects of the suit. Results from a five-subject study showed a minimal effect on gait kinematics and an average best-case metabolic reduction of 6.4%, comparing suit worn unpowered vs powered, during loaded walking with 34.6kg of carried mass including the exosuit and actuators (2.0kg on both legs, 10.1kg total).",
"title": ""
},
{
"docid": "346ce9d0377f94f268479d578b700e9c",
"text": "From a system architecture perspective, 3D technology can satisfy the high memory bandwidth demands that future multicore/manycore architectures require. This article presents a 3D DRAM architecture design and the potential for using 3D DRAM stacking for both L2 cache and main memory in 3D multicore architecture.",
"title": ""
},
{
"docid": "404f1c68c097c74b120189af67bf00f5",
"text": "In 1991, a novel robot, MIT-MANUS, was introduced to study the potential that robots might assist in and quantify the neuro-rehabilitation of motor function. MIT-MANUS proved an excellent tool for shoulder and elbow rehabilitation in stroke patients, showing in clinical trials a reduction of impairment in movements confined to the exercised joints. This successful proof of principle as to additional targeted and intensive movement treatment prompted a test of robot training examining other limb segments. This paper focuses on a robot for wrist rehabilitation designed to provide three rotational degrees-of-freedom. The first clinical trial of the device will enroll 200 stroke survivors. Ultimately 160 stroke survivors will train with both the proximal shoulder and elbow MIT-MANUS robot, as well as with the novel distal wrist robot, in addition to 40 stroke survivor controls. So far 52 stroke patients have completed the robot training (ongoing protocol). Here, we report on the initial results on 36 of these volunteers. These results demonstrate that further improvement should be expected by adding additional training to other limb segments.",
"title": ""
},
{
"docid": "2cc7019de113899274080f538de0540c",
"text": "Chitosan was prepared from shrimp processing waste (shell) using the same chemical process as described for the other crustacean species with minor modification in the treatment condition. The physicochemical properties, molecular weight (165394g/mole), degree of deacetylation (75%), ash content as well as yield (15%) of prepared chitosan indicated that shrimp processing waste (shell) are a good source of chitosan. The water binding capacity (502%) and fat binding capacity (370%) of prepared chitosan are good agreement with the commercial chitosan. FT-IR spectra gave characteristics bands of –NH2 at 3443cm -1 and carbonyl at 1733cm. X-ray diffraction (XRD) patterns also indicated two characteristics crystalline peaks approximately at 10° and 20° (2θ).The surface morphology was examined using scanning electron microscopy (SEM). Index Term-Shrimp waste, Chitin, Deacetylation, Chitosan,",
"title": ""
},
{
"docid": "270e593aa89fb034d0de977fe6d618b2",
"text": "According to the website AcronymFinder.com which is one of the world's largest and most comprehensive dictionaries of acronyms, an average of 37 new human-edited acronym definitions are added every day. There are 379,918 acronyms with 4,766,899 definitions on that site up to now, and each acronym has 12.5 definitions on average. It is a very important research topic to identify what exactly an acronym means in a given context for document comprehension as well as for document retrieval. In this paper, we propose two word embedding based models for acronym disambiguation. Word embedding is to represent words in a continuous and multidimensional vector space, so that it is easy to calculate the semantic similarity between words by calculating the vector distance. We evaluate the models on MSH Dataset and ScienceWISE Dataset, and both models outperform the state-of-art methods on accuracy. The experimental results show that word embedding helps to improve acronym disambiguation.",
"title": ""
},
{
"docid": "79934e1cb9a6c07fb965da9674daeb69",
"text": "BACKGROUND\nAtrophic scars can complicate moderate and severe acne. There are, at present, several modalities of treatment with different results. Percutaneous collagen induction (PCI) has recently been proposed as a simple and effective therapeutic option for the management of atrophic scars.\n\n\nOBJECTIVE\nThe aim of our study was to analyze the efficacy and safety of percutaneous collagen induction for the treatment of acne scarring in different skin phototypes.\n\n\nMETHODS & MATERIALS\nA total of 60 patients of skin types phototype I to VI were included in the study. They were divided into three groups before beginning treatment: Group A (phototypes I to II), Group B (phototypes III to V), and Group C (phototypes VI). Each patient had three treatments at monthly intervals. The aesthetic improvement was evaluated by using a Global Aesthetic Improvement Scale (GAIS), and analyzed statistically by computerized image analysis of the patients' photographs. The differences in the GAIS scores in the different time-points of each group were found using the Wilcoxon's test for nonparametric-dependent continuous variables. Computerized image analysis of silicone replicas was used to quantify the irregularity of the surface micro-relief with Fast Fourier Transformation (FFT); average values of gray were obtained along the x- and y-axes. The calculated indexes were the integrals of areas arising from the distribution of pixels along the axes.\n\n\nRESULTS\nAll patients completed the study. The Wilcoxon's test for nonparametric-dependent continuous variables showed a statistically significant (p < 0.05) reduction in severity grade of acne scars at T5 compared to baseline (T1). The analysis of the surface micro-relief performed on skin replicas showed a decrease in the degree of irregularity of skin texture in all three groups of patients, with an average reduction of 31% in both axes after three sessions. 
No short- or long-term dyschromia was observed.\n\n\nCONCLUSION\nPCI offers a simple and safe modality to improve the appearance of acne scars without risk of dyspigmentation in patient of all skin types.",
"title": ""
},
{
"docid": "6f3223a26959bd80e7ec73700a232657",
"text": "Question answering over knowledge graph (QA-KG) aims to use facts in the knowledge graph (KG) to answer natural language questions. It helps end users more efficiently and more easily access the substantial and valuable knowledge in the KG, without knowing its data structures. QA-KG is a nontrivial problem since capturing the semantic meaning of natural language is difficult for a machine. Meanwhile, many knowledge graph embedding methods have been proposed. The key idea is to represent each predicate/entity as a low-dimensional vector, such that the relation information in the KG could be preserved. The learned vectors could benefit various applications such as KG completion and recommender systems. In this paper, we explore to use them to handle the QA-KG problem. However, this remains a challenging task since a predicate could be expressed in different ways in natural language questions. Also, the ambiguity of entity names and partial names makes the number of possible answers large. To bridge the gap, we propose an effective Knowledge Embedding based Question Answering (KEQA) framework. We focus on answering the most common types of questions, i.e., simple questions, in which each question could be answered by the machine straightforwardly if its single head entity and single predicate are correctly identified. To answer a simple question, instead of inferring its head entity and predicate directly, KEQA targets at jointly recovering the question's head entity, predicate, and tail entity representations in the KG embedding spaces. Based on a carefully-designed joint distance metric, the three learned vectors' closest fact in the KG is returned as the answer. Experiments on a widely-adopted benchmark demonstrate that the proposed KEQA outperforms the state-of-the-art QA-KG methods.",
"title": ""
},
{
"docid": "8a243ba8bb385373230719f733fb947b",
"text": "The insider threat is one of the most pernicious in computer security. Traditional approaches typically instrument systems with decoys or intrusion detection mechanisms to detect individuals who abuse their privileges (the quintessential \"insider\"). Such an attack requires that these agents have access to resources or data in order to corrupt or disclose them. In this work, we examine the application of process modeling and subsequent analyses to the insider problem. With process modeling, we first describe how a process works in formal terms. We then look at the agents who are carrying out particular tasks, perform different analyses to determine how the process can be compromised, and suggest countermeasures that can be incorporated into the process model to improve its resistance to insider attack.",
"title": ""
}
] | scidocsrr |
4e583fb9f1c2d96a77cfcb6e7bdf8715 | Impedance Measurement System for Determination of Capacitive Electrode Coupling | [
{
"docid": "8cfdd59ba7271d48ea0d41acc2ef795a",
"text": "The Cole single-dispersion impedance model is based upon a constant phase element (CPE), a conductance parameter as a dependent parameter and a characteristic time constant as an independent parameter. Usually however, the time constant of tissue or cell suspensions is conductance dependent, and so the Cole model is incompatible with general relaxation theory and not a model of first choice. An alternative model with conductance as a free parameter influencing the characteristic time constant of the biomaterial has been analyzed. With this free-conductance model it is possible to separately follow CPE and conductive processes, and the nominal time constant no longer corresponds to the apex of the circular arc in the complex plane.",
"title": ""
}
] | [
{
"docid": "cf5d0f7079bd7bc1a197573e28b5569a",
"text": "More and more people rely on mobile devices to access the Internet, which also increases the amount of private information that can be gathered from people's devices. Although today's smartphone operating systems are trying to provide a secure environment, they fail to provide users with adequate control over and visibility into how third-party applications use their private data. Whereas there are a few tools that alert users when applications leak private information, these tools are often hard to use by the average user or have other problems. To address these problems, we present PrivacyGuard, an open-source VPN-based platform for intercepting the network traffic of applications. PrivacyGuard requires neither root permissions nor any knowledge about VPN technology from its users. PrivacyGuard does not significantly increase the trusted computing base since PrivacyGuard runs in its entirety on the local device and traffic is not routed through a remote VPN server. We implement PrivacyGuard on the Android platform by taking advantage of the VPNService class provided by the Android SDK.\n PrivacyGuard is configurable, extensible, and useful for many different purposes. We investigate its use for detecting the leakage of multiple types of sensitive data, such as a phone's IMEI number or location data. PrivacyGuard also supports modifying the leaked information and replacing it with crafted data for privacy protection. According to our experiments, PrivacyGuard can detect more leakage incidents by applications and advertisement libraries than TaintDroid. We also demonstrate that PrivacyGuard has reasonable overhead on network performance and almost no overhead on battery consumption.",
"title": ""
},
{
"docid": "93b87e8dde0de0c1b198f6a073858d80",
"text": "The current project is an initial attempt at validating the Virtual Reality Cognitive Performance Assessment Test (VRCPAT), a virtual environment-based measure of learning and memory. To examine convergent and discriminant validity, a multitrait-multimethod matrix was used in which we hypothesized that the VRCPAT's total learning and memory scores would correlate with other neuropsychological measures involving learning and memory but not with measures involving potential confounds (i.e., executive functions; attention; processing speed; and verbal fluency). Using a sequential hierarchical strategy, each stage of test development did not proceed until specified criteria were met. The 15-minute VRCPAT battery and a 1.5-hour in-person neuropsychological assessment were conducted with a sample of 30 healthy adults, between the ages of 21 and 36, that included equivalent distributions of men and women from ethnically diverse populations. Results supported both convergent and discriminant validity. That is, findings suggest that the VRCPAT measures a capacity that is (a) consistent with that assessed by traditional paper-and-pencil measures involving learning and memory and (b) inconsistent with that assessed by traditional paper-and-pencil measures assessing neurocognitive domains traditionally assumed to be other than learning and memory. We conclude that the VRCPAT is a valid test that provides a unique opportunity to reliably and efficiently study memory function within an ecologically valid environment.",
"title": ""
},
{
"docid": "ec6b1d26b06adc99092659b4a511da44",
"text": "Social identity threat is the notion that one of a person's many social identities may be at risk of being devalued in a particular context (C. M. Steele, S. J. Spencer, & J. Aronson, 2002). The authors suggest that in domains in which women are already negatively stereotyped, interacting with a sexist man can trigger social identity threat, undermining women's performance. In Study 1, male engineering students who scored highly on a subtle measure of sexism behaved in a dominant and sexually interested way toward an ostensible female classmate. In Studies 2 and 3, female engineering students who interacted with such sexist men, or with confederates trained to behave in the same way, performed worse on an engineering test than did women who interacted with nonsexist men. Study 4 replicated this finding and showed that women's underperformance did not extend to an English test, an area in which women are not negatively stereotyped. Study 5 showed that interacting with sexist men leads women to suppress concerns about gender stereotypes, an established mechanism of stereotype threat. Discussion addresses implications for social identity threat and for women's performance in school and at work.",
"title": ""
},
{
"docid": "ad11946cfb127e19b0ee80f5d77dbe93",
"text": "Air quality has great impact on individual and community health. In this demonstration, we present Citisense: a mobile air quality system that enables users to track their personal air quality exposure for discovery, self-reflection, and sharing within their local communities and online social networks.",
"title": ""
},
{
"docid": "10ae6cdb445e4faf1e6bed5cad6eb3ba",
"text": "It this paper we revisit the fast stylization method introduced in Ulyanov et al. (2016). We show how a small change in the stylization architecture results in a significant qualitative improvement in the generated images. The change is limited to swapping batch normalization with instance normalization, and to apply the latter both at training and testing times. The resulting method can be used to train high-performance architectures for real-time image generation. The code will be made available at https://github.com/DmitryUlyanov/texture_nets.",
"title": ""
},
{
"docid": "0fd6b9eb35de8d91d28544920e525ee6",
"text": "A great many control schemes for a robot manipulator interacting with the environment have been developed in the literature in the past two decades. This paper is aimed at presenting a survey of robot interaction control schemes for a manipulator, the end effector of which comes in contact with a compliant surface. A salient feature of the work is the implementation of the schemes on an industrial robot with open control architecture equipped with a wrist force sensor. Two classes of control strategies are considered, namely, those based on static model-based compensation and those based on dynamic model-based compensation. The former provide a good steadystate behavior, while the latter enhance the behavior during the transient. The performance of the various schemes is compared in the light of disturbance rejection, and a thorough analysis is developed by means of a number of case studies.",
"title": ""
},
{
"docid": "2e5981a41d13ee2d588ee0e9fe04e1ec",
"text": "Malicious software (malware) has been extensively employed for illegal purposes and thousands of new samples are discovered every day. The ability to classify samples with similar characteristics into families makes possible to create mitigation strategies that work for a whole class of programs. In this paper, we present a malware family classification approach using VGG16 deep neural network’s bottleneck features. Malware samples are represented as byteplot grayscale images and the convolutional layers of a VGG16 deep neural network pre-trained on the ImageNet dataset is used for bottleneck features extraction. These features are used to train a SVM classifier for the malware family classification task. The experimental results on a dataset comprising 10,136 samples from 20 different families showed that our approach can effectively be used to classify malware families with an accuracy of 92.97%, outperforming similar approaches proposed in the literature which require feature engineering and considerable domain expertise.",
"title": ""
},
{
"docid": "945c5c7cd9eb2046c1b164e64318e52f",
"text": "This thesis explores the design and application of artificial immune systems (AISs), problem-solving systems inspired by the human and other immune systems. AISs to date have largely been modelled on the biological adaptive immune system and have taken little inspiration from the innate immune system. The first part of this thesis examines the biological innate immune system, which controls the adaptive immune system. The importance of the innate immune system suggests that AISs should also incorporate models of the innate immune system as well as the adaptive immune system. This thesis presents and discusses a number of design principles for AISs which are modelled on both innate and adaptive immunity. These novel design principles provided a structured framework for developing AISs which incorporate innate and adaptive immune systems in general. These design principles are used to build a software system which allows such AISs to be implemented and explored. AISs, as well as being inspired by the biological immune system, are also built to solve problems. In this thesis, using the software system and design principles we have developed, we implement several novel AISs and apply them to the problem of detecting attacks on computer systems. These AISs monitor programs running on a computer and detect whether the program is behaving abnormally or being attacked. The development of these AISs shows in more detail how AISs built on the design principles can be instantiated. In particular, we show how the use of AISs which incorporate both innate and adaptive immune system mechanisms can be used to reduce the number of false alerts and improve the performance of current approaches.",
"title": ""
},
{
"docid": "08f26c702f7d0bb5e21b51d7681869a2",
"text": "Millions of posts are being generated in real-time by users in social networking services, such as Twitter. However, a considerable number of those posts are mundane posts that are of interest to the authors and possibly their friends only. This paper investigates the problem of automatically discovering valuable posts that may be of potential interest to a wider audience. Specifically, we model the structure of Twitter as a graph consisting of users and posts as nodes and retweet relations between the nodes as edges. We propose a variant of the HITS algorithm for producing a static ranking of posts. Experimental results on real world data demonstrate that our method can achieve better performance than several baseline methods.",
"title": ""
},
{
"docid": "878bdefc419be3da8d9e18111d26a74f",
"text": "PURPOSE\nTo estimate prevalence and chronicity of insomnia and the impact of chronic insomnia on health and functioning of adolescents.\n\n\nMETHODS\nData were collected from 4175 youths 11-17 at baseline and 3134 a year later sampled from managed care groups in a large metropolitan area. Insomnia was assessed by youth-reported DSM-IV symptom criteria. Outcomes are three measures of somatic health, three measures of mental health, two measures of substance use, three measures of interpersonal problems, and three of daily activities.\n\n\nRESULTS\nOver one-fourth reported one or more symptoms of insomnia at baseline and about 5% met diagnostic criteria for insomnia. Almost 46% of those who reported one or more symptoms of insomnia in Wave 1 continued to be cases at Wave 2 and 24% met DSM-IV symptom criteria for chronic insomnia (cases in Wave 1 were also cases in Wave 2). Multivariate analyses found chronic insomnia increased subsequent risk for somatic health problems, interpersonal problems, psychological problems, and daily activities. Significant odds (p < .05) ranged from 1.6 to 5.6 for poor outcomes. These results are the first reported on chronic insomnia among youths, and corroborate, using prospective data, previous findings on correlates of disturbed sleep based on cross-sectional studies.\n\n\nCONCLUSIONS\nInsomnia is both common and chronic among adolescents. The data indicate that the burden of insomnia is comparable to that of other psychiatric disorders such as mood, anxiety, disruptive, and substance use disorders. Chronic insomnia severely impacts future health and functioning of youths. Those with chronic insomnia are more likely to seek medical care. These data suggest primary care settings might provide a venue for screening and early intervention for adolescent insomnia.",
"title": ""
},
{
"docid": "cccc206a025f6ae2a47a4068b6ded4c6",
"text": "Most existing methods for audio sentiment analysis use automatic speech recognition to convert speech to text, and feed the textual input to text-based sentiment classifiers. This study shows that such methods may not be optimal, and proposes an alternate architecture where a single keyword spotting system (KWS) is developed for sentiment detection. In the new architecture, the text-based sentiment classifier is utilized to automatically determine the most powerful sentiment-bearing terms, which is then used as the term list for KWS. In order to obtain a compact yet powerful term list, a new method is proposed to reduce text-based sentiment classifier model complexity while maintaining good classification accuracy. Finally, the term list information is utilized to build a more focused language model for the speech recognition system. The result is a single integrated solution which is focused on vocabulary that directly impacts classification. The proposed solution is evaluated on videos from YouTube.com and UT-Opinion corpus (which contains naturalistic opinionated audio collected in real-world conditions). Our experimental results show that the KWS based system significantly outperforms the traditional architecture in difficult practical tasks.",
"title": ""
},
{
"docid": "51c8570d20a43ed923cfa884b55df8c9",
"text": "Electricity is a non-storable commodity for consumers, while hydropower producers may store future electricity as water in their reservoirs. Consequently, there is an asymmetry between producers’ and consumers’ possibilities of spot-futures arbitrage. Furthermore, marginal warehousing costs in hydro based electricity production are zero as long as water reservoirs are not full, jumping to the prevailing spot price in the case that dams are filled up and water is running over the edge without being utilised. In this explorative study, we analyse price relationships at the world’s largest multinational market place for electricity (Nord Pool). We find tha the futures price at Nord Pool periodically has been outside its (theoretical) arbitrage limits. Furthermore, the futures price and the basis have been biased and poor predictors of subsequent spot price levels and changes, respectively. Forecast errors have been systematic, and the futures price does not seem to incorporate available information. The findings indicate non-rational pricing behaviour. Alternatively, the results may represent circumstantial evidence of market power on the producer side.",
"title": ""
},
{
"docid": "56f18b39a740dd65fc2907cdef90ac99",
"text": "This paper describes a dynamic artificial neural network based mobile robot motion and path planning system. The method is able to navigate a robot car on flat surface among static and moving obstacles, from any starting point to any endpoint. The motion controlling ANN is trained online with an extended backpropagation through time algorithm, which uses potential fields for obstacle avoidance. The paths of the moving obstacles are predicted with other ANNs for better obstacle avoidance. The method is presented through the realization of the navigation system of a mobile robot.",
"title": ""
},
{
"docid": "205a5a9a61b6ac992f01c8c2fc09678a",
"text": "We present the OWL API, a high level Application Programming Interface (API) for working with OWL ontologies. The OWL API is closely aligned with the OWL 2 structural specification. It supports parsing and rendering in the syntaxes defined in the W3C specification (Functional Syntax, RDF/XML, OWL/XML and the Manchester OWL Syntax); manipulation of ontological structures; and the use of reasoning engines. The reference implementation of the OWL API, written in Java, includes validators for the various OWL 2 profiles OWL 2 QL, OWL 2 EL and OWL 2 RL. The OWL API has widespread usage in a variety of tools and applications.",
"title": ""
},
{
"docid": "6df423e9d21b6505b8205792f6cd5f85",
"text": "The effective use of technologies supporting decision making is essential to companies’ survival. Recent studies analyzed social media technologies (SMT) in the context of small- and medium-sized enterprises (SMEs), contributing to the discussion on SMT benefits from the marketing perspective. This article focuses on the effects of SMT use on innovation. Our findings provide empirical evidence on the positive effects of SMT use for acquiring external information and for sharing knowledge and innovation performance.",
"title": ""
},
{
"docid": "dcd919590e0b6b52ea3a6be7378d5d25",
"text": "This work, concerning paraphrase identification task, on one hand contributes to expanding deep learning embeddings to include continuous and discontinuous linguistic phrases. On the other hand, it comes up with a new scheme TF-KLD-KNN to learn the discriminative weights of words and phrases specific to paraphrase task, so that a weighted sum of embeddings can represent sentences more effectively. Based on these two innovations we get competitive state-of-the-art performance on paraphrase identification.",
"title": ""
},
{
"docid": "a3fe8cf8b2689269fe8a1050cf7789d2",
"text": "A boosting algorithm, AdaBoost.RT, is proposed for regression problems. The idea is to filter out examples with a relative estimation error that is higher than the pre-set threshold value, and then follow the AdaBoost procedure. Thus it requires selecting the sub-optimal value of the relative error threshold to demarcate predictions from the predictor as correct or incorrect. Some experimental results using the M5 model tree as a weak learning machine for benchmark data sets and for hydrological modeling are reported, and compared to other boosting methods, bagging and artificial neural networks, and to a single M5 model tree. AdaBoost.RT is proved to perform better on most of the considered data sets.",
"title": ""
},
{
"docid": "170e7a72a160951e880f18295d100430",
"text": "In this paper, we introduce an embedding model, named CapsE, exploring a capsule network to model relationship triples (subject, relation, object). Our CapsE represents each triple as a 3-column matrix where each column vector represents the embedding of an element in the triple. This 3-column matrix is then fed to a convolution layer where multiple filters are operated to generate different feature maps. These feature maps are used to construct capsules in the first capsule layer. Capsule layers are connected via dynamic routing mechanism. The last capsule layer consists of only one capsule to produce a vector output. The length of this vector output is used to measure the plausibility of the triple. Our proposed CapsE obtains state-of-the-art link prediction results for knowledge graph completion on two benchmark datasets: WN18RR and FB15k-237, and outperforms strong search personalization baselines on SEARCH17 dataset.",
"title": ""
},
{
"docid": "d0623e90f8bce6818c6cb2f150757659",
"text": "In this paper, an efficient offline signature verification method based on an interval symbolic representation and a fuzzy similarity measure is proposed. In the feature extraction step, a set of local binary pattern-based features is computed from both the signature image and its under-sampled bitmap. Interval-valued symbolic data is then created for each feature in every signature class. As a result, a signature model composed of a set of interval values (corresponding to the number of features) is obtained for each individual’s handwritten signature class. A novel fuzzy similarity measure is further proposed to compute the similarity between a test sample signature and the corresponding interval-valued symbolic model for the verification of the test sample. To evaluate the proposed verification approach, a benchmark offline English signature data set (GPDS-300) and a large data set (BHSig260) composed of Bangla and Hindi offline signatures were used. A comparison of our results with some recent signature verification methods available in the literature was provided in terms of average error rate and we noted that the proposed method always outperforms when the number of training samples is eight or more.",
"title": ""
},
{
"docid": "b8b3761b658e37783afb1157ef0844b5",
"text": "Biometric recognition refers to the automated recognition of individuals based on their biological and behavioral characteristics such as fingerprint, face, iris, and voice. The first scientific paper on automated fingerprint matching was published by Mitchell Trauring in the journal Nature in 1963. The first objective of this paper is to document the significant progress that has been achieved in the field of biometric recognition in the past 50 years since Trauring’s landmark paper. This progress has enabled current state-of-the-art biometric systems to accurately recognize individuals based on biometric trait(s) acquired under controlled environmental conditions from cooperative users. Despite this progress, a number of challenging issues continue to inhibit the full potential of biometrics to automatically recognize humans. The second objective of this paper is to enlist such challenges, analyze the solutions proposed to overcome them, and highlight the research opportunities in this field. One of the foremost challenges is the design of robust algorithms for representing and matching biometric samples obtained from uncooperative subjects under unconstrained environmental conditions (e.g., recognizing faces in a crowd). In addition, fundamental questions such as the distinctiveness and persistence of biometric traits need greater attention. Problems related to the security of biometric data and robustness of the biometric system against spoofing and obfuscation attacks, also remain unsolved. Finally, larger system-level issues like usability, user privacy concerns, integration with the end application, and return on investment have not been adequately addressed. Unlocking the full potential of biometrics through inter-disciplinary research in the above areas will not only lead to widespread adoption of this promising technology, but will also result in wider user acceptance and societal impact. © 2016 Elsevier Ltd. All rights reserved.",
"title": ""
}
] | scidocsrr |
7e64e4d4a7a6540a565c08e05c87cde6 | Smart grid standards for home and building automation | [
{
"docid": "7edb8a803734f4eb9418b8c34b1bf07c",
"text": "Building automation systems (BAS) provide automatic control of the conditions of indoor environments. The historical root and still core domain of BAS is the automation of heating, ventilation and air-conditioning systems in large functional buildings. Their primary goal is to realize significant savings in energy and reduce cost. Yet the reach of BAS has extended to include information from all kinds of building systems, working toward the goal of \"intelligent buildings\". Since these systems are diverse by tradition, integration issues are of particular importance. When compared with the field of industrial automation, building automation exhibits specific, differing characteristics. The present paper introduces the task of building automation and the systems and communications infrastructure necessary to address it. Basic requirements are covered as well as standard application models and typical services. An overview of relevant standards is given, including BACnet, LonWorks and EIB/KNX as open systems of key significance in the building automation domain.",
"title": ""
}
] | [
{
"docid": "72f9891b711ebc261fc081a0b356c31b",
"text": "This paper presents a flat, high gain, wide scanning, broadband continuous transverse stub (CTS) array. The design procedure, the fabrication, and an exhaustive antenna characterization are described in details. The array comprises 16 radiating slots and is fed by a corporate-feed network in hollow parallel plate waveguide (PPW) technology. A pillbox-based linear source illuminates the corporate network and allows for beam steering. The antenna is designed by using an ad hoc mode matching code recently developed for CTS arrays, providing design guidelines. The assembly technique ensures the electrical contact among the various stages of the network without using any electromagnetic choke and any bonding process. The main beam of the antenna is mechanically steered over ±40° in elevation, by moving a compact horn within the focal plane of the pillbox feeding system. Excellent performances are achieved. The features of the beam are stable within the design 27.5-31 GHz band and beyond, in the entire Ka-band (26.5-40 GHz). An antenna gain of about 29 dBi is measured at broadside at 29.25 GHz and scan losses lower than 2 dB are reported at ±40°. The antenna efficiency exceeds 80% in the whole scan range. The very good agreement between measurements and simulations validates the design procedure. The proposed design is suitable for Satcom Ka-band terminals in moving platforms, e.g., trains and planes, and also for mobile ground stations, as a multibeam sectorial antenna.",
"title": ""
},
{
"docid": "78ffcec1e3d5164d7360aa8a93848fc4",
"text": "During a long period of time we are combating overfitting in the CNN training process with model regularization, including weight decay, model averaging, data augmentation, etc. In this paper, we present DisturbLabel, an extremely simple algorithm which randomly replaces a part of labels as incorrect values in each iteration. Although it seems weird to intentionally generate incorrect training labels, we show that DisturbLabel prevents the network training from over-fitting by implicitly averaging over exponentially many networks which are trained with different label sets. To the best of our knowledge, DisturbLabel serves as the first work which adds noises on the loss layer. Meanwhile, DisturbLabel cooperates well with Dropout to provide complementary regularization functions. Experiments demonstrate competitive recognition results on several popular image recognition datasets.",
"title": ""
},
{
"docid": "5820a54cf9235a08fbf3d6221c42f1d0",
"text": "Restoring nasal lining is one of the essential parts during reconstruction of full-thickness defects of the nose. Without a sufficient nasal lining the whole reconstruction will fail. Nasal lining has to sufficiently cover the shaping subsurface framework. But in addition, lining must not compromise or even block nasal ventilation. This article demonstrates different possibilities of lining reconstruction. The use of composite grafts for small rim defects is described. The limits and technical components for application of skin grafts are discussed. Then the advantages and limitations of endonasal, perinasal, and hingeover flaps are demonstrated. Strategies to restore lining with one or two forehead flaps are presented. Finally, the possibilities and technical aspects to reconstruct nasal lining with a forearm flap are demonstrated. Technical details are explained by intraoperative pictures. Clinical cases are shown to illustrate the different approaches and should help to understand the process of decision making. It is concluded that although the lining cannot be seen after reconstruction of the cover it remains one of the key components for nasal reconstruction. When dealing with full-thickness nasal defects, there is no way to avoid learning how to restore nasal lining.",
"title": ""
},
{
"docid": "bcea969179b1701179dac2087e57e749",
"text": "We present a simple yet effective unsupervised domain adaptation method that can be generally applied for different NLP tasks. Our method uses unlabeled target domain instances to induce a set of instance similarity features. These features are then combined with the original features to represent labeled source domain instances. Using three NLP tasks, we show that our method consistently outperforms a few baselines, including SCL, an existing general unsupervised domain adaptation method widely used in NLP. More importantly, our method is very easy to implement and incurs much less computational cost than SCL.",
"title": ""
},
{
"docid": "93b880dbc635a49ffc7a9e6906b094f6",
"text": "Abstract machines provide a certain separation between platform-dependent and platform-independent concerns in compilation. Many of the differences between architectures are encapsulated in the specific abstract machine implementation and the bytecode is left largely architecture independent. Taking advantage of this fact, we present a framework for estimating upper and lower bounds on the execution times of logic programs running on a bytecode-based abstract machine. Our approach includes a one-time, program-independent profiling stage which calculates constants or functions bounding the execution time of each abstract machine instruction. Then, a compile-time cost estimation phase, using the instruction timing information, infers expressions giving platform-dependent upper and lower bounds on actual execution time as functions of input data sizes for each program. Working at the abstract machine level makes it possible to take into account low-level issues in new architectures and platforms by just reexecuting the calibration stage instead of having to tailor the analysis for each architecture and platform. Applications of such predicted execution times include debugging/verification of time properties, certification of time properties in mobile code, granularity control in parallel/distributed computing, and resource-oriented specialization",
"title": ""
},
{
"docid": "0efe3ccc1c45121c5167d3792a7fcd25",
"text": "This paper addresses the motion planning problem while considering Human-Robot Interaction (HRI) constraints. The proposed planner generates collision-free paths that are acceptable and legible to the human. The method extends our previous work on human-aware path planning to cluttered environments. A randomized cost-based exploration method provides an initial path that is relevant with respect to HRI and workspace constraints. The quality of the path is further improved with a local path-optimization method. Simulation results on mobile manipulators in the presence of humans demonstrate the overall efficacy of the approach.",
"title": ""
},
{
"docid": "0d1f88dbd4a04748a83fe741a86518c1",
"text": "The focus of this paper is to investigate how writing computer programs can help children develop their storytelling and creative writing abilities. The process of writing a program---coding---has long been considered only in terms of computer science, but such coding is also reflective of the imaginative and narrative elements of fiction writing workshops. Writing to program can also serve as programming to write, in which a child learns the importance of sequence, structure, and clarity of expression---three aspects characteristic of effective coding and good storytelling alike. While there have been efforts examining how learning to write code can be facilitated by storytelling, there has been little exploration as to how such creative coding can also be directed to teach students about the narrative and storytelling process. Using the introductory programming language Scratch, this paper explores the potential of having children create their own digital stories with the software and how the narrative structure of these stories offers kids the opportunity to better understand the process of expanding an idea into the arc of a story.",
"title": ""
},
{
"docid": "0daa43669ae68a81e5eb71db900976c6",
"text": "Fertilizer plays an important role in maintaining soil fertility, increasing yields and improving harvest quality. However, a significant portion of fertilizers are lost, increasing agricultural cost, wasting energy and polluting the environment, which are challenges for the sustainability of modern agriculture. To meet the demands of improving yields without compromising the environment, environmentally friendly fertilizers (EFFs) have been developed. EFFs are fertilizers that can reduce environmental pollution from nutrient loss by retarding, or even controlling, the release of nutrients into soil. Most of EFFs are employed in the form of coated fertilizers. The application of degradable natural materials as a coating when amending soils is the focus of EFF research. Here, we review recent studies on materials used in EFFs and their effects on the environment. The major findings covered in this review are as follows: 1) EFF coatings can prevent urea exposure in water and soil by serving as a physical barrier, thereby reducing the urea hydrolysis rate and decreasing nitrogen oxide (NOx) and dinitrogen (N2) emissions, 2) EFFs can increase the soil organic matter content, 3) hydrogel/superabsorbent coated EFFs can buffer soil acidity or alkalinity and lead to an optimal pH for plants, and 4) hydrogel/superabsorbent coated EFFs can improve water-retention and water-holding capacity of soil. In conclusion, EFFs play an important role in enhancing nutrients efficiency and reducing environmental pollution.",
"title": ""
},
{
"docid": "e83ad9ba6d0d134b9691714fcdfe165e",
"text": "With the adoption of a globalized and distributed IC design flow, IP piracy, reverse engineering, and counterfeiting threats are becoming more prevalent. Logic obfuscation techniques including logic locking and IC camouflaging have been developed to address these emergent challenges. A major challenge for logic locking and camouflaging techniques is to resist Boolean satisfiability (SAT) based attacks that can circumvent state-of-the-art solutions within minutes. Over the past year, multiple SAT attack resilient solutions such as Anti-SAT and AND-tree insertion (ATI) have been presented. In this paper, we perform a security analysis of these countermeasures and show that they leave structural traces behind in their attempts to thwart the SAT attack. We present three attacks, namely “signal probability skew” (SPS) attack, “AppSAT guided removal” (AGR) attack, and “sensitization guided SAT” (SGS) attack, that can break Anti-SAT and ATI, within minutes.",
"title": ""
},
{
"docid": "0e1dfbbc366ae86a0bea1dad2a97d467",
"text": "The discriminative power of modern deep learning models for 3D human action recognition is growing ever so potent. In conjunction with the recent resurgence of 3D human action representation with 3D skeletons, the quality and the pace of recent progress have been significant. However, the inner workings of state-of-the-art learning based methods in 3D human action recognition still remain mostly black-box. In this work, we propose to use a new class of models known as Temporal Convolutional Neural Networks (TCN) for 3D human action recognition. TCN provides us a way to explicitly learn readily interpretable spatio-temporal representations for 3D human action recognition. Through this work, we wish to take a step towards a spatio-temporal model that is easier to understand, explain and interpret. The resulting model, Res-TCN, achieves state-of-the-art results on the largest 3D human action recognition dataset, NTU-RGBD.",
"title": ""
},
{
"docid": "70180fa9be4c8c87ce119772b2bcca23",
"text": "The energy domain currently struggles with radical legal and technological changes, such as, smart meters. This results in new use cases which can be implemented based on business process technology. Understanding and automating business processes requires to model and test them. However, existing process testing approaches frequently struggle with the testing of process resources, such as ERP systems, and negative testing. Hence, this work presents a toolchain which tackles that limitations. The approach uses an open source process engine to generate event logs and applies process mining techniques in a novel way.",
"title": ""
},
{
"docid": "6ab046862d1c5329b0538a85dd0b4ccd",
"text": "In this study, a photosynthesis-fermentation model was proposed to merge the positive aspects of autotrophs and heterotrophs. Microalga Chlorella protothecoides was grown autotrophically for CO(2) fixation and then metabolized heterotrophically for oil accumulation. Compared to typical heterotrophic metabolism, 69% higher lipid yield on glucose was achieved at the fermentation stage in the photosynthesis-fermentation model. An elementary flux mode study suggested that the enzyme Rubisco-catalyzed CO(2) re-fixation, enhancing carbon efficiency from sugar to oil. This result may explain the higher lipid yield. In this new model, 61.5% less CO(2) was released compared with typical heterotrophic metabolism. Immunoblotting and activity assay further showed that Rubisco functioned in sugar-bleaching cells at the fermentation stage. Overall, the photosynthesis-fermentation model with double CO(2) fixation in both photosynthesis and fermentation stages, enhances carbon conversion ratio of sugar to oil and thus provides an efficient approach for the production of algal lipid.",
"title": ""
},
{
"docid": "fd28f048f6ac4a7894022d0afee871f3",
"text": "Graph clustering and graph outlier detection have been studied extensively on plain graphs, with various applications. Recently, algorithms have been extended to graphs with attributes as often observed in the real-world. However, all of these techniques fail to incorporate the user preference into graph mining, and thus, lack the ability to steer algorithms to more interesting parts of the attributed graph. In this work, we overcome this limitation and introduce a novel user-oriented approach for mining attributed graphs. The key aspect of our approach is to infer user preference by the so-called focus attributes through a set of user-provided exemplar nodes. In this new problem setting, clusters and outliers are then simultaneously mined according to this user preference. Specifically, our FocusCO algorithm identifies the focus, extracts focused clusters and detects outliers. Moreover, FocusCO scales well with graph size, since we perform a local clustering of interest to the user rather than global partitioning of the entire graph. We show the effectiveness and scalability of our method on synthetic and real-world graphs, as compared to both existing graph clustering and outlier detection approaches.",
"title": ""
},
{
"docid": "305dac2ffd4a04fa0ef9ca727edc6247",
"text": "A new control strategy for obtaining the maximum traction force of electric vehicles with individual rear-wheel drive is presented. A sliding-mode observer is proposed to estimate the wheel slip and vehicle velocity under unknown road conditions by measuring only the wheel speeds. The proposed observer is based on the LuGre dynamic friction model and allows the maximum transmissible torque for each driven wheel to be obtained instantaneously. The maximum torque can be determined at any operating point and road condition, thus avoiding wheel skid. The proposed strategy maximizes the traction force while avoiding tire skid by controlling the torque of each traction motor. Simulation results using a complete vehicle model under different road conditions are presented to validate the proposed strategy.",
"title": ""
},
{
"docid": "31d22f8a296b3054d1beff53a7a495a0",
"text": "Spectral Matching (SM) is a computationally efficient approach to approximate the solution of pairwise matching problems that are NP-hard. In this paper, we present a probabilistic interpretation of spectral matching schemes and derive a novel Probabilistic Matching (PM) scheme that is shown to outperform previous approaches. We show that spectral matching can be interpreted as a Maximum Likelihood (ML) estimate of the assignment probabilities and that the Graduated Assignment (GA) algorithm can be cast as a Maximum a Posteriori (MAP) estimator. Based on this analysis, we derive a ranking scheme for spectral matchings based on their reliability, and propose a novel iterative probabilistic matching algorithm that relaxes some of the implicit assumptions used in prior works. We experimentally show our approaches to outperform previous schemes when applied to exhaustive synthetic tests as well as the analysis of real image sequences.",
"title": ""
},
{
"docid": "32e33ef33a9ac42b856d49b270113ba2",
"text": "Generalized frequency division multiplexing (GFDM) is a promising candidate waveform for next generation wireless communications systems. Unlike conventional orthogonal frequency division multiplexing (OFDM) based systems, it is a non-orthogonal waveform subject to inter-carrier and intersymbol interference. In multiple-input multiple-output (MIMO) systems, the additional inter-antenna interference also takes place. The presence of such three-dimensional interference challenges the receiver design. This paper addresses the MIMO-GFDM channel estimation problem with the aid of known reference signals also referred as pilots. Specifically, the received signal is expressed as the joint effect of the pilot part, unknown data part and noise part. On top of this formulation, least squares (LS) and linear minimum mean square error (LMMSE) estimators are presented, while their performance is evaluated for various pilot arrangements.",
"title": ""
},
{
"docid": "44c2cfd9dfacee55c7ff4bdca45024cd",
"text": "An integrative computational methodology is developed for the management of nonpoint source pollution from watersheds. The associated decision support system is based on an interface between evolutionary algorithms (EAs) and a comprehensive watershed simulation model, and is capable of identifying optimal or near-optimal land use patterns to satisfy objectives. Specifically, a genetic algorithm (GA) is linked with the U.S. Department of Agriculture’s Soil and Water Assessment Tool (SWAT) for single objective evaluations, and a Strength Pareto Evolutionary Algorithm has been integrated with SWAT for multiobjective optimization. The model can be operated at a small spatial scale, such as a farm field, or on a larger watershed scale. A secondary model that also uses a GA is developed for calibration of the simulation model. Sensitivity analysis and parameterization are carried out in a preliminary step to identify model parameters that need to be calibrated. Application to a demonstration watershed located in Southern Illinois reveals the capability of the model in achieving its intended goals. However, the model is found to be computationally demanding as a direct consequence of repeated SWAT simulations during the search for favorable solutions. An artificial neural network (ANN) has been developed to mimic SWAT outputs and ultimately replace it during the search process. Replacement of SWAT by the ANN results in an 84% reduction in computational time required to identify final land use patterns. The ANN model is trained using a hybrid of evolutionary programming (EP) and the back propagation (BP) algorithms. The hybrid algorithm was found to be more effective and efficient than either EP or BP alone. Overall, this study demonstrates the powerful and multifaceted role that EAs and artificial intelligence techniques could play in solving the complex and realistic problems of environmental and water resources systems. CE Database subject headings: Algorithms; Neural networks; Watershed management; Pollution control; Calibration; Computation.",
"title": ""
},
{
"docid": "64fbd2207a383bc4b04c66e8ee867922",
"text": "Ultra compact, short pulse, high voltage, high current pulsers are needed for a variety of non-linear electrical and optical applications. With a fast risetime and short pulse width, these drivers are capable of producing sub-nanosecond electrical and thus optical pulses by gain switching semiconductor laser diodes. Gain-switching of laser diodes requires a sub-nanosecond pulser capable of driving a low output impedance (5 /spl Omega/ or less). Optical pulses obtained had risetimes as fast as 20 ps. The designed pulsers also could be used for triggering photo-conductive semiconductor switches (PCSS), gating high speed optical imaging systems, and providing electrical and optical sources for fast transient sensor applications. Building on concepts from Lawrence Livermore National Laboratory, the development of pulsers based on solid state avalanche transistors was adapted to drive low impedances. As each successive stage is avalanched in the circuit, the amount of overvoltage increases, increasing the switching speed and improving the turn on time of the output pulse at the final stage. The output of the pulser is coupled into the load using a Blumlein configuration.",
"title": ""
},
{
"docid": "59bf93d2242104de07a960e944838118",
"text": "Software requirements specifications (SRS) are usually validated by inspections, in which several reviewers read all or part of the specification and search for defects. We hypothesize that different methods for conducting these searches may have significantly different rates of success. Using a controlled experiment, we show that a Scenario-based detection method, in which each reviewer executes a specific procedure to discover a particular class of defects has a higher defect detection rate than either Ad Hoc or Checklist methods. We describe the design, execution, and analysis of the experiment so others may reproduce it and test our results for different kinds of software developments and different populations of software engineers.",
"title": ""
},
{
"docid": "9bd06a8a8c490cd8b686169d1a984a14",
"text": "This review of research explores characteristics associated with massive open online courses (MOOCs). Three key characteristics are revealed: varied definitions of openness, barriers to persistence, and a distinct structure that takes the form as one of two pedagogical approaches. The concept of openness shifts among different MOOCs, models, researchers, and facilitators. The high dropout rates show that the barriers to learning are a significant challenge. Research has focused on engagement, motivation, and presence to mitigate risks of learner isolation. The pedagogical structure of the connectivist MOOC model (cMOOC) incorporates a social, distributed, networked approach and significant learner autonomy that is geared towards adult lifelong learners interested in personal or professional development. This connectivist approach relates to situated and social learning theories such as social constructivism (Kop, 2011). By contrast, the design of the Stanford Artificial Intelligence (AI) model (xMOOC) uses conventional directed instruction in the context of formal postsecondary educational institutions. This traditional pedagogical approach is categorized as cognitive-behaviorist (Rodriguez, 2012). These two distinct MOOC models attract different audiences, use different learning approaches, and employ different teaching methods. The purpose of this review is to synthesize the research describing the phenomenon of MOOCs in informal and postsecondary online learning. Massive open online courses (MOOCs) are a relatively new phenomenon sweeping higher education. By definition, MOOCs take place online. They could be affiliated with a university, but not necessarily. They are larger than typical college classes, sometimes much larger. They are open, which has multiple meanings evident in this research. While the literature is growing on this topic, it is yet limited. 
Scholars are taking notice of the literature around MOOCs in all its forms from conceptual to technical. Conference proceedings and magazine articles make up the majority of literature on MOOCs (Liyanagunawardena, Adams, & Williams, 2013). In order to better understand the characteristics associated with MOOCs, this review of literature focuses solely on original research published in scholarly journals. This emphasis on peer-reviewed research is an essential first step to form a more critical and comprehensive perspective by tempering the media hype. While most of the early scholarly research examines aspects of the cMOOC model, much of the hype and controversy surrounds the scaling innovation of the xMOOC model in postsecondary learning contexts. Naidu (2013) calls out the massive open online repetitions of failed pedagogy (MOORFAPs) and forecasts a transformation to massive open online learning opportunities (MOOLOs). Informed educators will be better equipped to make evidence-based decisions, foster the positive growth of this innovation, and adapt it for their own unique contexts. This research synthesis is framed by a within- and between-study literature analysis (Onwuegbuzie, Leech, & Collins, 2012) and situated within the context of online teaching and learning.",
"title": ""
}
] | scidocsrr |
3f9b8b4ca875a82e67d43abce5ceb17d | Computational modeling of synthetic microbial biofilms. | [
{
"docid": "78967df4396e6d3d430f6349386debe9",
"text": "High-performance computing has recently seen a surge of interest in heterogeneous systems, with an emphasis on modern Graphics Processing Units (GPUs). These devices offer tremendous potential for performance and efficiency in important large-scale applications of computational science. However, exploiting this potential can be challenging, as one must adapt to the specialized and rapidly evolving computing environment currently exhibited by GPUs. One way of addressing this challenge is to embrace better techniques and develop tools tailored to their needs. This article presents one simple technique, GPU run-time code generation (RTCG), along with PyCUDA and PyOpenCL, two open-source toolkits that support this technique. In introducing PyCUDA and PyOpenCL, this article proposes the combination of a dynamic, high-level scripting language with the massive performance of a GPU as a compelling two-tiered computing platform, potentially offering significant performance and productivity advantages over conventional single-tier, static systems. The concept of RTCG is simple and easily implemented using existing, robust infrastructure. Nonetheless it is powerful enough to support (and encourage) the creation of custom application-specific tools by its users. The premise of the paper is illustrated by a wide range of examples where the technique has been applied with considerable success.",
"title": ""
}
] | [
{
"docid": "c88490a2ebc4372b10e9abd2cbacd8ca",
"text": "The automatic system for voice pathology assessment is one of the active areas for researchers in the recent years due to its benefits to the clinicians and presence of a significant number of dysphonic patients around the globe. In this paper, a voice disorder detection system is developed to differentiate between a normal and pathological voice signal. The system is implemented by applying the local binary pattern (LBP) operator on Mel-weighted spectrum of a signal. The LBP is considered as one of the sophisticated techniques for the image processing. The technique also provided very good results for voice pathology detection during this study. The English voice disorder database MEEI is used to evaluate the performance of the developed system. The results of the LBP operator based system are compared with MFCC and found to be better than MFCC. Key-Words: LBP operator, MFCC, Vocal fold disorders, Sustained vowel, MEEI database, disorder detection system.",
"title": ""
},
{
"docid": "854d06ba08492ad68ea96c73908f81ca",
"text": "We describe Swapout, a new stochastic training method, that outperforms ResNets of identical network structure yielding impressive results on CIFAR-10 and CIFAR100. Swapout samples from a rich set of architectures including dropout [20], stochastic depth [7] and residual architectures [5, 6] as special cases. When viewed as a regularization method swapout not only inhibits co-adaptation of units in a layer, similar to dropout, but also across network layers. We conjecture that swapout achieves strong regularization by implicitly tying the parameters across layers. When viewed as an ensemble training method, it samples a much richer set of architectures than existing methods such as dropout or stochastic depth. We propose a parameterization that reveals connections to existing architectures and suggests a much richer set of architectures to be explored. We show that our formulation suggests an efficient training method and validate our conclusions on CIFAR-10 and CIFAR-100 matching state of the art accuracy. Remarkably, our 32 layer wider model performs similar to a 1001 layer ResNet model.",
"title": ""
},
{
"docid": "97c0dc54f51ebcfe041f18028a15c621",
"text": "Mobile learning or “m-learning” is the process of learning when learners are not at a fixed location or time and can exploit the advantage of learning opportunities using mobile technologies. Nowadays, speech recognition is being used in many mobile applications. Speech recognition helps people to interact with the device as if they were talking to another person. This technology helps people to learn anything using computers by promoting self-study over extended periods of time. The objective of this study focuses on designing and developing a mobile application for the Arabic recognition of spoken Quranic verses. The application is suitable for Android-based devices. The application is called Say Quran and is available on Google Play Store. Moreover, this paper presents the results of a preliminary study to gather feedback from students regarding the developed application.",
"title": ""
},
{
"docid": "c2fe863aba72df9df8405329c36046b6",
"text": "Feature learning for 3D shapes is challenging due to the lack of natural parameterization for 3D surface models. We adopt the multi-view depth image representation and propose Multi-View Deep Extreme Learning Machine (MVD-ELM) to achieve fast and quality projective feature learning for 3D shapes. In contrast to existing multi-view learning approaches, our method ensures the feature maps learned for different views are mutually dependent via shared weights and in each layer, their unprojections together form a valid 3D reconstruction of the input 3D shape through using normalized convolution kernels. These lead to a more accurate 3D feature learning as shown by the encouraging results in several applications. Moreover, the 3D reconstruction property enables clear visualization of the learned features, which further demonstrates the meaningfulness of our feature learning.",
"title": ""
},
{
"docid": "d563b025b084b53c30afba4211870f2d",
"text": "Collaborative filtering (CF) techniques recommend items to users based on their historical ratings. In real-world scenarios, user interests may drift over time since they are affected by moods, contexts, and pop culture trends. This leads to the fact that a user’s historical ratings comprise many aspects of user interests spanning a long time period. However, at a certain time slice, one user’s interest may only focus on one or a couple of aspects. Thus, CF techniques based on the entire historical ratings may recommend inappropriate items. In this paper, we consider modeling user-interest drift over time based on the assumption that each user has multiple counterparts over temporal domains and successive counterparts are closely related. We adopt the cross-domain CF framework to share the static group-level rating matrix across temporal domains, and let user-interest distribution over item groups drift slightly between successive temporal domains. The derived method is based on a Bayesian latent factor model which can be inferred using Gibbs sampling. Our experimental results show that our method can achieve state-of-the-art recommendation performance as well as explicitly track and visualize user-interest drift over time.",
"title": ""
},
{
"docid": "7ba4375393aac729b8f549c1e4109ec2",
"text": "Due to their capacity-achieving property, polar codes have become one of the most attractive channel codes. To date, the successive-cancellation list (SCL) decoding algorithm is the primary approach that can guarantee outstanding error-correcting performance of polar codes. However, the hardware designs of the original SCL decoder have a large silicon area and a long decoding latency. Although some recent efforts can reduce either the area or latency of SCL decoders, these two metrics still cannot be optimized at the same time. This brief, for the first time, proposes a general log-likelihood-ratio (LLR) based SCL decoding algorithm with multibit decision. This new algorithm, referred to as LLR-2Kb-SCL, can determine 2K bits simultaneously for arbitrary K with the use of LLR messages. In addition, a reduced-data-width scheme is presented to reduce the critical path of the sorting block. Then, based on the proposed algorithm, a VLSI architecture of the new SCL decoder is developed. Synthesis results show that, for an example (1024, 512) polar code with list size 4, the proposed LLR-2Kb-SCL decoders achieve a significant reduction in both area and latency as compared to prior works. As a result, the hardware efficiencies of the proposed designs with K = 2 and 3 are 2.33 times and 3.32 times of that of the state-of-the-art works, respectively.",
"title": ""
},
{
"docid": "18aeabe12c3f890b5aa6d5b1f6ded386",
"text": "Many stream-based applications have sophisticated data processing requirements and real-time performance expectations that need to be met under high-volume, time-varying data streams. In order to address these challenges, we propose novel operator scheduling approaches that specify (1) which operators to schedule (2) in which order to schedule the operators, and (3) how many tuples to process at each execution step. We study our approaches in the context of the Aurora data stream manager. We argue that a fine-grained scheduling approach in combination with various scheduling techniques (such as batching of operators and tuples) can significantly improve system efficiency by reducing various system overheads. We also discuss application-aware extensions that make scheduling decisions according to per-application Quality of Service (QoS) specifications. Finally, we present prototype-based experimental results that characterize the efficiency and effectiveness of our approaches under various stream workloads and processing scenarios.",
"title": ""
},
{
"docid": "9caecbf1b2fa9af51966223b83c14a80",
"text": "This paper is a work-in-progress account of ideas and propositions about resilience in social-ecological systems. It articulates our understanding of how these complex systems change and what determines their ability to absorb disturbances in either their ecological or their social domains. We call them “propositions” because, although they are useful in helping us understand and compare different social-ecological systems, they are not sufficiently well defined to be considered formal hypotheses. These propositions were developed in two workshops, in 2003 and 2004, in which participants compared the dynamics of 15 case studies in a wide range of regions around the world. The propositions raise many questions, and we present a list of some that could help define the next phase of resilience-related research.",
"title": ""
},
{
"docid": "015e678d9195b96ac8b818a62613d9b9",
"text": "Information extraction and human collaboration techniques are widely applied in the construction of web-scale knowledge bases. However, these knowledge bases are often incomplete or uncertain. In this paper, we present ProbKB, a probabilistic knowledge base designed to infer missing facts in a scalable, probabilistic, and principled manner using a relational DBMS. The novel contributions we make to achieve scalability and high quality are: 1) We present a formal definition and a novel relational model for probabilistic knowledge bases. This model allows an efficient SQL-based inference algorithm for knowledge expansion that applies inference rules in batches; 2) We implement ProbKB on massive parallel processing databases to achieve further scalability; and 3) We combine several quality control methods that identify erroneous rules, facts, and ambiguous entities to improve the precision of inferred facts. Our experiments show that ProbKB system outperforms the state-of-the-art inference engine in terms of both performance and quality.",
"title": ""
},
{
"docid": "9a43476b4038e554c28e09bae9140e24",
"text": "The success of text-based retrieval motivates us to investigate analogous techniques which can support the querying and browsing of image data. However, images differ significantly from text both syntactically and semantically in their mode of representing and expressing information. Thus, the generalization of information retrieval from the text domain to the image domain is non-trivial. This paper presents a framework for information retrieval in the image domain which supports content-based querying and browsing of images. A critical first step to establishing such a framework is to construct a codebook of \"keywords\" for images which is analogous to the dictionary for text documents. We refer to such \"keywords\" in the image domain as \"keyblocks.\" In this paper, we first present various approaches to generating a codebook containing keyblocks at different resolutions. Then we present a keyblock-based approach to content-based image retrieval. In this approach, each image is encoded as a set of one-dimensional index codes linked to the keyblocks in the codebook, analogous to considering a text document as a linear list of keywords. Generalizing upon text-based information retrieval methods, we then offer various techniques for image-based information retrieval. By comparing the performance of this approach with conventional techniques using color and texture features, we demonstrate the effectiveness of the keyblock-based approach to content-based image retrieval.",
"title": ""
},
{
"docid": "e69ecf0d4d04a956b53f34673e353de3",
"text": "Over the past decade, the advent of new technology has brought about the emergence of smart cities aiming to provide their stakeholders with technology-based solutions that are effective and efficient. Insofar as the objective of smart cities is to improve outcomes that are connected to people, systems and processes of businesses, government and other public- and private-sector entities, its main goal is to improve the quality of life of all residents. Accordingly, smart tourism has emerged over the past few years as a subset of the smart city concept, aiming to provide tourists with solutions that address specific travel related needs. Dubai is an emerging tourism destination that has implemented smart city and smart tourism platforms to engage various stakeholders. The objective of this study is to identify best practices related to Dubai’s smart city and smart tourism. In so doing, Dubai’s mission and vision along with key dimensions and pillars are identified in relation to the advancements in the literature while highlighting key resources and challenges. A Smart Tourism Dynamic Responsive System (STDRS) framework is proposed while suggesting how Dubai may able to enhance users’ involvement and their overall experience.",
"title": ""
},
{
"docid": "4520316ecef3051305e547d50fadbb7a",
"text": "The increasing complexity and size of digital designs, in conjunction with the lack of a potent verification methodology that can effectively cope with this trend, continue to inspire engineers and academics in seeking ways to further automate design verification. In an effort to increase performance and to decrease engineering effort, research has turned to artificial intelligence (AI) techniques for effective solutions. The generation of tests for simulation-based verification can be guided by machine-learning techniques. In fact, recent advances demonstrate that embedding machine-learning (ML) techniques into a coverage-directed test generation (CDG) framework can effectively automate the test generation process, making it more effective and less error-prone. This article reviews some of the most promising approaches in this field, aiming to evaluate the approaches and to further stimulate more directed research in this area.",
"title": ""
},
{
"docid": "c29349c32074392e83f51b1cd214ec8a",
"text": "Recent work has shown that optical flow estimation can be formulated as a supervised learning task and can be successfully solved with convolutional networks. Training of the so-called FlowNet was enabled by a large synthetically generated dataset. The present paper extends the concept of optical flow estimation via convolutional networks to disparity and scene flow estimation. To this end, we propose three synthetic stereo video datasets with sufficient realism, variation, and size to successfully train large networks. Our datasets are the first large-scale datasets to enable training and evaluation of scene flow methods. Besides the datasets, we present a convolutional network for real-time disparity estimation that provides state-of-the-art results. By combining a flow and disparity estimation network and training it jointly, we demonstrate the first scene flow estimation with a convolutional network.",
"title": ""
},
{
"docid": "48c03a33c5d34b246dce4932ef0fa16e",
"text": "We present a solution to “Google Cloud and YouTube8M Video Understanding Challenge” that ranked 5th place. The proposed model is an ensemble of three model families, two frame level and one video level. The training was performed on augmented dataset, with cross validation.",
"title": ""
},
{
"docid": "43c49bb7d9cebb8f476079ac9dd0af27",
"text": "Nowadays, most recommender systems (RSs) mainly aim to suggest appropriate items for individuals. Due to the social nature of human beings, group activities have become an integral part of our daily life, thus motivating the study on group RS (GRS). However, most existing methods used by GRS make recommendations through aggregating individual ratings or individual predictive results rather than considering the collective features that govern user choices made within a group. As a result, such methods are heavily sensitive to data, hence they often fail to learn group preferences when the data are slightly inconsistent with predefined aggregation assumptions. To this end, we devise a novel GRS approach which accommodates both individual choices and group decisions in a joint model. More specifically, we propose a deep-architecture model built with collective deep belief networks and dual-wing restricted Boltzmann machines. With such a deep model, we can use high-level features, which are induced from lower-level features, to represent group preference so as to relieve the vulnerability of data. Finally, the experiments conducted on a real-world dataset prove the superiority of our deep model over other state-of-the-art methods.",
"title": ""
},
{
"docid": "fcf01af44da0c796cdaf02c8e05a0fd3",
"text": "As a promising paradigm to reduce both capital and operating expenditures, the cloud radio access network (C-RAN) has been shown to provide high spectral efficiency and energy efficiency. Motivated by its significant theoretical performance gains and potential advantages, C-RANs have been advocated by both the industry and research community. This paper comprehensively surveys the recent advances of C-RANs, including system architectures, key techniques, and open issues. The system architectures with different functional splits and the corresponding characteristics are comprehensively summarized and discussed. The state-of-the-art key techniques in C-RANs are classified as: the fronthaul compression, large-scale collaborative processing, and channel estimation in the physical layer; and the radio resource allocation and optimization in the upper layer. Additionally, given the extensiveness of the research area, open issues, and challenges are presented to spur future investigations, in which the involvement of edge cache, big data mining, social-aware device-to-device, cognitive radio, software defined network, and physical layer security for C-RANs are discussed, and the progress of testbed development and trial test is introduced as well.",
"title": ""
},
{
"docid": "01a57e4a8bcc91fd5d172280a6b47577",
"text": "Recommendation System Using Collaborative Filtering by Yunkyoung Lee Collaborative filtering is one of the best-known and most widely used techniques in recommendation systems; its basic idea is to predict which items a user would be interested in based on their preferences. Recommendation systems using collaborative filtering are able to provide an accurate prediction when enough data is provided, because this technique is based on the user’s preference. User-based collaborative filtering has been very successful in the past at predicting customer behavior as the most important part of the recommendation system. However, its widespread use has revealed some real challenges, such as data sparsity and data scalability, as the number of users and items gradually increases. To improve the execution time and accuracy of the prediction problem, this paper proposed item-based collaborative filtering applying dimension reduction in a recommendation system. It demonstrates that the proposed approach can achieve better performance and execution time for the recommendation system in terms of existing challenges, according to evaluation metrics using Mean Absolute Error (MAE).",
"title": ""
},
{
"docid": "b425265606966c9490519ab1d49f8141",
"text": "Any books that you read, no matter how you got the sentences that have been read from the books, surely they will give you goodness. But, we will show you one of recommendation of the book that you need to read. This web usability a user centered design approach is what we surely mean. We will show you the reasonable reasons why you need to read this book. This book is a kind of precious book written by an experienced author.",
"title": ""
},
{
"docid": "f53d13eeccff0048fc96e532a52a2154",
"text": "The physical principles underlying some current biomedical applications of magnetic nanoparticles are reviewed. Starting from well-known basic concepts, and drawing on examples from biology and biomedicine, the relevant physics of magnetic materials and their responses to applied magnetic fields are surveyed. The way these properties are controlled and used is illustrated with reference to (i) magnetic separation of labelled cells and other biological entities; (ii) therapeutic drug, gene and radionuclide delivery; (iii) radio frequency methods for the catabolism of tumours via hyperthermia; and (iv) contrast enhancement agents for magnetic resonance imaging applications. Future prospects are also discussed.",
"title": ""
},
{
"docid": "2f7b0229fc9e126e09abe769d2b927dc",
"text": "Complex event processing has become increasingly important in modern applications, ranging from supply chain management for RFID tracking to real-time intrusion detection. The goal is to extract patterns from such event streams in order to make informed decisions in real-time. However, networking latencies and even machine failure may cause events to arrive out-of-order at the event stream processing engine. In this work, we address the problem of processing event pattern queries specified over event streams that may contain out-of-order data. First, we analyze the problems state-of-the-art event stream processing technology would experience when faced with out-of-order data arrival. We then propose a new solution of physical implementation strategies for the core stream algebra operators such as sequence scan and pattern construction, including stack- based data structures and associated purge algorithms. Optimizations for sequence scan and construction as well as state purging to minimize CPU cost and memory consumption are also introduced. Lastly, we conduct an experimental study demonstrating the effectiveness of our approach.",
"title": ""
}
] | scidocsrr |
d01a3e2b37d2ff79ae457089d8d12c4f | Understanding Graph Sampling Algorithms for Social Network Analysis | [
{
"docid": "424b80d94ec00c6795d8c8a689c1d119",
"text": "With more than 250 million active users, Facebook (FB) is currently one of the most important online social networks. Our goal in this paper is to obtain a representative (unbiased) sample of Facebook users by crawling its social graph. In this quest, we consider and implement several candidate techniques. Two approaches that are found to perform well are the Metropolis-Hasting random walk (MHRW) and a re-weighted random walk (RWRW). Both have pros and cons, which we demonstrate through a comparison to each other as well as to the \"ground-truth\" (UNI - obtained through true uniform sampling of FB userIDs). In contrast, the traditional Breadth-First-Search (BFS) and Random Walk (RW) perform quite poorly, producing substantially biased results. In addition to offline performance assessment, we introduce online formal convergence diagnostics to assess sample quality during the data collection process. We show how these can be used to effectively determine when a random walk sample is of adequate size and quality for subsequent use (i.e., when it is safe to cease sampling). Using these methods, we collect the first, to the best of our knowledge, unbiased sample of Facebook. Finally, we use one of our representative datasets, collected through MHRW, to characterize several key properties of Facebook.",
"title": ""
},
{
"docid": "29e5d267bebdeb2aa22b137219b4407e",
"text": "Social networks are popular platforms for interaction, communication and collaboration between friends. Researchers have recently proposed an emerging class of applications that leverage relationships from social networks to improve security and performance in applications such as email, web browsing and overlay routing. While these applications often cite social network connectivity statistics to support their designs, researchers in psychology and sociology have repeatedly cast doubt on the practice of inferring meaningful relationships from social network connections alone.\n This leads to the question: Are social links valid indicators of real user interaction? If not, then how can we quantify these factors to form a more accurate model for evaluating socially-enhanced applications? In this paper, we address this question through a detailed study of user interactions in the Facebook social network. We propose the use of interaction graphs to impart meaning to online social links by quantifying user interactions. We analyze interaction graphs derived from Facebook user traces and show that they exhibit significantly lower levels of the \"small-world\" properties shown in their social graph counterparts. This means that these graphs have fewer \"supernodes\" with extremely high degree, and overall network diameter increases significantly as a result. To quantify the impact of our observations, we use both types of graphs to validate two well-known social-based applications (RE and SybilGuard). The results reveal new insights into both systems, and confirm our hypothesis that studies of social applications should use real indicators of user interactions in lieu of social graphs.",
"title": ""
}
] | [
{
"docid": "86aa31d70e44137ff16e81f79e1dac74",
"text": "The bee genus Lasioglossum Curtis is a model taxon for studying the evolutionary origins of and reversals in eusociality. This paper presents a phylogenetic analysis of Lasioglossum species and subgenera based on a data set consisting of 1240 bp of the mitochondrial cytochrome oxidase I (COI) gene for seventy-seven taxa (sixty-six ingroup and eleven outgroup taxa). Maximum parsimony was used to analyse the data set (using PAUP*4.0) by a variety of weighting methods, including equal weights, a priori weighting and a posteriori weighting. All methods yielded roughly congruent results. Michener's Hemihalictus series was found to be monophyletic in all analyses but one, while his Lasioglossum series formed a basal, paraphyletic assemblage in all analyses but one. Chilalictus was consistently found to be a basal taxon of Lasioglossum sensu lato and Lasioglossum sensu stricto was found to be monophyletic. Within the Hemihalictus series, major lineages included Dialictus + Paralictus, the acarinate Evylaeus + Hemihalictus + Sudila and the carinate Evylaeus + Sphecodogastra. Relationships within the Hemihalictus series were highly stable to altered weighting schemes, while relationships among the basal subgenera in the Lasioglossum series (Lasioglossum s.s., Chilalictus, Parasphecodes and Ctenonomia) were unclear. The social parasite of Dialictus, Paralictus, is consistently and unambiguously placed well within Dialictus, thus rendering Dialictus paraphyletic. The implications of this for understanding the origins of social parasitism are discussed.",
"title": ""
},
{
"docid": "6a9e30fd08b568ef6607158cab4f82b2",
"text": "Expertise with unfamiliar objects (‘greebles’) recruits face-selective areas in the fusiform gyrus (FFA) and occipital lobe (OFA). Here we extend this finding to other homogeneous categories. Bird and car experts were tested with functional magnetic resonance imaging during tasks with faces, familiar objects, cars and birds. Homogeneous categories activated the FFA more than familiar objects. Moreover, the right FFA and OFA showed significant expertise effects. An independent behavioral test of expertise predicted relative activation in the right FFA for birds versus cars within each group. The results suggest that level of categorization and expertise, rather than superficial properties of objects, determine the specialization of the FFA.",
"title": ""
},
{
"docid": "6659548688b11af67efd6996c0a6f07e",
"text": "We propose a single-shot approach for simultaneously detecting an object in an RGB image and predicting its 6D pose without requiring multiple stages or having to examine multiple hypotheses. Unlike a recently proposed single-shot technique for this task [10] that only predicts an approximate 6D pose that must then be refined, ours is accurate enough not to require additional post-processing. As a result, it is much faster - 50 fps on a Titan X (Pascal) GPU - and more suitable for real-time processing. The key component of our method is a new CNN architecture inspired by [27, 28] that directly predicts the 2D image locations of the projected vertices of the object's 3D bounding box. The object's 6D pose is then estimated using a PnP algorithm. For single object and multiple object pose estimation on the LINEMOD and OCCLUSION datasets, our approach substantially outperforms other recent CNN-based approaches [10, 25] when they are all used without postprocessing. During post-processing, a pose refinement step can be used to boost the accuracy of these two methods, but at 10 fps or less, they are much slower than our method.",
"title": ""
},
{
"docid": "a7bd7a5b7d79ce8c5691abfdcecfeec7",
"text": "We consider the problems of learning forward models that map state to high-dimensional images and inverse models that map high-dimensional images to state in robotics. Specifically, we present a perceptual model for generating video frames from state with deep networks, and provide a framework for its use in tracking and prediction tasks. We show that our proposed model greatly outperforms standard deconvolutional methods and GANs for image generation, producing clear, photo-realistic images. We also develop a convolutional neural network model for state estimation and compare the result to an Extended Kalman Filter to estimate robot trajectories. We validate all models on a real robotic system.",
"title": ""
},
{
"docid": "0722c349880f7d5e32658091d59a8ee3",
"text": "The purposes of the study are to explore the effects among brand awareness, perceived quality, brand loyalty and customer purchase intention and mediating effects of perceived quality and brand loyalty on brand awareness and purchase intention. The samples are collected from cellular phone users living in Chiyi, and the research adopts regression analysis and mediating test to examine the hypotheses. The results are: (a) the relations among the brand awareness, perceived quality and brand loyalty for purchase intention are significant and positive effect, (b) perceived quality has a positive effect on brand loyalty, (c) perceived quality will meditate the effects between brand awareness and purchase intention, and (d) brand loyalty will mediate the effects between brand awareness and purchase intention. The study suggests that cellular phone manufacturers ought to build a brand and promote its brand awareness through sales promotion, advertising, and other marketing activities. When brand awareness is high, its brand loyalty will also increase. Consumers will evaluate perceived quality of a product from their purchase experience. As a result, brand loyalty and brand preference will increase and also purchase intention.",
"title": ""
},
{
"docid": "0bfba7797a0e7dcd4817c10d4df350db",
"text": "Rapid and accurate counting and recognition of flying insects are of great importance, especially for pest control. Traditional manual identification and counting of flying insects is labor intensive and inefficient. In this study, a vision-based counting and classification system for flying insects is designed and implemented. The system is constructed as follows: firstly, a yellow sticky trap is installed in the surveillance area to trap flying insects and a camera is set up to collect real-time images. Then the detection and coarse counting method based on You Only Look Once (YOLO) object detection, the classification method and fine counting based on Support Vector Machines (SVM) using global features are designed. Finally, the insect counting and recognition system is implemented on Raspberry PI. Six species of flying insects including bee, fly, mosquito, moth, chafer and fruit fly are selected to assess the effectiveness of the system. Compared with the conventional methods, the test results show promising performance. The average counting accuracy is 92.50% and average classifying accuracy is 90.18% on Raspberry PI. The proposed system is easy-to-use and provides efficient and accurate recognition data, therefore, it can be used for intelligent agriculture applications.",
"title": ""
},
{
"docid": "5c7678fae587ef784b4327d545a73a3e",
"text": "The vision of Future Internet based on standard communication protocols considers the merging of computer networks, Internet of Things (IoT), Internet of People (IoP), Internet of Energy (IoE), Internet of Media (IoM), and Internet of Services (IoS), into a common global IT platform of seamless networks and networked “smart things/objects”. However, with the widespread deployment of networked, intelligent sensor technologies, an Internet of Things (IoT) is steadily evolving, much like the Internet decades ago. In the future, hundreds of billions of smart sensors and devices will interact with one another without human intervention, on a Machine-to-Machine (M2M) basis. They will generate an enormous amount of data at an unprecedented scale and resolution, providing humans with information and control of events and objects even in remote physical environments. This paper will provide an overview of performance evaluation, challenges and opportunities of IOT results for machine learning presented by this new paradigm.",
"title": ""
},
{
"docid": "dda3f8ba562da73d2561c1a1ee50290e",
"text": "For decades, studies of endocrine-disrupting chemicals (EDCs) have challenged traditional concepts in toxicology, in particular the dogma of \"the dose makes the poison,\" because EDCs can have effects at low doses that are not predicted by effects at higher doses. Here, we review two major concepts in EDC studies: low dose and nonmonotonicity. Low-dose effects were defined by the National Toxicology Program as those that occur in the range of human exposures or effects observed at doses below those used for traditional toxicological studies. We review the mechanistic data for low-dose effects and use a weight-of-evidence approach to analyze five examples from the EDC literature. Additionally, we explore nonmonotonic dose-response curves, defined as a nonlinear relationship between dose and effect where the slope of the curve changes sign somewhere within the range of doses examined. We provide a detailed discussion of the mechanisms responsible for generating these phenomena, plus hundreds of examples from the cell culture, animal, and epidemiology literature. We illustrate that nonmonotonic responses and low-dose effects are remarkably common in studies of natural hormones and EDCs. Whether low doses of EDCs influence certain human disorders is no longer conjecture, because epidemiological studies show that environmental exposures to EDCs are associated with human diseases and disabilities. We conclude that when nonmonotonic dose-response curves occur, the effects of low doses cannot be predicted by the effects observed at high doses. Thus, fundamental changes in chemical testing and safety determination are needed to protect human health.",
"title": ""
},
{
"docid": "8c89db7cda2547a9f84dec7a0990cd59",
"text": "In this paper, a changeable winding brushless DC (BLDC) motor for the expansion of the speed region is described. The changeable winding BLDC motor is driven by a large number of phase turns at low speeds and by a reduced number of turns at high speeds. For this reason, the section where the winding changes is very important. Ideally, the time at which the windings are to be converted should be same as the time at which the voltage changes. However, if this timing is not exactly synchronized, a large current is generated in the motor, and the demagnetization of the permanent magnet occurs. In addition, a large torque ripple is produced. In this paper, we describe the demagnetization of the permanent magnet in a fault situation when the windings change, and we suggest a design process to solve this problem.",
"title": ""
},
{
"docid": "a4ec796aa94914eead676eac4a688753",
"text": "Providing transactional primitives of NAND flash based solid state disks (SSDs) have demonstrated a great potential for high performance transaction processing and relieving software complexity. Similar with software solutions like write-ahead logging (WAL) and shadow paging, transactional SSD has two parts of overhead which include: 1) write overhead under normal condition, and 2) recovery overhead after power failures. Prior transactional SSD designs utilize out-of-band (OOB) area in flash pages to store transaction information to reduce the first part of overhead. However, they are required to scan a large part of or even whole SSD after power failures to abort unfinished transactions. Another limitation of prior approaches is the unicity of transactional primitive they provided. In this paper, we propose a new transactional SSD design named Möbius. Möbius provides different types of transactional primitives to support static and dynamic transactions separately. Möbius flash translation layer (mFTL), which combines normal FTL with transaction processing by storing mapping and transaction information together in a physical flash page as atom inode. By amortizing the cost of transaction processing with FTL persistence, MFTL achieve high performance in normal condition and does not increase write amplification ratio. After power failures, Möbius can leverage atom inode to eliminate unnecessary scanning and recover quickly. We implemented a prototype of Möbius and compare it with other state-of-art transactional SSD designs. Experimental results show that Möbius can at most 67% outperform in transaction throughput (TPS) and 29 times outperform in recovery time while still have similar or even better write amphfication ratio comparing with prior hardware approaches.",
"title": ""
},
{
"docid": "7a883f32f86dd6c9dbde6f0443072157",
"text": "Gaussian process (GP) regression models make for powerful predictors in out of sample exercises, but cubic runtimes for dense matrix decompositions severely limit the size of data—training and testing—on which they can be deployed. That means that in computer experiment, spatial/geo-physical, and machine learning contexts, GPs no longer enjoy privileged status as data sets continue to balloon in size. We discuss an implementation of local approximate Gaussian process models, in the laGP package for R, that offers a particular sparse-matrix remedy uniquely positioned to leverage modern parallel computing architectures. The laGP approach can be seen as an update on the spatial statistical method of local kriging neighborhoods. We briefly review the method, and provide extensive illustrations of the features in the package through worked-code examples. The appendix covers custom building options for symmetric multi-processor and graphical processing units, and built-in wrapper routines that automate distribution over a simple network of workstations.",
"title": ""
},
{
"docid": "4b84b6936669a2496e5172de0023c965",
"text": "We present a patient with partial monosomy of the short arm of chromosome 18 caused by de novo translocation t(Y;18) and a generalized form of keratosis pilaris (keratosis pilaris affecting the skin follicles of the trunk, limbs and face-ulerythema ophryogenes). Two-color FISH with centromere-specific Y and 18 DNA probes identified the derivative chromosome 18 as a dicentric with breakpoints in p11.2 on both involved chromosomes. The patient had another normal Y chromosome. This is a third report the presence of a chromosome 18p deletion (and first case of a translocation involving 18p and a sex chromosome) with this genodermatosis. Our data suggest that the short arm of chromosome 18 is a candidate region for a gene causing keratosis pilaris. Unmasking of a recessive mutation at the disease locus by deletion of the wild type allele could be the cause of the recessive genodermatosis.",
"title": ""
},
{
"docid": "683e496bd08fe3a55c63ba8788481184",
"text": "Ubicomp products have become more important in providing emotional experiences as users increasingly assimilate these products into their everyday lives. In this paper, we explored a new design perspective by applying a pet dog analogy to support emotional experience with ubicomp products. We were inspired by pet dogs, which are already intimate companions to humans and serve essential emotional functions in daily live. Our studies involved four phases. First, through our literature review, we articulated the key characteristics of pet dogs that apply to ubicomp products. Secondly, we applied these characteristics to a design case, CAMY, a mixed media PC peripheral with a camera. Like a pet dog, it interacts emotionally with a user. Thirdly, we conducted a user study with CAMY, which showed the effects of pet-like characteristics on users' emotional experiences, specifically on intimacy, sympathy, and delightedness. Finally, we presented other design cases and discussed the implications of utilizing a pet dog analogy to advance ubicomp systems for improved user experiences.",
"title": ""
},
{
"docid": "94a5e443ff4d6a6decdf1aeeb1460788",
"text": "Teaching the computer to understand language is the major goal in the field of natural language processing. In this thesis we introduce computational methods that aim to extract language structure— e.g. grammar, semantics or syntax— from text, which provides the computer with information in order to understand language. During the last decades, scientific efforts and the increase of computational resources made it possible to come closer to the goal of understanding language. In order to extract language structure, many approaches train the computer on manually created resources. Most of these so-called supervised methods show high performance when applied to similar textual data. However, they perform inferior when operating on textual data, which are different to the one they are trained on. Whereas training the computer is essential to obtain reasonable structure from natural language, we want to avoid training the computer using manually created resources. In this thesis, we present so-called unsupervisedmethods, which are suited to learn patterns in order to extract structure from textual data directly. These patterns are learned with methods that extract the semantics (meanings) of words and phrases. In comparison to manually built knowledge bases, unsupervised methods are more flexible: they can extract structure from text of different languages or text domains (e.g. finance or medical texts), without requiring manually annotated structure. However, learning structure from text often faces sparsity issues. The reason for these phenomena is that in language many words occur only few times. If a word is seen only few times no precise information can be extracted from the text it occurs. Whereas sparsity issues cannot be solved completely, information about most words can be gained by using large amounts of data. In the first chapter, we briefly describe how computers can learn to understand language. 
Afterwards, we present the main contributions, list the publications this thesis is based on and give an overview of this thesis. Chapter 2 introduces the terminology used in this thesis and gives a background about natural language processing. Then, we characterize the linguistic theory on how humans understand language. Afterwards, we show how the underlying linguistic intuition can be",
"title": ""
},
{
"docid": "03ddb008ceafca5d3251f405cb9daa36",
"text": "Mammographic risk scoring has commonly been automated by extracting a set of handcrafted features from mammograms, and relating the responses directly or indirectly to breast cancer risk. We present a method that learns a feature hierarchy from unlabeled data. When the learned features are used as the input to a simple classifier, two different tasks can be addressed: i) breast density segmentation, and ii) scoring of mammographic texture. The proposed model learns features at multiple scales. To control the models capacity a novel sparsity regularizer is introduced that incorporates both lifetime and population sparsity. We evaluated our method on three different clinical datasets. Our state-of-the-art results show that the learned breast density scores have a very strong positive relationship with manual ones, and that the learned texture scores are predictive of breast cancer. The model is easy to apply and generalizes to many other segmentation and scoring problems.",
"title": ""
},
{
"docid": "ab4a788fd82d5953e22032b1361328c2",
"text": "To recognize application of Artificial Neural Networks (ANNs) in weather forecasting, especially in rainfall forecasting a comprehensive literature review from 1923 to 2012 is done and presented in this paper. And it is found that architectures of ANN such as BPN, RBFN is best established to be forecast chaotic behavior and have efficient enough to forecast monsoon rainfall as well as other weather parameter prediction phenomenon over the smaller geographical region.",
"title": ""
},
{
"docid": "842cd58edd776420db869e858be07de4",
"text": "A nationwide interoperable public safety wireless broadband network is being planned by the First Responder Network Authority (FirstNet) under the auspices of the United States government. The public safety network shall provide the needed wireless coverage in the wake of an incident or a disaster. This paper proposes a drone-assisted multi-hop device-to-device (D2D) communication scheme as a means to extend the network coverage over regions where it is difficult to deploy a landbased relay. The resource are shared using either time division or frequency division scheme. Efficient algorithms are developed to compute the optimal position of the drone for maximizing the data rate, which are shown to be highly effective via simulations.",
"title": ""
},
{
"docid": "a92890bd940e28598b067f427a4ee04f",
"text": "Especially in times of high raw material prices as a result of limited availability of feed ingredients nutritionist look for ways to keep feed cost as low as possible. Part of this discussion is whether certain ingredients can be replaced by others while avoiding impairments of performance. This discussion sometimes includes the question whether supplemental methionine can be replaced by betaine. Use of supplemental methionine, choline and betaine is common in broiler diets. Biochemically, all three compounds can act as methyl group donors. Figure 1 illustrates metabolic pathways connecting choline, betaine and methionine. This chart shows that choline is transformed to betaine which can then deliver a CH3-group for methylation reactions. One of those reactions is the methylation of homocysteine to methionine. This reaction occurs as part of the homocysteine cycle, which continues by transferring the methyl group further and yielding homocysteine again. Thus, there is no net yield of methionine from this cycle, since it only functions to transport a methyl group.",
"title": ""
},
{
"docid": "c8269e0a67ab7f1af77a1ff5d602fd87",
"text": "Cryptanalysis identifies weaknesses of ciphers and investigates methods to exploit them in order to compute the plaintext and/or the secret cipher key. Exploitation is nontrivial and, in many cases, weaknesses have been shown to be effective only on reduced versions of the ciphers. In this paper we apply artificial neural networks to automatically “assist” cryptanalysts into exploiting cipher weaknesses. The networks are trained by providing data in a form that points out the weakness together with the encryption key, until the network is able to generalize and predict the key (or evaluate its likelihood) for any possible ciphertext. We illustrate the effectiveness of the approach through simple classical ciphers, by providing the first ciphertext-only attack on substitution ciphers based on neural networks.",
"title": ""
},
{
"docid": "3d6744ae85a9aa07d8c4cb68c79290c7",
"text": "Control over the motional degrees of freedom of atoms, ions, and molecules in a field-free environment enables unrivalled measurement accuracies but has yet to be applied to highly charged ions (HCIs), which are of particular interest to future atomic clock designs and searches for physics beyond the Standard Model. Here, we report on the Coulomb crystallization of HCIs (specifically 40Ar13+) produced in an electron beam ion trap and retrapped in a cryogenic linear radiofrequency trap by means of sympathetic motional cooling through Coulomb interaction with a directly laser-cooled ensemble of Be+ ions. We also demonstrate cooling of a single Ar13+ ion by a single Be+ ion—the prerequisite for quantum logic spectroscopy with a potential 10−19 accuracy level. Achieving a seven-orders-of-magnitude decrease in HCI temperature starting at megakelvin down to the millikelvin range removes the major obstacle for HCI investigation with high-precision laser spectroscopy.",
"title": ""
}
] | scidocsrr |
b0dd3f1aad518c98c1f4ff4f042a5703 | Semantic smart grid services: Enabling a standards-compliant Internet of energy platform with IEC 61850 and OPC UA | [
{
"docid": "ed06226e548fac89cc06a798618622c6",
"text": "Exciting yet challenging times lie ahead. The electrical power industry is undergoing rapid change. The rising cost of energy, the mass electrification of everyday life, and climate change are the major drivers that will determine the speed at which such transformations will occur. Regardless of how quickly various utilities embrace smart grid concepts, technologies, and systems, they all agree onthe inevitability of this massive transformation. It is a move that will not only affect their business processes but also their organization and technologies.",
"title": ""
},
{
"docid": "3bc9eb46e389b7be4141950142c606dd",
"text": "Within this contribution, we outline the use of the new automation standards family OPC Unified Architecture (IEC 62541) in scope with the IEC 61850 field automation standard. The IEC 61850 provides both an abstract data model and an abstract communication interface. Different technology mappings to implement the model exist. With the upcoming OPC UA, a new communication model to implement abstract interfaces has been introduced. We outline its use in this contribution and also give examples on how it can be used alongside the IEC 61970 Common Information Model to properly integrate ICT and field automation at communication standards level.",
"title": ""
}
] | [
{
"docid": "008f94637ed982a75c51577f4bfc3c34",
"text": "Revelations of large scale electronic surveillance and data mining by governments and corporations have fueled increased adoption of HTTPS. We present a traffic analysis attack against over 6000 webpages spanning the HTTPS deployments of 10 widely used, industryleading websites in areas such as healthcare, finance, legal services and streaming video. Our attack identifies individual pages in the same website with 89% accuracy, exposing personal details including medical conditions, financial and legal affairs and sexual orientation. We examine evaluation methodology and reveal accuracy variations as large as 18% caused by assumptions affecting caching and cookies. We present a novel defense reducing attack accuracy to 27% with a 9% traffic increase, and demonstrate significantly increased effectiveness of prior defenses in our evaluation context, inclusive of enabled caching, user-specific cookies and pages within the same website.",
"title": ""
},
{
"docid": "a5e23ca50545378ef32ed866b97fd418",
"text": "In the framework of computer assisted diagnosis of diabetic retinopathy, a new algorithm for detection of exudates is presented and discussed. The presence of exudates within the macular region is a main hallmark of diabetic macular edema and allows its detection with a high sensitivity. Hence, detection of exudates is an important diagnostic task, in which computer assistance may play a major role. Exudates are found using their high grey level variation, and their contours are determined by means of morphological reconstruction techniques. The detection of the optic disc is indispensable for this approach. We detect the optic disc by means of morphological filtering techniques and the watershed transformation. The algorithm has been tested on a small image data base and compared with the performance of a human grader. As a result, we obtain a mean sensitivity of 92.8% and a mean predictive value of 92.4%. Robustness with respect to changes of the parameters of the algorithm has been evaluated.",
"title": ""
},
{
"docid": "f905016b422d9c16ac11b85182f196c7",
"text": "The random forest (RF) classifier is an ensemble classifier derived from decision tree idea. However the parallel operations of several classifiers along with use of randomness in sample and feature selection has made the random forest a very strong classifier with accuracy rates comparable to most of currently used classifiers. Although, the use of random forest on handwritten digits has been considered before, in this paper RF is applied in recognizing Persian handwritten characters. Trying to improve the recognition rate, we suggest converting the structure of decision trees from a binary tree to a multi branch tree. The improvement gained this way proves the applicability of the idea.",
"title": ""
},
{
"docid": "956b7139333421343e8ed245a63a7b4b",
"text": "Purpose – During the last decades, different quality management concepts, including total quality management (TQM), six sigma and lean, have been applied by many different organisations. Although much important work has been documented regarding TQM, six sigma and lean, a number of questions remain concerning the applicability of these concepts in various organisations and contexts. Hence, the purpose of this paper is to describe the similarities and differences between the concepts, including an evaluation and criticism of each concept. Design/methodology/approach – Within a case study, a literature review and face-to-face interviews in typical TQM, six sigma and lean organisations have been carried out. Findings – While TQM, six sigma and lean have many similarities, especially concerning origin, methodologies, tools and effects, they differ in some areas, in particular concerning the main theory, approach and the main criticism. The lean concept is slightly different from TQM and six sigma. However, there is a lot to gain if organisations are able to combine these three concepts, as they are complementary. Six sigma and lean are excellent road-maps, which could be used one by one or combined, together with the values in TQM. Originality/value – The paper provides guidance to organisations regarding the applicability and properties of quality concepts. Organisations need to work continuously with customer-orientated activities in order to survive; irrespective of how these activities are labelled. The paper will also serve as a basis for further research in this area, focusing on practical experience of these concepts.",
"title": ""
},
{
"docid": "4d18ea8816e9e4abf428b3f413c82f9e",
"text": "This paper reviews computer vision and image analysis studies aiming at automated diagnosis or screening of malaria infection in microscope images of thin blood film smears. Existing works interpret the diagnosis problem differently or propose partial solutions to the problem. A critique of these works is furnished. In addition, a general pattern recognition framework to perform diagnosis, which includes image acquisition, pre-processing, segmentation, and pattern classification components, is described. The open problems are addressed and a perspective of the future work for realization of automated microscopy diagnosis of malaria is provided.",
"title": ""
},
{
"docid": "bf7d502a818ac159cf402067b4416858",
"text": "We present algorithms for evaluating and performing modeling operatyons on NURBS surfaces using the programmable fragment processor on the Graphics Processing Unit (GPU). We extend our GPU-based NURBS evaluator that evaluates NURBS surfaces to compute exact normals for either standard or rational B-spline surfaces for use in rendering and geometric modeling. We build on these calculations in our new GPU algorithms to perform standard modeling operations such as inverse evaluations, ray intersections, and surface-surface intersections on the GPU. Our modeling algorithms run in real time, enabling the user to sketch on the actual surface to create new features. In addition, the designer can edit the surface by interactively trimming it without the need for re-tessellation. We also present a GPU-accelerated algorithm to perform surface-surface intersection operations with NURBS surfaces that can output intersection curves in the model space as well as in the parametric spaces of both the intersecting surfaces at interactive rates.",
"title": ""
},
{
"docid": "b3f423e513c543ecc9fe7003ff9880ea",
"text": "Increasing attention has been paid to air quality monitoring with a rapid development in industry and transportation applications in the modern society. However, the existing air quality monitoring systems cannot provide satisfactory spatial and temporal resolutions of the air quality information with low costs in real time. In this paper, we propose a new method to implement the air quality monitoring system based on state-of-the-art Internet-of-Things (IoT) techniques. In this system, portable sensors collect the air quality information timely, which is transmitted through a low power wide area network. All air quality data are processed and analyzed in the IoT cloud. The completed air quality monitoring system, including both hardware and software, is developed and deployed successfully in urban environments. Experimental results show that the proposed system is reliable in sensing the air quality, which helps reveal the change patterns of air quality to some extent.",
"title": ""
},
{
"docid": "b7062e40643ff1b879247a3f4ec3b07f",
"text": "The question of whether there are different patterns of autonomic nervous system responses for different emotions is examined. Relevant conceptual issues concerning both the nature of emotion and the structure of the autonomic nervous system are discussed in the context of the development of research methods appropriate for studying this question. Are different emotional states associated with distinct patterns of autonomic nervous system (ANS) activity? This is an old question that is currently enjoying a modest revival in psychology. In the 1950s autonomic specificity was a key item on the agenda of the newly emerging discipline of psychophysiology, which saw as its mission the scientific exploration of the mind-body relationship using the tools of electrophysiological measurement. But the field of psychophysiology had the misfortune of coming of age during a period in which psychology drifted away from its physiological roots, a period in which psychology was dominated by learning, behaviourism, personality theory and later by cognition. Psychophysiology in the period between 1960 and 1980 reflected these broader trends in psychology by focusing on such issues as autonomic markers of perceptual states (e.g. orienting, stimulus processing), the interplay between personality factors and ANS responsivity, operant conditioning of autonomic functions, and finally, electrophysiological markers of cognitive states. Research on autonomic specificity in emotion became increasingly rare. Perhaps as a result of these historical trends in psychology, or perhaps because research on emotion and physiology is so difficult to do well, there 18 SOCIAL PSYCHOPHYSIOLOGY AND EMOTION exists only a small body of studies on ANS specificity. Although almost all of these studies report some evidence for the existence of specificity, the prevailing zeitgeist has been that specificity has not been empirically established. 
At this point in time a review of the existing literature would not be very informative, for it would inevitably dissolve into a critique of methods. Instead, what I hope to accomplish in this chapter is to provide a new framework for thinking about ANS specificity, and to propose guidelines for carrying out research on this issue that will be cognizant of the recent methodological and theoretical advances that have been made both in psychophysiology and in research on emotion. Emotion as organization From the outset, the definition of emotion that underlies this chapter should be made explicit. For me the essential function of emotion is organization. The selection of emotion for preservation across time and species is based on the need for an efficient mechanism than can mobilize and organize disparate response systems to deal with environmental events that pose a threat to survival. In this view the prototypical context for human emotions is those situations in which a multi-system response must be organized quickly, where time is not available for the lengthy processes of deliberation, reformulation, planning and rehearsal; where a fine degree of co-ordination is required among systems as disparate as the muscles of the face and the organs of the viscera; and where adaptive behaviours that normally reside near the bottom of behavioural hierarchies must be instantaneously shifted to the top. Specificity versus undifferentiated arousal In this model of emotion as organization it is assumed that each component system is capable of a number of different responses, and that the emotion will guide the selection of responses from each system. Component systems differ in terms of the number of response possibilities. Thus, in the facial expressive system a selection must be made among a limited set of prototypic emotional expressions (which are but a subset of the enormous number of expressions the face is capable of assuming). 
A motor behaviour must also be selected from a similarly reduced set of responses consisting of fighting, fleeing, freezing, hiding, etc. All major theories of emotion would accept the proposition that activation of the ANS is one of the changes that occur during emotion. But theories differ as to how many different ANS patterns constitute the set of selection possibilities. At one extreme are those who would argue that there are only two ANS patterns: 'off' and 'on'. The 'on' ANS pattern, according to this view, consists of a high-level, global, diffuse ANS activation, mediated primarily by the sympathetic branch of the ANS. The manifestations of this pattern are well known: rapid and forceful contractions of the heart, rapid and deep breathing, increased systolic blood pressure, sweating, dry mouth, redirection of blood flow to large skeletal muscles, peripheral vasoconstriction, release of large amounts of epinephrine and norepinephrine from the adrenal medulla, and the resultant release of glucose from the liver. Cannon (1927) described this pattern in some detail, arguing that this kind of high-intensity, undifferentiated arousal accompanied all emotions. Among contemporary theories the notion of undifferentiated arousal is most clearly found in Mandler's theory (Mandler, 1975). However, undifferentiated arousal also played a major role in the extraordinarily influential cognitive/physiological theory of Schachter and Singer (1962). According to this theory, undifferentiated arousal is a necessary precondition for emotion: an extremely plastic medium to be moulded by cognitive processes working in concert with the available cues from the social environment. At the other extreme are those who argue that there are a large number of patterns of ANS activation, each associated with a different emotion (or subset of emotions). This is the traditional specificity position. 
Its classic statement is often attributed to James (1884), although Alexander (1950) provided an even more radical version. The specificity position fuelled a number of experimental studies in the 1950s and 1960s, all attempting to identify some of these autonomic patterns (e.g. Averill, 1969; Ax, 1953; Funkenstein, King and Drolette, 1954; Schachter, 1957; Sternbach, 1962). Despite these studies, all of which reported support for ANS specificity, the undifferentiated arousal theory, especially as formulated by Schachter and Singer (1962) and their followers, has been dominant for a great many years. Is the ANS capable of specific action? No matter how appealing the notion of ANS specificity might be in the abstract, there would be little reason to pursue it in the laboratory if the ANS were only capable of producing one pattern of arousal. There is no question that the pattern of high-level sympathetic arousal described earlier is one pattern that the ANS can produce. Cannon's arguments notwithstanding, I believe there now is quite ample evidence that the ANS is capable of a number of different patterns of activation. Whether these patterns are reliably associated with different emotions remains an empirical question, but the potential is surely there. A case in support of this potential for specificity can be based on: (a) the neural structure of the ANS; (b) the stimulation neurochemistry of the ANS; and (c) empirical findings.",
"title": ""
},
{
"docid": "7b0e63115a7d085a180e047ae1ab2139",
"text": "We describe a set of tools for retail analytics based on a combination of video understanding and transaction-log. Tools are provided for loss prevention (returns fraud and cashier fraud), store operations (customer counting) and merchandising (display effectiveness). Results are presented on returns fraud and customer counting.",
"title": ""
},
{
"docid": "09e2a91a25e4ecccc020a91e14a35282",
"text": "A core goal of auditory neuroscience is to build quantitative models that predict cortical responses to natural sounds. Reasoning that a complete model of auditory cortex must solve ecologically relevant tasks, we optimized hierarchical neural networks for speech and music recognition. The best-performing network contained separate music and speech pathways following early shared processing, potentially replicating human cortical organization. The network performed both tasks as well as humans and exhibited human-like errors despite not being optimized to do so, suggesting common constraints on network and human performance. The network predicted fMRI voxel responses substantially better than traditional spectrotemporal filter models throughout auditory cortex. It also provided a quantitative signature of cortical representational hierarchy-primary and non-primary responses were best predicted by intermediate and late network layers, respectively. The results suggest that task optimization provides a powerful set of tools for modeling sensory systems.",
"title": ""
},
{
"docid": "c97e005d827b712e7d61d8a911c3bed6",
"text": "Industries and individuals outsource database to realize convenient and low-cost applications and services. In order to provide sufficient functionality for SQL queries, many secure database schemes have been proposed. However, such schemes are vulnerable to privacy leakage to cloud server. The main reason is that database is hosted and processed in cloud server, which is beyond the control of data owners. For the numerical range query (“>,” “<,” and so on), those schemes cannot provide sufficient privacy protection against practical challenges, e.g., privacy leakage of statistical properties, access pattern. Furthermore, increased number of queries will inevitably leak more information to the cloud server. In this paper, we propose a two-cloud architecture for secure database, with a series of intersection protocols that provide privacy preservation to various numeric-related range queries. Security analysis shows that privacy of numerical information is strongly protected against cloud providers in our proposed scheme.",
"title": ""
},
{
"docid": "6c2b19b2888d00fccb1eae37352d653d",
"text": "Between June 1985 and January 1987, the Therac-25 medical electron accelerator was involved in six massive radiation overdoses. As a result, several people died and others were seriously injured. A detailed investigation of the factors involved in the software-related overdoses and attempts by users, manufacturers, and government agencies to deal with the accidents is presented. The authors demonstrate the complex nature of accidents and the need to investigate all aspects of system development and operation in order to prevent future accidents. The authors also present some lessons learned in terms of system engineering, software engineering, and government regulation of safety-critical systems containing software components.<<ETX>>",
"title": ""
},
{
"docid": "7dc652c9b86f63c0a6b546396980783b",
"text": "An affine invariant representation is constructed with a cascade of invariants, which preserves information for classification. A joint translation and rotation invariant representation of image patches is calculated with a scattering transform. It is implemented with a deep convolution network, which computes successive wavelet transforms and modulus non-linearities. Invariants to scaling, shearing and small deformations are calculated with linear operators in the scattering domain. State-of-the-art classification results are obtained over texture databases with uncontrolled viewing conditions.",
"title": ""
},
{
"docid": "39e71a3228331eb8b1574173cfb1e04a",
"text": "Euler Number is one of the most important characteristics in topology. In two-dimension digital images, the Euler characteristic is locally computable. The form of Euler Number formula is different under 4-connected and 8-connected conditions. Based on the definition of the Foreground Segment and Neighbor Number, a formula of the Euler Number computing is proposed and is proved in this paper. It is a new idea to locally compute Euler Number of 2D image.",
"title": ""
},
{
"docid": "b2d1a0befef19d466cd29868d5cf963b",
"text": "Accurate prediction of the functional effect of genetic variation is critical for clinical genome interpretation. We systematically characterized the transcriptome effects of protein-truncating variants, a class of variants expected to have profound effects on gene function, using data from the Genotype-Tissue Expression (GTEx) and Geuvadis projects. We quantitated tissue-specific and positional effects on nonsense-mediated transcript decay and present an improved predictive model for this decay. We directly measured the effect of variants both proximal and distal to splice junctions. Furthermore, we found that robustness to heterozygous gene inactivation is not due to dosage compensation. Our results illustrate the value of transcriptome data in the functional interpretation of genetic variants.",
"title": ""
},
{
"docid": "c51e1b845d631e6d1b9328510ef41ea0",
"text": "Accurate interference models are important for use in transmission scheduling algorithms in wireless networks. In this work, we perform extensive modeling and experimentation on two 20-node TelosB motes testbeds -- one indoor and the other outdoor -- to compare a suite of interference models for their modeling accuracies. We first empirically build and validate the physical interference model via a packet reception rate vs. SINR relationship using a measurement driven method. We then similarly instantiate other simpler models, such as hop-based, range-based, protocol model, etc. The modeling accuracies are then evaluated on the two testbeds using transmission scheduling experiments. We observe that while the physical interference model is the most accurate, it is still far from perfect, providing a 90-percentile error about 20-25% (and 80 percentile error 7-12%), depending on the scenario. The accuracy of the other models is worse and scenario-specific. The second best model trails the physical model by roughly 12-18 percentile points for similar accuracy targets. Somewhat similar throughput performance differential between models is also observed when used with greedy scheduling algorithms. Carrying on further, we look closely into the the two incarnations of the physical model -- 'thresholded' (conservative, but typically considered in literature) and 'graded' (more realistic). We show via solving the one shot scheduling problem, that the graded version can improve `expected throughput' over the thresholded version by scheduling imperfect links.",
"title": ""
},
{
"docid": "57c2422bac0a8f44b186fadbfcadb393",
"text": "In this paper, we propose a vision-based multiple lane boundaries detection and estimation structure that fuses the edge features and the high intensity features. Our approach utilizes a camera as the only input sensor. The application of Kalman filter for information fusion and tracking significantly improves the reliability and robustness of our system. We test our system on roads with different driving scenarios, including day, night, heavy traffic, rain, confusing textures and shadows. The feasibility of our approach is demonstrated by quantitative evaluation using manually labeled video clips.",
"title": ""
},
{
"docid": "838b599024a14e952145af0c12509e31",
"text": "In this paper, a broadband high-power eight-way coaxial waveguide power combiner with axially symmetric structure is proposed. A combination of circuit model and full electromagnetic wave methods is used to simplify the design procedure by increasing the role of the circuit model and, in contrast, reducing the amount of full wave optimization. The presented structure is compact and easy to fabricate. Keeping its return loss greater than 12 dB, the constructed combiner operates within 112% bandwidth from 520 to 1860 MHz.",
"title": ""
},
{
"docid": "6de71e8106d991d2c3d2b845a9e0a67e",
"text": "XML repositories are now a widespread means for storing and exchanging information on the Web. As these repositories become increasingly used in dynamic applications such as e-commerce, there is a rapidly growing need for a mechanism to incorporate reactive functionality in an XML setting. Event-condition-action (ECA) rules are a technology from active databases and are a natural method for supporting suchfunctionality. ECA rules can be used for activities such as automatically enforcing document constraints, maintaining repository statistics, and facilitating publish/subscribe applications. An important question associated with the use of a ECA rules is how to statically predict their run-time behaviour. In this paper, we define a language for ECA rules on XML repositories. We then investigate methods for analysing the behaviour of a set of ECA rules, a task which has added complexity in this XML setting compared with conventional active databases.",
"title": ""
},
{
"docid": "007f741a718d0c4a4f181676a39ed54a",
"text": "Following the development of computing and communication technologies, the idea of Internet of Things (IoT) has been realized not only at research level but also at application level. Among various IoT-related application fields, biometrics applications, especially face recognition, are widely applied in video-based surveillance, access control, law enforcement and many other scenarios. In this paper, we introduce a Face in Video Recognition (FivR) framework which performs real-time key-frame extraction on IoT edge devices, then conduct face recognition using the extracted key-frames on the Cloud back-end. With our key-frame extraction engine, we are able to reduce the data volume hence dramatically relief the processing pressure of the cloud back-end. Our experimental results show with IoT edge device acceleration, it is possible to implement face in video recognition application without introducing the middle-ware or cloud-let layer, while still achieving real-time processing speed.",
"title": ""
}
] | scidocsrr |
d63f1e7dcbda8cd429b78be6841859a9 | Permission based Android security: Issues and countermeasures | [
{
"docid": "cb561e56e60ba0e5eef2034158c544c2",
"text": "Android is a modern and popular software platform for smartphones. Among its predominant features is an advanced security model which is based on application-oriented mandatory access control and sandboxing. This allows developers and users to restrict the execution of an application to the privileges it has (mandatorily) assigned at installation time. The exploitation of vulnerabilities in program code is hence believed to be confined within the privilege boundaries of an application’s sandbox. However, in this paper we show that a privilege escalation attack is possible. We show that a genuine application exploited at runtime or a malicious application can escalate granted permissions. Our results immediately imply that Android’s security model cannot deal with a transitive permission usage attack and Android’s sandbox model fails as a last resort against malware and sophisticated runtime attacks.",
"title": ""
}
] | [
{
"docid": "3cdd640f48c1713c3d360da00c634883",
"text": "Hate speech detection in social media texts is an important Natural language Processing task, which has several crucial applications like sentiment analysis, investigating cyber bullying and examining socio-political controversies. While relevant research has been done independently on code-mixed social media texts and hate speech detection, our work is the first attempt in detecting hate speech in HindiEnglish code-mixed social media text. In this paper, we analyze the problem of hate speech detection in code-mixed texts and present a Hindi-English code-mixed dataset consisting of tweets posted online on Twitter. The tweets are annotated with the language at word level and the class they belong to (Hate Speech or Normal Speech). We also propose a supervised classification system for detecting hate speech in the text using various character level, word level, and lexicon based features.",
"title": ""
},
{
"docid": "6c4b9b5383269ed47d2077068652f0b7",
"text": "Security issues in computer networks have focused on attacks on end systems and the control plane. An entirely new class of emerging network attacks aims at the data plane of the network. Data plane forwarding in network routers has traditionally been implemented with custom-logic hardware, but recent router designs increasingly use software-programmable network processors for packet forwarding. These general-purpose processing devices exhibit software vulnerabilities and are susceptible to attacks. We demonstrate-to our knowledge the first-practical attack that exploits a vulnerability in packet processing software to launch a devastating denial-of-service attack from within the network infrastructure. This attack uses only a single attack packet to consume the full link bandwidth of the router's outgoing link. We also present a hardware-based defense mechanism that can detect situations where malicious packets try to change the operation of the network processor. Using a hardware monitor, our NetFPGA-based prototype system checks every instruction executed by the network processor and can detect deviations from correct processing within four clock cycles. A recovery system can restore the network processor to a safe state within six cycles. This high-speed detection and recovery system can ensure that network processors can be protected effectively and efficiently from this new class of attacks.",
"title": ""
},
{
"docid": "150e7a6f46e93fc917e43e32dedd9424",
"text": "This purpose of this introductory paper is threefold. First, it introduces the Monte Carlo method with emphasis on probabilistic machine learning. Second, it reviews the main building blocks of modern Markov chain Monte Carlo simulation, thereby providing and introduction to the remaining papers of this special issue. Lastly, it discusses new interesting research horizons.",
"title": ""
},
{
"docid": "319a2cf90013976af8ea5cee9f8ddc88",
"text": "Inspired by “GoogleTM Sets”, we consider the problem of retrieving items from a concept or cluster, given a query consisting of a few items from that cluster. We formulate this as a Bayesian inference problem and describe a very simple algorithm for solving it. Our algorithm uses a modelbased concept of a cluster and ranks items using a score which evaluates the marginal probability that each item belongs to a cluster containing the query items. For exponential family models with conjugate priors this marginal probability is a simple function of sufficient statistics. We focus on sparse binary data and show that our score can be evaluated exactly using a single sparse matrix multiplication, making it possible to apply our algorithm to very large datasets. We evaluate our algorithm on three datasets: retrieving movies from EachMovie, finding completions of author sets from the NIPS dataset, and finding completions of sets of words appearing in the Grolier encyclopedia. We compare to Google TM Sets and show that Bayesian Sets gives very reasonable set completions.",
"title": ""
},
{
"docid": "f62ea522062fb48860c98140d746ab23",
"text": "Feature selection is widely used in preparing high-dimensional data for effective data mining. The explosive popularity of social media produces massive and high-dimensional data at an unprecedented rate, presenting new challenges to feature selection. Social media data consists of (1) traditional high-dimensional, attribute-value data such as posts, tweets, comments, and images, and (2) linked data that provides social context for posts and describes the relationships between social media users as well as who generates the posts, and so on. The nature of social media also determines that its data is massive, noisy, and incomplete, which exacerbates the already challenging problem of feature selection. In this article, we study a novel feature selection problem of selecting features for social media data with its social context. In detail, we illustrate the differences between attribute-value data and social media data, investigate if linked data can be exploited in a new feature selection framework by taking advantage of social science theories. We design and conduct experiments on datasets from real-world social media Web sites, and the empirical results demonstrate that the proposed framework can significantly improve the performance of feature selection. Further experiments are conducted to evaluate the effects of user--user and user--post relationships manifested in linked data on feature selection, and research issues for future work will be discussed.",
"title": ""
},
{
"docid": "171c903403e1b199a22c980d75217f14",
"text": "The optical microscope remains a widely-used tool for diagnosis and quantitation of malaria. An automated system that can match the performance of well-trained technicians is motivated by a shortage of trained microscopists. We have developed a computer vision system that leverages deep learning to identify malaria parasites in micrographs of standard, field-prepared thick blood films. The prototype application diagnoses P. falciparum with sufficient accuracy to achieve competency level 1 in the World Health Organization external competency assessment, and quantitates with sufficient accuracy for use in drug resistance studies. A suite of new computer vision techniques-global white balance, adaptive nonlinear grayscale, and a novel augmentation scheme-underpin the system's state-of-the-art performance. We outline a rich, global training set; describe the algorithm in detail; argue for patient-level performance metrics for the evaluation of automated diagnosis methods; and provide results for P. falciparum.",
"title": ""
},
{
"docid": "cd5a267c1dac92e68ba677c4a2e06422",
"text": "Person re-identification aims to robustly measure similarities between person images. The significant variation of person poses and viewing angles challenges for accurate person re-identification. The spatial layout and correspondences between query person images are vital information for tackling this problem but are ignored by most state-of-the-art methods. In this paper, we propose a novel Kronecker Product Matching module to match feature maps of different persons in an end-to-end trainable deep neural network. A novel feature soft warping scheme is designed for aligning the feature maps based on matching results, which is shown to be crucial for achieving superior accuracy. The multi-scale features based on hourglass-like networks and self residual attention are also exploited to further boost the re-identification performance. The proposed approach outperforms state-of-the-art methods on the Market-1501, CUHK03, and DukeMTMC datasets, which demonstrates the effectiveness and generalization ability of our proposed approach.",
"title": ""
},
{
"docid": "83b50f380f500bf6e140b3178431f0c6",
"text": "Leader election protocols are a fundamental building block for replicated distributed services. They ease the design of leader-based coordination protocols that tolerate failures. In partially synchronous systems, designing a leader election algorithm, that does not permit multiple leaders while the system is unstable, is a complex task. As a result many production systems use third-party distributed coordination services, such as ZooKeeper and Chubby, to provide a reliable leader election service. However, adding a third-party service such as ZooKeeper to a distributed system incurs additional operational costs and complexity. ZooKeeper instances must be kept running on at least three machines to ensure its high availability. In this paper, we present a novel leader election protocol using NewSQL databases for partially synchronous systems, that ensures at most one leader at any given time. The leader election protocol uses the database as distributed shared memory. Our work enables distributed systems that already use NewSQL databases to save the operational overhead of managing an additional third-party service for leader election. Our main contribution is the design, implementation and validation of a practical leader election algorithm, based on NewSQL databases, that has performance comparable to a leader election implementation using a state-of-the-art distributed coordination service, ZooKeeper.",
"title": ""
},
{
"docid": "a34a49a337cd0d198fe8bcc05f8a91ea",
"text": "In most real-world audio recordings, we encounter several types of audio events. In this paper, we develop a technique for detecting signature audio events, that is based on identifying patterns of occurrences of automatically learned atomic units of sound, which we call Acoustic Unit Descriptors or AUDs. Experiments show that the methodology works as well for detection of individual events and their boundaries in complex recordings.",
"title": ""
},
{
"docid": "948b157586c75674e75bd50b96162861",
"text": "We propose a database design methodology for NoSQL systems. The approach is based on NoAM (NoSQL Abstract Model), a novel abstrac t d ta model for NoSQL databases, which exploits the commonalities of various N SQL systems and is used to specify a system-independent representatio n of the application data. This intermediate representation can be then implemented in target NoSQL databases, taking into account their specific features. Ov rall, the methodology aims at supporting scalability, performance, and consisten cy, as needed by next-generation web applications.",
"title": ""
},
{
"docid": "5ed1a43c51bfca023764a0159449bc68",
"text": "Level Converters are key components of multi-voltage based systems-on-chips. Recently, a great deal of research has been focused on power dissipation reduction using various types of level converters in multi-voltage systems. These level converters include either level up conversion or level down conversion. In this paper we propose a unique level converter called universal level converter (ULC). This level converter is capable of four types of level converting functions, such as up conversion, down conversion, passing and blocking. The universal level converter is simulated in CADENCE using 90nm PTM technology model files. Three types of analysis such as power, parametric and load analysis are performed on the proposed level converter. The power analysis results prove that the proposed level converter has an average power reduction of approximately 87.2% compared to other existing level converters at different technology nodes. The parametric analysis and load analysis show that the proposed level converter provides a stable output for input voltages as low as 0.6V with a varying load from 1fF-200fF. The universal level converter works at dual voltages of 1.2V and 1.02V (85% of Vddh) with VTH value for NMOS as 0.339V and for PMOS as -0.339V. The ULC has an average power consumption of 27.1μW at a load",
"title": ""
},
{
"docid": "cdc3b46933db0c88f482ded1dcdff9e6",
"text": "Overvoltages in low voltage (LV) feeders with high penetration of photovoltaics (PV) are usually prevented by limiting the feeder's PV capacity to very conservative values, even if the critical periods rarely occur. This paper discusses the use of droop-based active power curtailment techniques for overvoltage prevention in radial LV feeders as a means for increasing the installed PV capacity and energy yield. Two schemes are proposed and tested in a typical 240-V/75-kVA Canadian suburban distribution feeder with 12 houses with roof-top PV systems. In the first scheme, all PV inverters have the same droop coefficients. In the second, the droop coefficients are different so as to share the total active power curtailed among all PV inverters/houses. Simulation results demonstrate the effectiveness of the proposed schemes and that the option of sharing the power curtailment among all customers comes at the cost of an overall higher amount of power curtailed.",
"title": ""
},
{
"docid": "9193aad006395bd3bd76cabf44012da5",
"text": "In recent years, there is growing evidence that plant-foods polyphenols, due to their biological properties, may be unique nutraceuticals and supplementary treatments for various aspects of type 2 diabetes mellitus. In this article we have reviewed the potential efficacies of polyphenols, including phenolic acids, flavonoids, stilbenes, lignans and polymeric lignans, on metabolic disorders and complications induced by diabetes. Based on several in vitro, animal models and some human studies, dietary plant polyphenols and polyphenol-rich products modulate carbohydrate and lipid metabolism, attenuate hyperglycemia, dyslipidemia and insulin resistance, improve adipose tissue metabolism, and alleviate oxidative stress and stress-sensitive signaling pathways and inflammatory processes. Polyphenolic compounds can also prevent the development of long-term diabetes complications including cardiovascular disease, neuropathy, nephropathy and retinopathy. Further investigations as human clinical studies are needed to obtain the optimum dose and duration of supplementation with polyphenolic compounds in diabetic patients.",
"title": ""
},
{
"docid": "6025fb8936761dcf3c6751545b430ec0",
"text": "Although many sentiment lexicons in different languages exist, most are not comprehensive. In a recent sentiment analysis application, we used a large Chinese sentiment lexicon and found that it missed a large number of sentiment words used in social media. This prompted us to make a new attempt to study sentiment lexicon expansion. This paper first formulates the problem as a PU learning problem. It then proposes a new PU learning method suitable for the problem based on a neural network. The results are further enhanced with a new dictionary lookup technique and a novel polarity classification algorithm. Experimental results show that the proposed approach greatly outperforms baseline methods.",
"title": ""
},
{
"docid": "d0bdce703addec1bc59e5ab842aedf79",
"text": "This paper presents some of the findings from a recent project that conducted a virtual ethnographic study of three formal courses in higher education that use ‘Web 2.0’or social technologies for learning and teaching. It describes the pedagogies adopted within these courses, and goes on to explore some key themes emerging from the research and relating to the pedagogical use of weblogs and wikis in particular. These themes relate primarily to the academy’s tendency to constrain and contain the possibly more radical effects of these new spaces. Despite this, the findings present a range of student and tutor perspectives which show that these technologies have significant potential as new collaborative, volatile and challenging environments for formal learning.",
"title": ""
},
{
"docid": "17d0da8dd05d5cfb79a5f4de4449fcdd",
"text": "PUBLISHING Thousands of scientists start year without journal access p.13 2017 SNEAK PEEK What the new year holds for science p.14 ECOLOGY What is causing the deaths of so many shorebirds? p.16 PHYSICS Quantum computers ready to leap out of the lab The race is on to turn scientific curiosities into working machines. A front runner in the pursuit of quantum computing uses single ions trapped in a vacuum. Q uantum computing has long seemed like one of those technologies that are 20 years away, and always will be. But 2017 could be the year that the field sheds its research-only image. Computing giants Google and Microsoft recently hired a host of leading lights, and have set challenging goals for this year. Their ambition reflects a broader transition taking place at start-ups and academic research labs alike: to move from pure science towards engineering. \" People are really building things, \" says Christopher Monroe, a physicist at the University of Maryland in College Park who co-founded the start-up IonQ in 2015. \" I've never seen anything like that. It's no longer just research. \" Google started working on a form of quantum computing that harnesses super-conductivity in 2014. It hopes this year, or shortly after, to perform a computation that is beyond even the most powerful 'classical' supercomputers — an elusive milestone known as quantum supremacy. Its rival, Microsoft, is betting on an intriguing but unproven concept, topological quantum computing, and hopes to perform a first demonstration of the technology. The quantum-computing start-up scene is also heating up. Monroe plans to begin hiring in earnest this year. Physicist Robert Schoelkopf at Yale University in New Haven, Connecticut, who co-founded the start-up Quantum Circuits, and former IBM applied physicist Chad Rigetti, who set up Rigetti in",
"title": ""
},
{
"docid": "4520cafacd4794ec942030252652ae7c",
"text": "While the NAND flash memory is widely used as the storage medium in modern sensor systems, the aggressive shrinking of process geometry and an increase in the number of bits stored in each memory cell will inevitably degrade the reliability of NAND flash memory. In particular, it’s critical to enhance metadata reliability, which occupies only a small portion of the storage space, but maintains the critical information of the file system and the address translations of the storage system. Metadata damage will cause the system to crash or a large amount of data to be lost. This paper presents Asymmetric Programming, a highly reliable metadata allocation strategy for MLC NAND flash memory storage systems. Our technique exploits for the first time the property of the multi-page architecture of MLC NAND flash memory to improve the reliability of metadata. The basic idea is to keep metadata in most significant bit (MSB) pages which are more reliable than least significant bit (LSB) pages. Thus, we can achieve relatively low bit error rates for metadata. Based on this idea, we propose two strategies to optimize address mapping and garbage collection. We have implemented Asymmetric Programming on a real hardware platform. The experimental results show that Asymmetric Programming can achieve a reduction in the number of page errors of up to 99.05% with the baseline error correction scheme. OPEN ACCESS Sensors 2014, 14 18852",
"title": ""
},
{
"docid": "ebbe58dcb5ca5374af503592e00956e3",
"text": "Our generation has seen the boom and ubiquitous advent of Internet connectivity. Adversaries have been exploiting this omnipresent connectivity as an opportunity to launch cyber attacks. As a consequence, researchers around the globe have devoted great attention to data mining and machine learning with emphasis on improving the accuracy of intrusion detection systems (IDS). In this paper, we present a few-shot deep learning approach for improved intrusion detection. We first trained a deep convolutional neural network (CNN) for intrusion detection. We then extracted outputs from different layers in the deep CNN and implemented a linear support vector machine (SVM) and 1-nearest neighbor (1-NN) classifier for few-shot intrusion detection. Few-shot learning is a recently developed strategy to handle situations where training samples for a certain class are limited. We applied our proposed method to two well-known datasets simulating intrusion in a military network: KDD 99 and NSL-KDD. These datasets are imbalanced, and some classes have far fewer training samples than others. Experimental results show that the proposed method achieved better performance than the state of the art on those two datasets.",
"title": ""
},
{
"docid": "20b00a2cc472dfec851f4aea42578a9e",
"text": "The self-regulatory strength model maintains that all acts of self-regulation, self-control, and choice result in a state of fatigue called ego-depletion. Self-determination theory differentiates between autonomous regulation and controlled regulation. Because making decisions represents one instance of self-regulation, the authors also differentiate between autonomous choice and controlled choice. Three experiments support the hypothesis that whereas conditions representing controlled choice would be ego-depleting, conditions that represented autonomous choice would not. In Experiment 3, the authors found significant mediation by perceived self-determination of the relation between the choice condition (autonomous vs. controlled) and ego-depletion as measured by performance.",
"title": ""
},
{
"docid": "96669cea810d2918f2d35875f87d45f2",
"text": "In this paper, a new probabilistic tagging method is presented which avoids problems that Markov Model based taggers face, when they have to estimate transition probabilities from sparse data. In this tagging method, transition probabilities are estimated using a decision tree. Based on this method, a part-of-speech tagger (called TreeTagger) has been implemented which achieves 96.36 % accuracy on Penn-Treebank data which is better than that of a trigram tagger (96.06 %) on the same data.",
"title": ""
}
] | scidocsrr |
861754719a5b8722c1e900ffcce1da5c | Fine-grained Opinion Mining with Recurrent Neural Networks and Word Embeddings | [
{
"docid": "69d65a994d5b5c412ee6b8a266cb9b31",
"text": "This paper describes our system used in the Aspect Based Sentiment Analysis Task 4 at the SemEval-2014. Our system consists of two components to address two of the subtasks respectively: a Conditional Random Field (CRF) based classifier for Aspect Term Extraction (ATE) and a linear classifier for Aspect Term Polarity Classification (ATP). For the ATE subtask, we implement a variety of lexicon, syntactic and semantic features, as well as cluster features induced from unlabeled data. Our system achieves state-of-the-art performances in ATE, ranking 1st (among 28 submissions) and 2rd (among 27 submissions) for the restaurant and laptop domain respectively.",
"title": ""
},
{
"docid": "03b3d8220753570a6b2f21916fe4f423",
"text": "Recent systems have been developed for sentiment classification, opinion recognition, and opinion analysis (e.g., detecting polarity and strength). We pursue another aspect of opinion analysis: identifying the sources of opinions, emotions, and sentiments. We view this problem as an information extraction task and adopt a hybrid approach that combines Conditional Random Fields (Lafferty et al., 2001) and a variation of AutoSlog (Riloff, 1996a). While CRFs model source identification as a sequence tagging task, AutoSlog learns extraction patterns. Our results show that the combination of these two methods performs better than either one alone. The resulting system identifies opinion sources with precision and recall using a head noun matching measure, and precision and recall using an overlap measure.",
"title": ""
},
{
"docid": "09df260d26638f84ec3bd309786a8080",
"text": "If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http://metaoptimize.com/projects/wordreprs/",
"title": ""
}
] | [
{
"docid": "7ccbb730f1ce8eca687875c632520545",
"text": "Increasing cost of the fertilizers with lesser nutrient use efficiency necessitates alternate means to fertilizers. Soil is a storehouse of nutrients and energy for living organisms under the soil-plant-microorganism system. These rhizospheric microorganisms are crucial components of sustainable agricultural ecosystems. They are involved in sustaining soil as well as crop productivity under organic matter decomposition, nutrient transformations, and biological nutrient cycling. The rhizospheric microorganisms regulate the nutrient flow in the soil through assimilating nutrients, producing biomass, and converting organically bound forms of nutrients. Soil microorganisms play a significant role in a number of chemical transformations of soils and thus, influence the availability of macro- and micronutrients. Use of plant growth-promoting microorganisms (PGPMs) helps in increasing yields in addition to conventional plant protection. The most important PGPMs are Azospirillum, Azotobacter, Bacillus subtilis, B. mucilaginosus, B. edaphicus, B. circulans, Paenibacillus spp., Acidithiobacillus ferrooxidans, Pseudomonas, Burkholderia, and potassium-, phosphorus-, and zinc-solubilizing microorganisms, or SMART microbes; these are eco-friendly and environmentally safe. The rhizosphere is the important area of soil influenced by plant roots. It is composed of huge microbial populations that are somehow different from the rest of the soil population, generally denominated as the “rhizosphere effect.” The rhizosphere is the small region of soil that is immediately near to the root surface and also affected by root exudates.",
"title": ""
},
{
"docid": "546ce79bcfa2c2c456036e864a7162f8",
"text": "The estimation of effort involved in developing a software product plays an important role in determining the success or failure of the product. Project managers require a reliable approach for software effort estimation. It is especially important during the early stage of the software development life cycle. An accurate software effort estimation is a major concern in current industries. In this paper, the main goal is to estimate the effort required to develop various software projects using the class point approach. Then optimization of the effort parameters is achieved using an adaptive regression based Multi-Layer Perceptron (ANN) technique to obtain better accuracy. Furthermore, a comparative analysis of software effort estimation using Multi-Layer Perceptron (ANN) and Radial Basis Function Network (RBFN) has been provided. By estimating the software projects accurately, we can have software with acceptable quality within budget and on planned schedules.",
"title": ""
},
{
"docid": "b8ea508a39c9ff83cd663f4a0d68c283",
"text": "For decades—even prior to its inception—AI has aroused both fear and excitement as humanity has contemplated creating machines like ourselves. Unfortunately, the misconception that “intelligent” artifacts should necessarily be human-like has largely blinded society to the fact that we have been achieving AI for some time. Although AI that surpasses human ability grabs headlines (think of Watson, Deep Mind, or alphaGo), AI has been a standard part of the industrial repertoire since at least the 1980s, with expert systems checking circuit boards and credit card transactions. Machine learning (ML) strategies for generating AI have also long been used, such as genetic algorithms for finding solutions to intractable computational problems like scheduling, and neural networks not only to model and understand human learning but also for basic industrial control, monitoring, and classification. In the 1990s, probabilistic and Bayesian methods revolutionized ML and opened the door to one of the most pervasive AI abilities now available: searching through massive troves of data. Innovations in AI and ML algorithms have extended our capacity to find information in texts, allowing us to search photographs as well as both recorded and live video and audio. We can translate, transcribe, read lips, read emotions (including lying), forge signatures and other handwriting, and forge video. Yet, the downside of these benefits is ever present. As we write this, allegations are circulating that the Standardizing Ethical Design for Artificial Intelligence and Autonomous Systems",
"title": ""
},
{
"docid": "457f10c4c5d5b748a4f35abd89feb519",
"text": "Document image binarization is an important step in the document image analysis and recognition pipeline. H-DIBCO 2014 is the International Document Image Binarization Competition which is dedicated to handwritten document images organized in conjunction with ICFHR 2014 conference. The objective of the contest is to identify current advances in handwritten document image binarization using meaningful evaluation performance measures. This paper reports on the contest details including the evaluation measures used as well as the performance of the 7 submitted methods along with a short description of each method.",
"title": ""
},
{
"docid": "c64b13db5a4c35861b06ec53c5c73946",
"text": "In this paper, we address the problem of searching for semantically similar images from a large database. We present a compact coding approach, supervised quantization. Our approach simultaneously learns feature selection that linearly transforms the database points into a low-dimensional discriminative subspace, and quantizes the data points in the transformed space. The optimization criterion is that the quantized points not only approximate the transformed points accurately, but also are semantically separable: the points belonging to a class lie in a cluster that is not overlapped with other clusters corresponding to other classes, which is formulated as a classification problem. The experiments on several standard datasets show the superiority of our approach over the state-of-the-art supervised hashing and unsupervised quantization algorithms.",
"title": ""
},
{
"docid": "106f80b025d0f48cb80718bc82573961",
"text": "The region-based Convolutional Neural Network (CNN) detectors such as Faster R-CNN or R-FCN have already shown promising results for object detection by combining the region proposal subnetwork and the classification subnetwork together. Although R-FCN has achieved higher detection speed while keeping the detection performance, the global structure information is ignored by the position-sensitive score maps. To fully explore the local and global properties, in this paper, we propose a novel fully convolutional network, named as CoupleNet, to couple the global structure with local parts for object detection. Specifically, the object proposals obtained by the Region Proposal Network (RPN) are fed into the coupling module which consists of two branches. One branch adopts the position-sensitive RoI (PSRoI) pooling to capture the local part information of the object, while the other employs the RoI pooling to encode the global and context information. Next, we design different coupling strategies and normalization ways to make full use of the complementary advantages between the global and local branches. Extensive experiments demonstrate the effectiveness of our approach. We achieve state-of-the-art results on all three challenging datasets, i.e. a mAP of 82.7% on VOC07, 80.4% on VOC12, and 34.4% on COCO. Code will be made publicly available.",
"title": ""
},
{
"docid": "f032d36e081d2b5a4b0408b8f9b77954",
"text": "BACKGROUND\nMalnutrition is still highly prevalent in developing countries. Schoolchildren may also be at high nutritional risk, not only under-five children. However, their nutritional status is poorly documented, particularly in urban areas. The paucity of information hinders the development of relevant nutrition programs for schoolchildren. The aim of this study carried out in Ouagadougou was to assess the nutritional status of schoolchildren attending public and private schools.\n\n\nMETHODS\nThe study was carried out to provide baseline data for the implementation and evaluation of the Nutrition Friendly School Initiative of WHO. Six intervention schools and six matched control schools were selected and a sample of 649 schoolchildren (48% boys) aged 7-14 years old from 8 public and 4 private schools were studied. Anthropometric and haemoglobin measurements, along with thyroid palpation, were performed. Serum retinol was measured in a random sub-sample of children (N = 173). WHO criteria were used to assess nutritional status. Chi square and independent t-test were used for proportions and mean comparisons between groups.\n\n\nRESULTS\nMean age of the children (48% boys) was 11.5 ± 1.2 years. Micronutrient malnutrition was highly prevalent, with 38.7% low serum retinol and 40.4% anaemia. The prevalence of stunting was 8.8% and that of thinness, 13.7%. The prevalence of anaemia (p = 0.001) and vitamin A deficiency (p < 0.001) was significantly higher in public than private schools. Goitre was not detected. Overweight/obesity was low (2.3%) and affected significantly more children in private schools (p = 0.009) and younger children (7-9 y) (p < 0.05). Thinness and stunting were significantly higher in peri-urban compared to urban schools (p < 0.05 and p = 0.004 respectively). 
Almost 15% of the children presented at least two nutritional deficiencies.\n\n\nCONCLUSION\nThis study shows that malnutrition and micronutrient deficiencies are also widely prevalent in schoolchildren in cities, and it underlines the need for nutrition interventions to target them.",
"title": ""
},
{
"docid": "a00201271997f398ec8e5eb4160fbe2e",
"text": "We present a hybrid algorithm for detection and tracking of text in natural scenes that goes beyond the full-detection approaches in terms of time performance optimization. A state-of-the-art scene text detection module based on Maximally Stable Extremal Regions (MSER) is used to detect text asynchronously, while on a separate thread detected text objects are tracked by MSER propagation. The cooperation of these two modules yields real time video processing at high frame rates even on low-resource devices.",
"title": ""
},
{
"docid": "5215c4302ac93191dca1e8993f2ceac9",
"text": "This paper presents the results of the WMT10 and MetricsMATR10 shared tasks, which included a translation task, a system combination task, and an evaluation task to investigate new MT metrics. We conducted a large-scale manual evaluation of 104 machine translation systems and 41 system combination entries. We used the ranking of these systems to measure how strongly automatic metrics correlate with human judgments of translation quality for 26 metrics. This year we also investigated increasing the number of human judgments by hiring non-expert annotators through Amazon’s Mechanical Turk.",
"title": ""
},
{
"docid": "a5e4199c16668f66656474f4eeb5d663",
"text": "Advances in information technology, particularly in the e-business arena, are enabling firms to rethink their supply chain strategies and explore new avenues for inter-organizational cooperation. However, an incomplete understanding of the value of information sharing and physical flow coordination hinder these efforts. This research attempts to help fill these gaps by surveying prior research in the area, categorized in terms of information sharing and flow coordination. We conclude by highlighting gaps in the current body of knowledge and identifying promising areas for future research. Subject Areas: e-Business, Inventory Management, Supply Chain Management, and Survey Research.",
"title": ""
},
{
"docid": "afbd52acb39600e8a0804f2140ebf4fc",
"text": "This paper presents the case study of a non-intrusive porting of a monolithic C++ library for real-time 3D hand tracking, to the domain of edge-based computation. Towards a proof of concept, the case study considers a pair of workstations, a computationally powerful and a computationally weak one. By wrapping the C++ library in a Java container and by capitalizing on a Java-based offloading infrastructure that supports both CPU and GPGPU computations, we are able to establish automatically the required server-client workflow that best addresses the resource allocation problem in the effort to execute from the weak workstation. As a result, the weak workstation can perform well at the task, despite lacking the sufficient hardware to do the required computations locally. This is achieved by offloading computations which rely on GPGPU, to the powerful workstation, across the network that connects them. We show the edge-based computation challenges associated with the information flow of the ported algorithm, demonstrate how we cope with them, and identify what needs to be improved for achieving even better performance.",
"title": ""
},
{
"docid": "18aa08888e4b2b412f154e47891b034d",
"text": "Roughly 1.3 billion people in developing countries still live without access to reliable electricity. As expanding access using current technologies will accelerate global climate change, there is a strong need for novel solutions that displace fossil fuels and are financially viable for developing regions. A novel DC microgrid solution that is geared at maximizing efficiency and reducing system installation cost is described in this paper. Relevant simulation and experimental results, as well as a proposal for undertaking field-testing of the technical and economic viability of the microgrid system are presented.",
"title": ""
},
{
"docid": "863202feb1410b177c6bb10ccc1fa43d",
"text": "Multimedia retrieval plays an indispensable role in big data utilization. Past efforts mainly focused on single-media retrieval. However, the requirements of users are highly flexible, such as retrieving the relevant audio clips with one query of image. So challenges stemming from the “media gap,” which means that representations of different media types are inconsistent, have attracted increasing attention. Cross-media retrieval is designed for the scenarios where the queries and retrieval results are of different media types. As a relatively new research topic, its concepts, methodologies, and benchmarks are still not clear in the literature. To address these issues, we review more than 100 references, give an overview including the concepts, methodologies, major challenges, and open issues, as well as build up the benchmarks, including data sets and experimental results. Researchers can directly adopt the benchmarks to promptly evaluate their proposed methods. This will help them to focus on algorithm design, rather than the time-consuming compared methods and results. It is noted that we have constructed a new data set XMedia, which is the first publicly available data set with up to five media types (text, image, video, audio, and 3-D model). We believe this overview will attract more researchers to focus on cross-media retrieval and be helpful to them.",
"title": ""
},
{
"docid": "b81b29c232fb9cb5dcb2dd7e31003d77",
"text": "Attendance and academic success are directly related in educational institutions. The continual absence of students in lecture, practical and tutorial is one of the major problems of decadence in the performance of academic. The authorized person needs to prohibit truancy for solving the problem. In existing system, the attendance is recorded by calling of the students’ name, signing on paper, using smart card and so on. These methods are easy to fake and to give proxy for the absence student. For solving inconvenience, fingerprint based attendance system with notification to guardian is proposed. The attendance is recorded using fingerprint module and stored it to the database via SD card. This system can calculate the percentage of attendance record monthly and store the attendance record in database for one year or more. In this system, attendance is recorded two times for one day and then it will also send alert message using GSM module if the attendance of students don’t have eight times for one week. By sending the alert message to the respective individuals every week, necessary actions can be done early. It can also reduce the cost of SMS charge and also have more attention for guardians. The main components of this system are Fingerprint module, Microcontroller, GSM module and SD card with SD card module. This system has been developed using Arduino IDE, Eclipse and MySQL Server.",
"title": ""
},
{
"docid": "c28b48557a4eda0d29200170435f2935",
"text": "An important role is reserved for nuclear imaging techniques in the imaging of neuroendocrine tumors (NETs). Somatostatin receptor scintigraphy (SRS) with (111)In-DTPA-octreotide is currently the most important tracer in the diagnosis, staging and selection for peptide receptor radionuclide therapy (PRRT). In the past decade, different positron-emitting tomography (PET) tracers have been developed. The largest group is the (68)Gallium-labeled somatostatin analogs ((68)Ga-SSA). Several studies have demonstrated their superiority compared to SRS in sensitivity and specificity. Furthermore, patient comfort and effective dose are favorable for (68)Ga-SSA. Other PET targets like β-[(11)C]-5-hydroxy-L-tryptophan ((11)C-5-HTP) and 6-(18)F-L-3,4-dihydroxyphenylalanine ((18)F-DOPA) were developed recently. For insulinomas, glucagon-like peptide-1 receptor imaging is a promising new technique. The evaluation of response after PRRT and other therapies is a challenge. Currently, the official follow-up is performed with radiological imaging techniques. The role of nuclear medicine may increase with the newest tracers for PET. In this review, the different nuclear imaging techniques and tracers for the imaging of NETs will be discussed.",
"title": ""
},
{
"docid": "da2f41ac808a5092eddf5edbcc12b94f",
"text": "The use of the social media sites are growing rapidly to interact with the communities and to share the ideas among others. It may happen that most of the people dislike the ideas of others person views and make the use of the offensive language in their posts. Due to these offensive terms, many people especially youth and teenagers try to adopt such language and spread over the social media sites which may significantly affect the others people innocent minds. As offensive terms increasingly use by the people in highly manner, it is difficult to find or classify such offensive terms in real day to day life. To overcome from these problem, the proposed system analyze the offensive language and can classify the offensive sentence on a particular topic discussion using the support vector machine (SVM) as supervised classification in the data mining. The proposed system also can find the potential user by means of whom the offensive language spread among others and define the comparative analysis of SVM with Naive Bayes technique.",
"title": ""
},
{
"docid": "58a2d35904f92d880ce40abbb2474873",
"text": "Recently, researchers in the artificial neural network field have focused their attention on connectionist models composed by several hidden layers. In fact, experimental results and heuristic considerations suggest that deep architectures are more suitable than shallow ones for modern applications, facing very complex problems, e.g., vision and human language understanding. However, the actual theoretical results supporting such a claim are still few and incomplete. In this paper, we propose a new approach to study how the depth of feedforward neural networks impacts on their ability in implementing high complexity functions. First, a new measure based on topological concepts is introduced, aimed at evaluating the complexity of the function implemented by a neural network, used for classification purposes. Then, deep and shallow neural architectures with common sigmoidal activation functions are compared, by deriving upper and lower bounds on their complexity, and studying how the complexity depends on the number of hidden units and the used activation function. The obtained results seem to support the idea that deep networks actually implements functions of higher complexity, so that they are able, with the same number of resources, to address more difficult problems.",
"title": ""
},
{
"docid": "e00295dc86476d1d350d11068439fe87",
"text": "A 10-bit LCD column driver, consisting of piecewise linear digital to analog converters (DACs), is proposed. Piecewise linear compensation is utilized to reduce the die area and to increase the effective color depth. The data conversion is carried out by a resistor string type DAC (R-DAC) and a charge sharing DAC, which are used for the most significant bit and least significant bit data conversions, respectively. Gamma correction voltages are applied to the R-DAC to fit the inverse of the liquid crystal transmittance-voltage characteristic. The gamma correction can also be digitally fine-tuned in the timing controller or column drivers. A prototype 10-bit LCD column driver implemented in a 0.35-mum CMOS technology demonstrates that the settling time is within 3 mus and the average die size per channel is 0.063 mm2, smaller than those of column drivers based exclusively on R-DACs.",
"title": ""
},
{
"docid": "c0281e28801214c6f40ca46443f65c25",
"text": "Smart homes have become increasingly popular for IoT products and services with a lot of promises for improving the quality of life of individuals. Nevertheless, the heterogeneous, dynamic, and Internet-connected nature of this environment adds new concerns as private data becomes accessible, often without the householders' awareness. This accessibility alongside with the rising risks of data security and privacy breaches, makes smart home security a critical topic that deserves scrutiny. In this paper, we present an overview of the privacy and security challenges directed towards the smart home domain. We also identify constraints, evaluate solutions, and discuss a number of challenges and research issues where further investigation is required.",
"title": ""
},
{
"docid": "6275c7fcf34e7f596c8943330071369a",
"text": "0740-7459/00/$10.00 © 2001 IEEE. While such techniques form the foundation for many contemporary software engineering practices, requirements analysis has to involve more than understanding and modeling the functions, data, and interfaces for a new system. In addition, the requirements engineer needs to explore alternatives and evaluate their feasibility and desirability with respect to business goals. For instance, suppose your task is to build a system to schedule meetings. First, you might want to explore whether the system should do most of the scheduling work or only record meetings. Then you might want to evaluate these requirements with respect to technical objectives (such as response time) and business objectives (such as meeting effectiveness, low costs, or system usability). Once you select an alternative to best meet overall objectives, you can further refine the meaning of terms such as “meeting,” “participant,” or “scheduling conflict.” You can also define the basic functions the system will support. The need to explore alternatives and evaluate them with respect to business goals has led to research on goal-oriented analysis.2,3 We argue here that goal-oriented analysis complements and strengthens traditional requirements analysis techniques by offering a means for capturing and evaluating alternative ways of meeting business goals. The remainder of this article details the five main steps that comprise goal-oriented analysis. These steps include goal analysis, softgoal analysis, softgoal correlation analysis, goal correlation analysis, and evaluation of alternatives.",
"title": ""
}
] | scidocsrr |
f19ff2d7314f21753f9d3d73491716a5 | Bringing Deep Learning at the Edge of Information-Centric Internet of Things | [
{
"docid": "2c4babb483ddd52c9f1333cbe71a3c78",
"text": "The proliferation of Internet of Things (IoT) and the success of rich cloud services have pushed the horizon of a new computing paradigm, edge computing, which calls for processing the data at the edge of the network. Edge computing has the potential to address the concerns of response time requirement, battery life constraint, bandwidth cost saving, as well as data safety and privacy. In this paper, we introduce the definition of edge computing, followed by several case studies, ranging from cloud offloading to smart home and city, as well as collaborative edge to materialize the concept of edge computing. Finally, we present several challenges and opportunities in the field of edge computing, and hope this paper will gain attention from the community and inspire more research in this direction.",
"title": ""
},
{
"docid": "08d1bc0a4e2caba4a399434f6600534c",
"text": "In view of evolving the Internet infrastructure, ICN is promoting a communication model that is fundamentally different from the traditional IP address-centric model. The ICN approach consists of the retrieval of content by (unique) names, regardless of origin server location (i.e., IP address), application, and distribution channel, thus enabling in-network caching/replication and content-based security. The expected benefits in terms of improved data dissemination efficiency and robustness in challenging communication scenarios indicate the high potential of ICN as an innovative networking paradigm in the IoT domain. IoT is a challenging environment, mainly due to the high number of heterogeneous and potentially constrained networked devices, and unique and heavy traffic patterns. The application of ICN principles in such a context opens new opportunities, while requiring careful design choices. This article critically discusses potential ways toward this goal by surveying the current literature after presenting several possible motivations for the introduction of ICN in the context of IoT. Major challenges and opportunities are also highlighted, serving as guidelines for progress beyond the state of the art in this timely and increasingly relevant topic.",
"title": ""
},
{
"docid": "1e4a86dcc05ff3d593a4bf7b88f8b23a",
"text": "Fog/edge computing has been proposed to be integrated with Internet of Things (IoT) to enable computing services on devices deployed at the network edge, aiming to improve the user’s experience and resilience of the services in case of failures. With the advantage of a distributed architecture and closeness to end-users, fog/edge computing can provide faster response and greater quality of service for IoT applications. Thus, fog/edge computing-based IoT becomes a promising infrastructure for IoT development. To develop fog/edge computing-based IoT infrastructure, the architecture, enabling techniques, and issues related to IoT should be investigated first, and then the integration of fog/edge computing and IoT should be explored. To this end, this paper conducts a comprehensive overview of IoT with respect to system architecture, enabling technologies, security and privacy issues, and presents the integration of fog/edge computing and IoT, and applications. Particularly, this paper first explores the relationship between cyber-physical systems and IoT, both of which play important roles in realizing an intelligent cyber-physical world. Then, existing architectures, enabling technologies, and security and privacy issues in IoT are presented to enhance the understanding of state-of-the-art IoT development. To investigate fog/edge computing-based IoT, this paper also investigates the relationship between IoT and fog/edge computing, and discusses issues in fog/edge computing-based IoT. Finally, several applications, including the smart grid, smart transportation, and smart cities, are presented to demonstrate how fog/edge computing-based IoT can be implemented in real-world applications.",
"title": ""
}
] | [
{
"docid": "55631b81d46fc3dcaad8375176cb1c68",
"text": "UNLABELLED\nThe need for long-term retention to prevent post-treatment tooth movement is now widely accepted by orthodontists. This may be achieved with removable retainers or permanent bonded retainers. This article aims to provide simple guidance for the dentist on how to maintain and repair both removable and fixed retainers.\n\n\nCLINICAL RELEVANCE\nThe general dental practitioner is more likely to review patients over time and needs to be aware of the need for long-term retention and how to maintain and repair the retainers.",
"title": ""
},
{
"docid": "8ae1ef032c0a949aa31b3ca8bc024cb5",
"text": "Measuring intellectual capital is on the agenda of most 21st century organisations. This paper takes a knowledge-based view of the firm and discusses the importance of measuring organizational knowledge assets. Knowledge assets underpin capabilities and core competencies of any organisation. Therefore, they play a key strategic role and need to be measured. This paper reviews the existing approaches for measuring knowledge based assets and then introduces the knowledge asset map which integrates existing approaches in order to achieve comprehensiveness. The paper then introduces the knowledge asset dashboard to clarify the important actor/infrastructure relationship, which elucidates the dynamic nature of these assets. Finally, the paper suggests to visualise the value pathways of knowledge assets before designing strategic key performance indicators which can then be used to test the assumed causal relationships. This will enable organisations to manage and report these key value drivers in today’s economy. Introduction In the last decade management literature has paid significant attention to the role of knowledge for global competitiveness in the 21st century. It is recognised as a durable and more sustainable strategic resource to acquire and maintain competitive advantages (Barney, 1991a; Drucker, 1988; Grant, 1991a). Today’s business world is characterised by phenomena such as e-business, globalisation, higher degrees of competitiveness, fast evolution of new technology, rapidly changing client demands, as well as changing economic and political structures. In this new context companies need to develop clearly defined strategies that will give them a competitive advantage (Porter, 2001; Barney, 1991a). For this, organisations have to understand which capabilities they need in order to gain and maintain this competitive advantage (Barney, 1991a; Prahalad and Hamel, 1990). Organizational capabilities are based on knowledge. Thus, knowledge is a resource that forms the foundation of the company’s capabilities. Capabilities combine to",
"title": ""
},
{
"docid": "c13cbc9d7b4098cb392ba8293b692a37",
"text": "This paper introduces the first stiffness controller for continuum robots. The control law is based on an accurate approximation of a continuum robot's coupled kinematic and static force model. To implement a desired tip stiffness, the controller drives the actuators to positions corresponding to a deflected robot configuration that produces the required tip force for the measured tip position. This approach provides several important advantages. First, it enables the use of robot deflection sensing as a means to both sense and control tip forces. Second, it enables stiffness control to be implemented by modification of existing continuum robot position controllers. The proposed controller is demonstrated experimentally in the context of a concentric tube robot. Results show that the stiffness controller achieves the desired stiffness in steady state, provides good dynamic performance, and exhibits stability during contact transitions.",
"title": ""
},
{
"docid": "cd224f035982a669dcd8eb0c086a1be0",
"text": "In this paper we integrate a humanoid robot with a powered wheelchair with the aim of lowering the cognitive requirements needed for powered mobility. We propose two roles for this companion: pointing out obstacles and giving directions. We show that children enjoyed driving with the humanoid companion by their side during a field-trial in an uncontrolled environment. Moreover, we present the results of a driving experiment for adults where the companion acted as a driving aid and conclude that participants preferred the humanoid companion to a simulated companion. Our results suggest that people will welcome a humanoid companion for their wheelchairs.",
"title": ""
},
{
"docid": "3ca057959a24245764953a6aa1b2ed84",
"text": "Distant supervision for relation extraction is an efficient method to scale relation extraction to very large corpora which contains thousands of relations. However, the existing approaches have flaws on selecting valid instances and lack of background knowledge about the entities. In this paper, we propose a sentence-level attention model to select the valid instances, which makes full use of the supervision information from knowledge bases. And we extract entity descriptions from Freebase and Wikipedia pages to supplement background knowledge for our task. The background knowledge not only provides more information for predicting relations, but also brings better entity representations for the attention module. We conduct three experiments on a widely used dataset and the experimental results show that our approach outperforms all the baseline systems significantly.",
"title": ""
},
{
"docid": "636be5d5a0cc7dc4ab1906548cb53b31",
"text": "Feature selection is one of the techniques in machine learning for selecting a subset of relevant features namely variables for the construction of models. The feature selection technique aims at removing the redundant or irrelevant features or features which are strongly correlated in the data without much loss of information. It is broadly used for making the model much easier to interpret and increase generalization by reducing the variance. Regression analysis plays a vital role in statistical modeling and in turn for performing machine learning tasks. The traditional procedures such as Ordinary Least Squares (OLS) regression, Stepwise regression and partial least squares regression are very sensitive to random errors. Many alternatives have been established in the literature during the past few decades such as Ridge regression and LASSO and its variants. This paper explores the features of the popular regression methods, OLS regression, ridge regression and the LASSO regression. The performance of these procedures has been studied in terms of model fitting and prediction accuracy using real data and simulated environment with the help of R package.",
"title": ""
},
{
"docid": "c15bc15643075d75e24d81b237ed3f4c",
"text": "User authentication is a crucial service in wireless sensor networks (WSNs) that is becoming increasingly common in WSNs because wireless sensor nodes are typically deployed in an unattended environment, leaving them open to possible hostile network attack. Because wireless sensor nodes are limited in computing power, data storage and communication capabilities, any user authentication protocol must be designed to operate efficiently in a resource constrained environment. In this paper, we review several proposed WSN user authentication protocols, with a detailed review of the M.L Das protocol and a cryptanalysis of Das' protocol that shows several security weaknesses. Furthermore, this paper proposes an ECC-based user authentication protocol that resolves these weaknesses. According to our analysis of security of the ECC-based protocol, it is suitable for applications with higher security requirements. Finally, we present a comparison of security, computation, and communication costs and performances for the proposed protocols. The ECC-based protocol is shown to be suitable for higher security WSNs.",
"title": ""
},
{
"docid": "f925550d3830944b8649266292eae3fd",
"text": "In the recent years antenna design appears as a mature field of research. It really is not the fact because as the technology grows with new ideas, fitting expectations in the antenna design are always coming up. A Ku-band patch antenna loaded with notches and slit has been designed and simulated using Ansoft HFSS 3D electromagnetic simulation tool. Multi-frequency band operation is obtained from the proposed microstrip antenna. The design was carried out using Glass PTFE as the substrate and copper as antenna material. The designed antennas resonate at 15GHz with return loss over 50dB & VSWR less than 1, on implementing different slots in the radiating patch multiple frequencies resonate at 12.2GHz & 15.00GHz (Return Loss -27.5, -37.73 respectively & VSWR 0.89, 0.24 respectively) and another resonate at 11.16 GHz, 15.64GHz & 17.73 GHz with return loss -18.99, -23.026, -18.156 dB respectively and VSWR 1.95, 1.22 & 2.1 respectively. All the above designed band are used in the satellite application for non-geostationary orbit (NGSO) and fixed-satellite services (FSS) providers to operate in various segments of the Ku-band.",
"title": ""
},
{
"docid": "c2816721fa6ccb0d676f7fdce3b880d4",
"text": "Due to the achievements in the Internet of Things (IoT) field, Smart Objects are often involved in business processes. However, the integration of IoT with Business Process Management (BPM) is far from mature: problems related to process compliance and Smart Objects configuration with respect to the process requirements have not been fully addressed yet; also, the interaction of Smart Objects with multiple business processes that belong to different stakeholders is still under investigation. My PhD thesis aims to fill this gap by extending the BPM lifecycle, with particular focus on the design and analysis phase, in order to explicitly support IoT and its requirements.",
"title": ""
},
{
"docid": "d4e5a5aa65017360db9a87590a728892",
"text": "This work presents a chaotic path planning generator which is used in autonomous mobile robots, in order to cover a terrain. The proposed generator is based on a nonlinear circuit, which shows chaotic behavior. The bit sequence, produced by the chaotic generator, is converted to a sequence of planned positions, which satisfies the requirements for unpredictability and fast scanning of the entire terrain. The nonlinear circuit and the trajectory-planner are described thoroughly. Simulation tests confirm that with the proposed path planning generator better results can be obtained with regard to previous works. © 2012 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "e15405f1c0fb52be154e79a2976fbb6d",
"text": "The generalized Poisson regression model has been used to model dispersed count data. It is a good competitor to the negative binomial regression model when the count data is over-dispersed. Zero-inflated Poisson and zero-inflated negative binomial regression models have been proposed for the situations where the data generating process results into too many zeros. In this paper, we propose a zero-inflated generalized Poisson (ZIGP) regression model to model domestic violence data with too many zeros. Estimation of the model parameters using the method of maximum likelihood is provided. A score test is presented to test whether the number of zeros is too large for the generalized Poisson model to adequately fit the domestic violence data.",
"title": ""
},
{
"docid": "c283e7b1133fe0898e5d953c751d6d85",
"text": "Fasting has been practiced for millennia, but, only recently, studies have shed light on its role in adaptive cellular responses that reduce oxidative damage and inflammation, optimize energy metabolism, and bolster cellular protection. In lower eukaryotes, chronic fasting extends longevity, in part, by reprogramming metabolic and stress resistance pathways. In rodents intermittent or periodic fasting protects against diabetes, cancers, heart disease, and neurodegeneration, while in humans it helps reduce obesity, hypertension, asthma, and rheumatoid arthritis. Thus, fasting has the potential to delay aging and help prevent and treat diseases while minimizing the side effects caused by chronic dietary interventions.",
"title": ""
},
{
"docid": "8adb07a99940383139f0d4ed32f68f7c",
"text": "The gene ASPM (abnormal spindle-like microcephaly associated) is a specific regulator of brain size, and its evolution in the lineage leading to Homo sapiens was driven by strong positive selection. Here, we show that one genetic variant of ASPM in humans arose merely about 5800 years ago and has since swept to high frequency under strong positive selection. These findings, especially the remarkably young age of the positively selected variant, suggest that the human brain is still undergoing rapid adaptive evolution.",
"title": ""
},
{
"docid": "f81723af1cb8bf52b1348fe1f4d91d90",
"text": "The backpropagation (BP) algorithm is often thought to be biologically implausible in the brain. One of the main reasons is that BP requires symmetric weight matrices in the feedforward and feedback pathways. To address this “weight transport problem” (Grossberg, 1987), two more biologically plausible algorithms, proposed by Liao et al. (2016) and Lillicrap et al. (2016), relax BP’s weight symmetry requirements and demonstrate comparable learning capabilities to that of BP on small datasets. However, a recent study by Bartunov et al. (2018) evaluate variants of target-propagation (TP) and feedback alignment (FA) on MNIST, CIFAR, and ImageNet datasets, and find that although many of the proposed algorithms perform well on MNIST and CIFAR, they perform significantly worse than BP on ImageNet. Here, we additionally evaluate the sign-symmetry algorithm (Liao et al., 2016), which differs from both BP and FA in that the feedback and feedforward weights share signs but not magnitudes. We examine the performance of sign-symmetry and feedback alignment on ImageNet and MS COCO datasets using different network architectures (ResNet-18 and AlexNet for ImageNet, RetinaNet for MS COCO). Surprisingly, networks trained with sign-symmetry can attain classification performance approaching that of BP-trained networks. These results complement the study by Bartunov et al. (2018), and establish a new benchmark for future biologically plausible learning algorithms on more difficult datasets and more complex architectures. This material is based upon work supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216.",
"title": ""
},
{
"docid": "2e6623aa13ca5a047d888612c9a8e22a",
"text": "We present a hydro-elastic actuator that has a linear spring intentionally placed in series between the hydraulic piston and actuator output. The spring strain is measured to get an accurate estimate of force. This measurement alone is used in PI feedback to control the force in the actuator. The spring allows for high force fidelity, good force control, minimum impedance, and large dynamic range. A third order linear actuator model is broken into two fundamental cases: fixed load – high force (forward transfer function), and free load – zero force (impedance). These two equations completely describe the linear characteristics of the actuator. This model is presented with dimensional analysis to allow for generalization. A prototype actuator that demonstrates force control and low impedance is also presented. Dynamic analysis of the prototype actuator correlates well with the linear mathematical model. This work done with hydraulics is an extension from previous work done with electro-mechanical actuators. Keywords— Series Elastic Actuator, Force Control, Hydraulic Force Control, Biomimetic Robots",
"title": ""
},
{
"docid": "cf5e440f064656488506d90285c7885d",
"text": "A key issue in delay tolerant networks (DTN) is to find the right node to store and relay messages. We consider messages annotated with the unique keywords describing themessage subject, and nodes also adds keywords to describe their mission interests, priority and their transient social relationship (TSR). To offset resource costs, an incentive mechanism is developed over transient social relationships which enrich enroute message content and motivate better semantically related nodes to carry and forward messages. The incentive mechanism ensures avoidance of congestion due to uncooperative or selfish behavior of nodes.",
"title": ""
},
{
"docid": "6c08b5b172d2d322734bab615b005ab4",
"text": "Inelastic collisions between the galactic cosmic rays (GCRs) and the interstellar medium (ISM) are responsible for producing essentially all of the light elements Li, Be, and B (LiBeB) observed in the cosmic rays. Previous calculations (e.g., [1]) have shown that GCR fragmentation can explain the bulk of the existing LiBeB abundance in the present day Galaxy. However, elemental abundances of LiBeB in old halo stars indicate inconsistencies with this explanation. We have used a simple leaky-box model to predict the cosmic-ray elemental and isotopic abundances of LiBeB in the present epoch. We conducted a survey of recent scientific literature on fragmentation cross sections and have calculated the amount of uncertainty they introduce into our model. The predicted particle intensities of this model were compared with high energy (E = 200-500 MeV/nucleon) cosmic-ray data from the Cosmic Ray Isotope Spectrometer (CRIS), which indicates fairly good agreement with absolute fluxes for Z ≥ 5 and relative isotopic abundances for all LiBeB species.",
"title": ""
},
{
"docid": "5cb8b8d4c228d0f75543ae1b4d5a0e5c",
"text": "Clustering is an important data mining task for exploration and visualization of different data types like news stories, scientific publications, weblogs, etc. Due to the evolving nature of these data, evolutionary clustering, also known as dynamic clustering, has recently emerged to cope with the challenges of mining temporally smooth clusters over time. A good evolutionary clustering algorithm should be able to fit the data well at each time epoch, and at the same time results in a smooth cluster evolution that provides the data analyst with a coherent and easily interpretable model. In this paper we introduce the temporal Dirichlet process mixture model (TDPM) as a framework for evolutionary clustering. TDPM is a generalization of the DPM framework for clustering that automatically grows the number of clusters with the data. In our framework, the data is divided into epochs; all data points inside the same epoch are assumed to be fully exchangeable, whereas the temporal order is maintained across epochs. Moreover, The number of clusters in each epoch is unbounded: the clusters can retain, die out or emerge over time, and the actual parameterization of each cluster can also evolve over time in a Markovian fashion. We give a detailed and intuitive construction of this framework using the recurrent Chinese restaurant process (RCRP) metaphor, as well as a Gibbs sampling algorithm to carry out posterior inference in order to determine the optimal cluster evolution. We demonstrate our model over simulated data by using it to build an infinite dynamic mixture of Gaussian factors, and over real dataset by using it to build a simple non-parametric dynamic clustering-topic model and apply it to analyze the NIPS12 document collection.",
"title": ""
},
{
"docid": "ab23f66295574368ccd8fc4e1b166ecc",
"text": "Although the educational level of the Portuguese population has improved in the last decades, the statistics keep Portugal at Europe’s tail end due to its high student failure rates. In particular, lack of success in the core classes of Mathematics and the Portuguese language is extremely serious. On the other hand, the fields of Business Intelligence (BI)/Data Mining (DM), which aim at extracting high-level knowledge from raw data, offer interesting automated tools that can aid the education domain. The present work intends to approach student achievement in secondary education using BI/DM techniques. Recent real-world data (e.g. student grades, demographic, social and school related features) was collected by using school reports and questionnaires. The two core classes (i.e. Mathematics and Portuguese) were modeled under binary/five-level classification and regression tasks. Also, four DM models (i.e. Decision Trees, Random Forest, Neural Networks and Support Vector Machines) and three input selections (e.g. with and without previous grades) were tested. The results show that a good predictive accuracy can be achieved, provided that the first and/or second school period grades are available. Although student achievement is highly influenced by past evaluations, an explanatory analysis has shown that there are also other relevant features (e.g. number of absences, parent’s job and education, alcohol consumption). As a direct outcome of this research, more efficient student prediction tools can be be developed, improving the quality of education and enhancing school resource management.",
"title": ""
},
{
"docid": "7bb17491cb10db67db09bc98aba71391",
"text": "This paper presents a constrained backpropagation (CPROP) methodology for solving nonlinear elliptic and parabolic partial differential equations (PDEs) adaptively, subject to changes in the PDE parameters or external forcing. Unlike existing methods based on penalty functions or Lagrange multipliers, CPROP solves the constrained optimization problem associated with training a neural network to approximate the PDE solution by means of direct elimination. As a result, CPROP reduces the dimensionality of the optimization problem, while satisfying the equality constraints associated with the boundary and initial conditions exactly, at every iteration of the algorithm. The effectiveness of this method is demonstrated through several examples, including nonlinear elliptic and parabolic PDEs with changing parameters and nonhomogeneous terms.",
"title": ""
}
] | scidocsrr |
8682983d0f8b0c24bec9756a7d875b17 | Relative localization and communication module for small-scale multi-robot systems | [
{
"docid": "1e6cec12054c46442819f9595d07ae09",
"text": "Most of the research in the field of robotics is focussed on solving the problem of Simultaneous Localization and Mapping (SLAM). In general the problem is solved using a single robot. In the article written by R. Grabowski, C. Paredis and P. Khosla, called “Heterogeneous Teams of Modular Robots for Mapping and Exploration” a novel localization method is presented based on multiple robots.[Grabowski, 2000] For this purpose the relative distance between the different robots is calculated. These measurements, together with the positions estimated using dead reckoning, are used to determine the most likely new positions of the agents. Knowing the positions is essential when pursuing accurate (team) mapping capabilities. The proposed method makes it possible for a heterogeneous team of modular centimeter-scale robots to collaborate and map unexplored environments.",
"title": ""
}
] | [
{
"docid": "5e3575b45ffaeb2587d7e6531609bd1c",
"text": "These last years, several new home automation boxes appeared on the market, the new radio-based protocols facilitating their deployment with respect to previously wired solutions. Coupled with the wider availability of connected objects, these protocols have allowed new users to set up home automation systems by themselves. In this paper, we relate an in situ observational study of these builders in order to understand why and how the smart habitats were developed and used. We led 10 semi-structured interviews in households composed of at least 2 adults and equipped for at least 1 year, and 47 home automation builders answered an online questionnaire at the end of the study. Our study confirms, specifies and exhibits additional insights about usages and means of end-user development in the context of home automation.",
"title": ""
},
{
"docid": "fa05d004df469e8f83fa4fdee9909a6f",
"text": "Accurate velocity estimation is an important basis for robot control, but especially challenging for highly elastically driven robots. These robots show large swing or oscillation effects if they are not damped appropriately during the performed motion. In this letter, we consider an ultralightweight tendon-driven series elastic robot arm equipped with low-resolution joint position encoders. We propose an adaptive Kalman filter for velocity estimation that is suitable for these kinds of robots with a large range of possible velocities and oscillation frequencies. Based on an analysis of the parameter characteristics of the measurement noise variance, an update rule based on the filter position error is developed that is easy to adjust for use with different sensors. Evaluation of the filter both in simulation and in robot experiments shows a smooth and accurate performance, well suited for control purposes.",
"title": ""
},
{
"docid": "d52bfde050e6535645c324e7006a50e7",
"text": "Modern machine learning algorithms are increasingly computationally demanding, requiring specialized hardware and distributed computation to achieve high performance in a reasonable time frame. Many hyperparameter search algorithms have been proposed for improving the efficiency of model selection, however their adaptation to the distributed compute environment is often ad-hoc. We propose Tune, a unified framework for model selection and training that provides a narrow-waist interface between training scripts and search algorithms. We show that this interface meets the requirements for a broad range of hyperparameter search algorithms, allows straightforward scaling of search to large clusters, and simplifies algorithm implementation. We demonstrate the implementation of several state-of-the-art hyperparameter search algorithms in Tune. Tune is available at http://ray.readthedocs.io/en/latest/tune.html.",
"title": ""
},
{
"docid": "baefc6e7e7968651f3e36acfd62b094d",
"text": "The task of paraphrasing is inherently familiar to speakers of all languages. Moreover, the task of automatically generating or extracting semantic equivalences for the various units of language—words, phrases, and sentences—is an important part of natural language processing (NLP) and is being increasingly employed to improve the performance of several NLP applications. In this article, we attempt to conduct a comprehensive and application-independent survey of data-driven phrasal and sentential paraphrase generation methods, while also conveying an appreciation for the importance and potential use of paraphrases in the field of NLP research. Recent work done in manual and automatic construction of paraphrase corpora is also examined. We also discuss the strategies used for evaluating paraphrase generation techniques and briefly explore some future trends in paraphrase generation.",
"title": ""
},
{
"docid": "c7c63f08639660f935744309350ab1e0",
"text": "A composite of graphene oxide supported by needle-like MnO(2) nanocrystals (GO-MnO(2) nanocomposites) has been fabricated through a simple soft chemical route in a water-isopropyl alcohol system. The formation mechanism of these intriguing nanocomposites investigated by transmission electron microscopy and Raman and ultraviolet-visible absorption spectroscopy is proposed as intercalation and adsorption of manganese ions onto the GO sheets, followed by the nucleation and growth of the crystal species in a double solvent system via dissolution-crystallization and oriented attachment mechanisms, which in turn results in the exfoliation of GO sheets. Interestingly, it was found that the electrochemical performance of as-prepared nanocomposites could be enhanced by the chemical interaction between GO and MnO(2). This method provides a facile and straightforward approach to deposit MnO(2) nanoparticles onto the graphene oxide sheets (single layer of graphite oxide) and may be readily extended to the preparation of other classes of hybrids based on GO sheets for technological applications.",
"title": ""
},
{
"docid": "b5bb280c7ce802143a86b9261767d9a6",
"text": "Existing person re-identification (re-id) benchmarks and algorithms mainly focus on matching cropped pedestrian images between queries and candidates. However, it is different from real-world scenarios where the annotations of pedestrian bounding boxes are unavailable and the target person needs to be found from whole images. To close the gap, we investigate how to localize and match query persons from the scene images without relying on the annotations of candidate boxes. Instead of breaking it down into two separate tasks—pedestrian detection and person re-id, we propose an end-to-end deep learning framework to jointly handle both tasks. A random sampling softmax loss is proposed to effectively train the model under the supervision of sparse and unbalanced labels. On the other hand, existing benchmarks are small in scale and the samples are collected from a few fixed camera views with low scene diversities. To address this issue, we collect a largescale and scene-diversified person search dataset, which contains 18,184 images, 8,432 persons, and 99,809 annotated bounding boxes1. We evaluate our approach and other baselines on the proposed dataset, and study the influence of various factors. Experiments show that our method achieves the best result.",
"title": ""
},
{
"docid": "0195e112c19f512b7de6a7f00e9f1099",
"text": "Medication-related osteonecrosis of the jaw (MRONJ) is a severe adverse drug reaction, consisting of progressive bone destruction in the maxillofacial region of patients. ONJ can be caused by two pharmacological agents: Antiresorptive (including bisphosphonates (BPs) and receptor activator of nuclear factor kappa-B ligand inhibitors) and antiangiogenic. MRONJ pathophysiology is not completely elucidated. There are several suggested hypothesis that could explain its unique localization to the jaws: Inflammation or infection, microtrauma, altered bone remodeling or over suppression of bone resorption, angiogenesis inhibition, soft tissue BPs toxicity, peculiar biofilm of the oral cavity, terminal vascularization of the mandible, suppression of immunity, or Vitamin D deficiency. Dental screening and adequate treatment are fundamental to reduce the risk of osteonecrosis in patients under antiresorptive or antiangiogenic therapy, or before initiating the administration. The treatment of MRONJ is generally difficult and the optimal therapy strategy is still to be established. For this reason, prevention is even more important. It is suggested that a multidisciplinary team approach including a dentist, an oncologist, and a maxillofacial surgeon to evaluate and decide the best therapy for the patient. The choice between a conservative treatment and surgery is not easy, and it should be made on a case by case basis. However, the initial approach should be as conservative as possible. The most important goals of treatment for patients with established MRONJ are primarily the control of infection, bone necrosis progression, and pain. The aim of this paper is to represent the current knowledge about MRONJ, its preventive measures and management strategies.",
"title": ""
},
{
"docid": "799bc245ecfabf59416432ab62fe9320",
"text": "This study examines resolution skills in phishing email detection, defined as the abilities of individuals to discern correct judgments from incorrect judgments in probabilistic decisionmaking. An illustration of the resolution skills is provided. A number of antecedents to resolution skills in phishing email detection, including familiarity with the sender, familiarity with the email, online transaction experience, prior victimization of phishing attack, perceived selfefficacy, time to judgment, and variability of time in judgments, are examined. Implications of the study are further discussed.",
"title": ""
},
{
"docid": "3e142a338a98e3a3c9a65fea07473cf8",
"text": "In this paper we develop a new family of Ordered Weighted Averaging (OWA) operators. Weight vector is obtained from a desired orness of the operator. Using Faulhaber’s formulas we obtain direct and simple expressions for the weight vector without any iteration loop. With the exception of one weight, the remaining follow a straight line relation. As a result, a fast and robust algorithm is developed. The resulting weight vector is suboptimal according with the Maximum Entropy criterion, but it is very close to the optimal. Comparisons are done with other procedures.",
"title": ""
},
{
"docid": "122ed18a623510052664996c7ef4b4bb",
"text": "A number of sensor applications in recent years collect data which can be directly associated with human interactions. Some examples of such applications include GPS applications on mobile devices, accelerometers, or location sensors designed to track human and vehicular traffic. Such data lends itself to a variety of rich applications in which one can use the sensor data in order to model the underlying relationships and interactions. This requires the development of trajectory mining techniques, which can mine the GPS data for interesting social patterns. It also leads to a number of challenges, since such data may often be private, and it is important to be able to perform the mining process without violating the privacy of the users. Given the open nature of the information contributed by users in social sensing applications, this also leads to issues of trust in making inferences from the underlying data. In this chapter, we provide a broad survey of the work in this important and rapidly emerging field. We also discuss the key problems which arise in the context of this important field and the corresponding",
"title": ""
},
{
"docid": "914f41b9f3c0d74f888c7dd83e226468",
"text": "We present a new algorithm for inferring the home location of Twitter users at different granularities, including city, state, time zone, or geographic region, using the content of users’ tweets and their tweeting behavior. Unlike existing approaches, our algorithm uses an ensemble of statistical and heuristic classifiers to predict locations and makes use of a geographic gazetteer dictionary to identify place-name entities. We find that a hierarchical classification approach, where time zone, state, or geographic region is predicted first and city is predicted next, can improve prediction accuracy. We have also analyzed movement variations of Twitter users, built a classifier to predict whether a user was travelling in a certain period of time, and use that to further improve the location detection accuracy. Experimental evidence suggests that our algorithm works well in practice and outperforms the best existing algorithms for predicting the home location of Twitter users.",
"title": ""
},
{
"docid": "6db790d4d765b682fab6270c5930bead",
"text": "Geophysical applications of radar interferometry to measure changes in the Earth's surface have exploded in the early 1990s. This new geodetic technique calculates the interference pattern caused by the difference in phase between two images acquired by a spaceborne synthetic aperture radar at two distinct times. The resulting interferogram is a contour map of the change in distance between the ground and the radar instrument. These maps provide an unsurpassed spatial sampling density (---100 pixels km-2), a competitive precision (---1 cm), and a useful observation cadence (1 pass month-•). They record movements in the crust, perturbations in the atmosphere, dielectric modifications in the soil, and relief in the topography. They are also sensitive to technical effects, such as relative variations in the radar's trajectory or variations in its frequency standard. We describe how all these phenomena contribute to an interferogram. Then a practical summary explains the techniques for calculating and manipulating interferograms from various radar instruments, including the four satellites currently in orbit: ERS-1, ERS-2, JERS-1, and RADARSAT. The next chapter suggests some guidelines for interpreting an interferogram as a geophysical measurement: respecting the limits of the technique, assessing its uncertainty, recognizing artifacts, and discriminating different types of signal. We then review the geophysical applications published to date, most of which study deformation related to earthquakes, volcanoes, and glaciers using ERS-1 data. We also show examples of monitoring natural hazards and environmental alterations related to landslides, subsidence, and agriculture. In addition, we consider subtler geophysical signals such as postseismic relaxation, tidal loading of coastal areas, and interseismic strain accumulation. We conclude with our perspectives on the future of radar interferometry. The objective of the review is for the reader to develop the physical understanding necessary to calculate an interferogram and the geophysical intuition necessary to interpret it.",
"title": ""
},
{
"docid": "03dcb05a6aa763b6b0a5cdc58ddb81d8",
"text": "In this paper, a phase-shifted dual H-bridge converter, which can solve the drawbacks of existing phase-shifted full-bridge converters such as narrow zero-voltage-switching (ZVS) range, large circulating current, large duty-cycle loss, and serious secondary-voltage overshoot and oscillation, is analyzed and evaluated. The proposed topology is composed of two symmetric half-bridge inverters that are placed in parallel on the primary side and are driven in a phase-shifting manner to regulate the output voltage. At the rectifier stage, a center-tap-type rectifier with two additional low-current-rated diodes is employed. This structure allows the proposed converter to have the advantages of a wide ZVS range, no problems related to duty-cycle loss, no circulating current, and the reduction of secondary-voltage oscillation and overshoot. Moreover, the output filter's size becomes smaller compared to the conventional phase-shift full-bridge converters. This paper describes the operation principle of the proposed converter and the analysis and design consideration in depth. A 1-kW 320-385-V input 50-V output laboratory prototype operating at a 100-kHz switching frequency is designed, built, and tested to verify the effectiveness of the presented converter.",
"title": ""
},
{
"docid": "39fc05dfc0faeb47728b31b6053c040a",
"text": "Attempted and completed self-enucleation, or removal of one's own eyes, is a rare but devastating form of self-mutilation behavior. It is often associated with psychiatric disorders, particularly schizophrenia, substance induced psychosis, and bipolar disorder. We report a case of a patient with a history of bipolar disorder who gouged his eyes bilaterally as an attempt to self-enucleate himself. On presentation, the patient was manic with both psychotic features of hyperreligous delusions and command auditory hallucinations of God telling him to take his eyes out. On presentation, the patient had no light perception vision in both eyes and his exam displayed severe proptosis, extensive conjunctival lacerations, and visibly avulsed extraocular muscles on the right side. An emergency computed tomography scan of the orbits revealed small and irregular globes, air within the orbits, and intraocular hemorrhage. He was taken to the operating room for surgical repair of his injuries. Attempted and completed self-enucleation is most commonly associated with schizophrenia and substance induced psychosis, but can also present in patients with bipolar disorder. Other less commonly associated disorders include obsessive-compulsive disorder, depression, mental retardation, neurosyphilis, Lesch-Nyhan syndrome, and structural brain lesions.",
"title": ""
},
{
"docid": "b17f5cfea81608e5034121113dbc8de4",
"text": "Every question asked by a therapist may be seen to embody some intent and to arise from certain assumptions. Many questions are intended to orient the therapist to the client's situation and experiences; others are asked primarily to provoke therapeutic change. Some questions are based on lineal assumptions about the phenomena being addressed; others are based on circular assumptions. The differences among these questions are not trivial. They tend to have dissimilar effects. This article explores these issues and offers a framework for distinguishing four major groups of questions. The framework may be used by therapists to guide their decision making about what kinds of questions to ask, and by researchers to study different interviewing styles.",
"title": ""
},
{
"docid": "a520bf66f1b54a7444f2cbe3f2da8000",
"text": "In this work we study the problem of Intrusion Detection is sensor networks and we propose a lightweight scheme that can be applied to such networks. Its basic characteristic is that nodes monitor their neighborhood and collaborate with their nearest neighbors to bring the network back to its normal operational condition. We emphasize in a distributed approach in which, even though nodes don’t have a global view, they can still detect an intrusion and produce an alert. We apply our design principles for the blackhole and selective forwarding attacks by defining appropriate rules that characterize malicious behavior. We also experimentally evaluate our scheme to demonstrate its effectiveness in detecting the afore-mentioned attacks.",
"title": ""
},
{
"docid": "b206a5f5459924381ef6c46f692c7052",
"text": "The Konstanz Information Miner is a modular environment, which enables easy visual assembly and interactive execution of a data pipeline. It is designed as a teaching, research and collaboration platform, which enables simple integration of new algorithms and tools as well as data manipulation or visualization methods in the form of new modules or nodes. In this paper we describe some of the design aspects of the underlying architecture, briey sketch how new nodes can be incorporated, and highlight some of the new features of version 2.0.",
"title": ""
},
{
"docid": "79b73417f1f09e6487ea0c9ead28098b",
"text": "The internet connectivity of client software (e.g., apps running on phones and PCs), web sites, and online services provide an unprecedented opportunity to evaluate ideas quickly using controlled experiments, also called A/B tests, split tests, randomized experiments, control/treatment tests, and online field experiments. Unlike most data mining techniques for finding correlational patterns, controlled experiments allow establishing a causal relationship with high probability. Experimenters can utilize the Scientific Method to form a hypothesis of the form “If a specific change is introduced, will it improve key metrics?” and evaluate it with real users. The theory of a controlled experiment dates back to Sir Ronald A. Fisher’s experiments at the Rothamsted Agricultural Experimental Station in England in the 1920s, and the topic of offline experiments is well developed in Statistics (Box 2005). Online Controlled Experiments started to be used in the late 1990s with the growth of the Internet. Today, many large sites, including Amazon, Bing, Facebook, Google, LinkedIn, and Yahoo! run thousands to tens of thousands of experiments each year testing user interface (UI) changes, enhancements to algorithms (search, ads, personalization, recommendation, etc.), changes to apps, content management system, etc. Online controlled experiments are now considered an indispensable tool, and their use is growing for startups and smaller websites. Controlled experiments are especially useful in combination with Agile software development (Martin 2008, Rubin 2012), Steve Blank’s Customer Development process (Blank 2005), and MVPs (Minimum Viable Products) popularized by Eric Ries’s Lean Startup (Ries 2011). Motivation and Background Many good resources are available with motivation and explanations about online controlled experiments (Siroker and Koomen 2013, Goward 2012, McFarland 2012, Schrage 2014, Kohavi, Longbotham and Sommerfield, et al. 2009, Kohavi, Deng and Longbotham, et al. 2014, Kohavi, Deng and Frasca, et al. 2013).",
"title": ""
},
{
"docid": "c27e6b7be1a5d00632bbbea64b2516ad",
"text": "Block diagonalization (BD) is a well-known precoding method in multiuser multi-input multi-output (MIMO) broadcast channels. This scheme can be considered as a extension of the zero-forcing (ZF) channel inversion to the case where each receiver is equipped with multiple antennas. One of the limitation of the BD is that the sum rate does not grow linearly with the number of users and transmit antennas at low and medium signal-to-noise ratio regime, since the complete suppression of multi-user interference is achieved at the expense of noise enhancement. Also it performs poorly under imperfect channel state information. In this paper, we propose a generalized minimum mean-squared error (MMSE) channel inversion algorithm for users with multiple antennas to overcome the drawbacks of the BD for multiuser MIMO systems. We first introduce a generalized ZF channel inversion algorithm as a new approach of the conventional BD. Applying this idea to the MMSE channel inversion for identifying orthonormal basis vectors of the precoder, and employing the MMSE criterion for finding its combining matrix, the proposed scheme increases the signal-to-interference-plus-noise ratio at each user's receiver. Simulation results confirm that the proposed scheme exhibits a linear growth of the sum rate, as opposed to the BD scheme. For block fading channels with four transmit antennas, the proposed scheme provides a 3 dB gain over the conventional BD scheme at 1% frame error rate. Also, we present a modified precoding method for systems with channel estimation errors and show that the proposed algorithm is robust to channel estimation errors.",
"title": ""
},
{
"docid": "9200498e7ef691b83bf804d4c5581ba2",
"text": "Mobile computer-vision technology will soon become as ubiquitous as touch interfaces.",
"title": ""
}
] | scidocsrr |
71da5b2e542c147f90c0ceaa1a557ac5 | Features for Masking-Based Monaural Speech Separation in Reverberant Conditions | [
{
"docid": "44c9de5fbaac78125277a9995890b43c",
"text": "In the real world, speech is usually distorted by both reverberation and background noise. In such conditions, speech intelligibility is degraded substantially, especially for hearing-impaired (HI) listeners. As a consequence, it is essential to enhance speech in the noisy and reverberant environment. Recently, deep neural networks have been introduced to learn a spectral mapping to enhance corrupted speech, and shown significant improvements in objective metrics and automatic speech recognition score. However, listening tests have not yet shown any speech intelligibility benefit. In this paper, we propose to enhance the noisy and reverberant speech by learning a mapping to reverberant target speech rather than anechoic target speech. A preliminary listening test was conducted, and the results show that the proposed algorithm is able to improve speech intelligibility of HI listeners in some conditions. Moreover, we develop a masking-based method for denoising and compare it with the spectral mapping method. Evaluation results show that the masking-based method outperforms the mapping-based method.",
"title": ""
}
] | [
{
"docid": "75fcc3987407274148485394acf8856b",
"text": "Here we critically review studies that used electroencephalography (EEG) or event-related potential (ERP) indices as a biomarker of Alzheimer's disease. In the first part we overview studies that relied on visual inspection of EEG traces and spectral characteristics of EEG. Second, we survey analysis methods motivated by dynamical systems theory (DST) as well as more recent network connectivity approaches. In the third part we review studies of sleep. Next, we compare the utility of early and late ERP components in dementia research. In the section on mismatch negativity (MMN) studies we summarize their results and limitations and outline the emerging field of computational neurology. In the following we overview the use of EEG in the differential diagnosis of the most common neurocognitive disorders. Finally, we provide a summary of the state of the field and conclude that several promising EEG/ERP indices of synaptic neurotransmission are worth considering as potential biomarkers. Furthermore, we highlight some practical issues and discuss future challenges as well.",
"title": ""
},
{
"docid": "eb12e9e10d379fcbc156e94c3b447ce1",
"text": "Control-Flow Integrity (CFI) is an effective approach to mitigating control-flow hijacking attacks. Conventional CFI techniques statically extract a control-flow graph (CFG) from a program and instrument the program to enforce that CFG. The statically generated CFG includes all edges for all possible inputs; however, for a concrete input, the CFG may include many unnecessary edges.\n We present Per-Input Control-Flow Integrity (PICFI), which is a new CFI technique that can enforce a CFG computed for each concrete input. PICFI starts executing a program with the empty CFG and lets the program itself lazily add edges to the enforced CFG if such edges are required for the concrete input. The edge addition is performed by PICFI-inserted instrumentation code. To prevent attackers from arbitrarily adding edges, PICFI uses a statically computed all-input CFG to constrain what edges can be added at runtime. To minimize performance overhead, operations for adding edges are designed to be idempotent, so they can be patched to no-ops after their first execution. As our evaluation shows, PICFI provides better security than conventional fine-grained CFI with comparable performance overhead.",
"title": ""
},
{
"docid": "792cb4f62ad83e0ee0c94b60626103b9",
"text": "Microservices have become a popular pattern for deploying scale-out application logic and are used at companies like Netflix, IBM, and Google. An advantage of using microservices is their loose coupling, which leads to agile and rapid evolution, and continuous re-deployment. However, developers are tasked with managing this evolution and largely do so manually by continuously collecting and evaluating low-level service behaviors. This is tedious, error-prone, and slow. We argue for an approach based on service evolution modeling in which we combine static and dynamic information to generate an accurate representation of the evolving microservice-based system. We discuss how our approach can help engineers manage service upgrades, architectural evolution, and changing deployment trade-offs.",
"title": ""
},
{
"docid": "8f601e751650b56be81b069c42089640",
"text": "Inspired by the success of self attention mechanism and Transformer architecture in sequence transduction and image generation applications, we propose novel self attention-based architectures to improve the performance of adversarial latent codebased schemes in text generation. Adversarial latent code-based text generation has recently gained a lot of attention due to its promising results. In this paper, we take a step to fortify the architectures used in these setups, specifically AAE and ARAE. We benchmark two latent code-based methods (AAE and ARAE) designed based on adversarial setups. In our experiments, the Google sentence compression dataset is utilized to compare our method with these methods using various objective and subjective measures. The experiments demonstrate the proposed (self) attention-based models outperform the state-of-the-art in adversarial code-based text generation.",
"title": ""
},
{
"docid": "59e3e0099e215000b34e32d90b0bd650",
"text": "We present a method for learning discriminative filters using a shallow Convolutional Neural Network (CNN). We encode rotation invariance directly in the model by tying the weights of groups of filters to several rotated versions of the canonical filter in the group. These filters can be used to extract rotation invariant features well-suited for image classification. We test this learning procedure on a texture classification benchmark, where the orientations of the training images differ from those of the test images. We obtain results comparable to the state-of-the-art. Compared to standard shallow CNNs, the proposed method obtains higher classification performance while reducing by an order of magnitude the number of parameters to be learned.",
"title": ""
},
{
"docid": "a25041f4b95b68d2b8b9356d2f383b69",
"text": "The authors review evidence that self-control may consume a limited resource. Exerting self-control may consume self-control strength, reducing the amount of strength available for subsequent self-control efforts. Coping with stress, regulating negative affect, and resisting temptations require self-control, and after such self-control efforts, subsequent attempts at self-control are more likely to fail. Continuous self-control efforts, such as vigilance, also degrade over time. These decrements in self-control are probably not due to negative moods or learned helplessness produced by the initial self-control attempt. These decrements appear to be specific to behaviors that involve self-control; behaviors that do not require self-control neither consume nor require self-control strength. It is concluded that the executive component of the self--in particular, inhibition--relies on a limited, consumable resource.",
"title": ""
},
{
"docid": "2e1a6dfb1208bc09a227c7e16ffc7b4f",
"text": "Cannabis sativa L. (Cannabaceae) is an important medicinal plant well known for its pharmacologic and therapeutic potency. Because of allogamous nature of this species, it is difficult to maintain its potency and efficacy if grown from the seeds. Therefore, chemical profile-based screening, selection of high yielding elite clones and their propagation using biotechnological tools is the most suitable way to maintain their genetic lines. In this regard, we report a simple and efficient method for the in vitro propagation of a screened and selected high yielding drug type variety of Cannabis sativa, MX-1 using synthetic seed technology. Axillary buds of Cannabis sativa isolated from aseptic multiple shoot cultures were successfully encapsulated in calcium alginate beads. The best gel complexation was achieved using 5 % sodium alginate with 50 mM CaCl2.2H2O. Regrowth and conversion after encapsulation was evaluated both under in vitro and in vivo conditions on different planting substrates. The addition of antimicrobial substance — Plant Preservative Mixture (PPM) had a positive effect on overall plantlet development. Encapsulated explants exhibited the best regrowth and conversion frequency on Murashige and Skoog medium supplemented with thidiazuron (TDZ 0.5 μM) and PPM (0.075 %) under in vitro conditions. Under in vivo conditions, 100 % conversion of encapsulated explants was obtained on 1:1 potting mix- fertilome with coco natural growth medium, moistened with full strength MS medium without TDZ, supplemented with 3 % sucrose and 0.5 % PPM. Plantlets regenerated from the encapsulated explants were hardened off and successfully transferred to the soil. These plants are selected to be used in mass cultivation for the production of biomass as a starting material for the isolation of THC as a bulk active pharmaceutical.",
"title": ""
},
{
"docid": "12ee85d0fa899e4e864bc1c30dedcd22",
"text": "An object-oriented simulation (OOS) consists of a set of objects that interact with each other over time. This paper provides a thorough introduction to OOS, addresses the important issue of composition versus inheritance, describes frames and frameworks for OOS, and presents an example of a network simulation language as an illustration of OOS.",
"title": ""
},
{
"docid": "5b9693b031e5fbea9afbc8c9f729829c",
"text": "Block coordinate descent (BCD) methods are widely-used for large-scale numerical optimization because of their cheap iteration costs, low memory requirements, amenability to parallelization, and ability to exploit problem structure. Three main algorithmic choices influence the performance of BCD methods: the block partitioning strategy, the block selection rule, and the block update rule. In this paper we explore all three of these building blocks and propose variations for each that can lead to significantly faster BCD methods. We (i) propose new greedy block-selection strategies that guarantee more progress per iteration than the Gauss-Southwell rule; (ii) explore practical issues like how to implement the new rules when using “variable” blocks; (iii) explore the use of message-passing to compute matrix or Newton updates efficiently on huge blocks for problems with a sparse dependency between variables; and (iv) consider optimal active manifold identification, which leads to bounds on the “active-set complexity” of BCD methods and leads to superlinear convergence for certain problems with sparse solutions (and in some cases finite termination at an optimal solution). We support all of our findings with numerical results for the classic machine learning problems of least squares, logistic regression, multi-class logistic regression, label propagation, and L1-regularization.",
"title": ""
},
{
"docid": "5598e6e1541e84924a56d3ac874dd19f",
"text": "Online dating sites have become popular platforms for people to look for potential romantic partners. It is important to understand users' dating preferences in order to make better recommendations on potential dates. The message sending and replying actions of a user are strong indicators for what he/she is looking for in a potential date and reflect the user's actual dating preferences. We study how users' online dating behaviors correlate with various user attributes using a real-world dateset from a major online dating site in China. Our study provides a firsthand account of the user online dating behaviors in China, a country with a large population and unique culture. The results can provide valuable guidelines to the design of recommendation engine for potential dates.",
"title": ""
},
{
"docid": "4e002bc3c0a42869c5c9eb4911c67ccf",
"text": "Today's massively-sized datasets have made it necessary to often perform computations on them in a distributed manner. In principle, a computational task is divided into subtasks which are distributed over a cluster operated by a taskmaster. One issue faced in practice is the delay incurred due to the presence of slow machines, known as stragglers. Several schemes, including those based on replication, have been proposed in the literature to mitigate the effects of stragglers and more recently, those inspired by coding theory have begun to gain traction. In this work, we consider a distributed gradient descent setting suitable for a wide class of machine learning problems. We adopt the framework of Tandon et al. [1] and present a deterministic scheme that, for a prescribed per-machine computational effort, recovers the gradient from the least number of machines $f$ theoretically permissible, via an $O(f^{2})$ decoding algorithm. The idea is based on a suitably designed Reed-Solomon code that has a sparsest and balanced generator matrix. We also provide a theoretical delay model which can be used to minimize the expected waiting time per computation by optimally choosing the parameters of the scheme. Finally, we supplement our theoretical findings with numerical results that demonstrate the efficacy of the method and its advantages over competing schemes.",
"title": ""
},
{
"docid": "3ba011d181a4644c8667b139c63f50ff",
"text": "Recent studies have suggested that positron emission tomography (PET) imaging with 68Ga-labelled DOTA-somatostatin analogues (SST) like octreotide and octreotate is useful in diagnosing neuroendocrine tumours (NETs) and has superior value over both CT and planar and single photon emission computed tomography (SPECT) somatostatin receptor scintigraphy (SRS). The aim of the present study was to evaluate the role of 68Ga-DOTA-1-NaI3-octreotide (68Ga-DOTANOC) in patients with SST receptor-expressing tumours and to compare the results of 68Ga-DOTA-D-Phe1-Tyr3-octreotate (68Ga-DOTATATE) in the same patient population. Twenty SRS were included in the study. Patients’ age (n = 20) ranged from 25 to 75 years (mean 55.4 ± 12.7 years). There were eight patients with well-differentiated neuroendocrine tumour (WDNET) grade1, eight patients with WDNET grade 2, one patient with poorly differentiated neuroendocrine carcinoma (PDNEC) grade 3 and one patient with mixed adenoneuroendocrine tumour (MANEC). All patients had two consecutive PET studies with 68Ga-DOTATATE and 68Ga-DOTANOC. All images were evaluated visually and maximum standardized uptake values (SUVmax) were also calculated for quantitative evaluation. On visual evaluation both tracers produced equally excellent image quality and similar body distribution. The physiological uptake sites of pituitary and salivary glands showed higher uptake in 68Ga-DOTATATE images. Liver and spleen uptake values were evaluated as equal. Both 68Ga-DOTATATE and 68Ga-DOTANOC were negative in 6 (30 %) patients and positive in 14 (70 %) patients. In 68Ga-DOTANOC images only 116 of 130 (89 %) lesions could be defined and 14 lesions were missed because of lack of any uptake. SUVmax values of lesions were significantly higher on 68Ga-DOTATATE images. Our study demonstrated that the images obtained by 68Ga-DOTATATE and 68Ga-DOTANOC have comparable diagnostic accuracy. However, 68Ga-DOTATATE seems to have a higher lesion uptake and may have a potential advantage.",
"title": ""
},
{
"docid": "38ca6f23f3910eac7085940240a92b03",
"text": "Region growing and edge detection are two popular and common techniques used for image segmentation. Region growing is preferred over edge detection methods because it is more robust against low contrast problems and effectively addresses the connectivity issues faced by edge detectors. Edgebased techniques, on the other hand, can significantly reduce useless information while preserving the important structural properties in an image. Recent studies have shown that combining region growing and edge methods for segmentation will produce much better results. This paper proposed using edge information to automatically select seed pixels and guide the process of region growing in segmenting geometric objects from an image. The geometric objects are songket motifs from songket patterns. Songket motifs are the main elements that decorate songket pattern. The beauty of songket lies in the elaborate design of the patterns and combination of motifs that are intricately woven on the cloth. After experimenting on thirty songket pattern images, the proposed method achieved promising extraction of the songket motifs.",
"title": ""
},
{
"docid": "842cd58edd776420db869e858be07de4",
"text": "A nationwide interoperable public safety wireless broadband network is being planned by the First Responder Network Authority (FirstNet) under the auspices of the United States government. The public safety network shall provide the needed wireless coverage in the wake of an incident or a disaster. This paper proposes a drone-assisted multi-hop device-to-device (D2D) communication scheme as a means to extend the network coverage over regions where it is difficult to deploy a landbased relay. The resource are shared using either time division or frequency division scheme. Efficient algorithms are developed to compute the optimal position of the drone for maximizing the data rate, which are shown to be highly effective via simulations.",
"title": ""
},
{
"docid": "1ca8130cf3f0f1788196bd4bc4ec45a0",
"text": "PURPOSE\nTo examine the feasibility and preliminary benefits of an integrative cognitive behavioral therapy (CBT) with adolescents with inflammatory bowel disease and anxiety.\n\n\nDESIGN AND METHODS\nNine adolescents participated in a CBT program at their gastroenterologist's office. Structured diagnostic interviews, self-report measures of anxiety and pain, and physician-rated disease severity were collected pretreatment and post-treatment.\n\n\nRESULTS\nPostintervention, 88% of adolescents were treatment responders, and 50% no longer met criteria for their principal anxiety disorder. Decreases were demonstrated in anxiety, pain, and disease severity.\n\n\nPRACTICE IMPLICATIONS\nAnxiety screening and a mental health referral to professionals familiar with medical management issues is important.",
"title": ""
},
{
"docid": "aa93e26585f7220c3d528328e5d35080",
"text": "Sexual orientation is one of the largest sex differences in humans. The vast majority of the population is heterosexual, that is, they are attracted to members of the opposite sex. However, a small but significant proportion of people are bisexual or homosexual and experience attraction to members of the same sex. The origins of the phenomenon have long been the subject of scientific study. In this chapter, we will review the evidence that sexual orientation has biological underpinnings and consider the involvement of epigenetic mechanisms. We will first discuss studies that show that sexual orientation has a genetic component. These studies show that sexual orientation is more concordant in monozygotic twins than in dizygotic ones and that male sexual orientation is linked to several regions of the genome. We will then highlight findings that suggest a link between sexual orientation and epigenetic mechanisms. In particular, we will consider the case of women with congenital adrenal hyperplasia (CAH). These women were exposed to high levels of testosterone in utero and have much higher rates of nonheterosexual orientation compared to non-CAH women. Studies in animal models strongly suggest that the long-term effects of hormonal exposure (such as those experienced by CAH women) are mediated by epigenetic mechanisms. We conclude by describing a hypothetical framework that unifies genetic and epigenetic explanations of sexual orientation and the continued challenges facing sexual orientation research.",
"title": ""
},
{
"docid": "61f079cb59505d9bf1de914330dd852e",
"text": "Bayesian filters have now become the standard for spam filtering; unfortunately most Bayesian filters seem to reach a plateau of accuracy at 99.9 percent. We experimentally compare the training methods TEFT, TOE, and TUNE, as well as pure Bayesian, token-bag, tokensequence, SBPH, and Markovian ddiscriminators. The results deomonstrate that TUNE is indeed best for training, but computationally exorbitant, and that Markovian discrimination is considerably more accurate than Bayesian, but not sufficient to reach four-nines accuracy, and that other techniques such as inoculation are needed. MIT Spam Conference 2004 This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole or in part without payment of fee is granted for nonprofit educational and research purposes provided that all such whole or partial copies include the following: a notice that such copying is by permission of Mitsubishi Electric Research Laboratories, Inc.; an acknowledgment of the authors and individual contributions to the work; and all applicable portions of the copyright notice. Copying, reproduction, or republishing for any other purpose shall require a license with payment of fee to Mitsubishi Electric Research Laboratories, Inc. All rights reserved. Copyright c © Mitsubishi Electric Research Laboratories, Inc., 2004 201 Broadway, Cambridge, Massachusetts 02139 The Spam-Filtering Accuracy Plateau at 99.9% Accuracy and How to Get Past It. William S. Yerazunis, PhD* Presented at the 2004 MIT Spam Conference January 18, 2004 MIT, Cambridge, Massachusetts Abstract: Bayesian filters have now become the standard for spam filtering; unfortunately most Bayesian filters seem to reach a plateau of accuracy at 99.9%. We experimentally compare the training methods TEFT, TOE, and TUNE, as well as pure Bayesian, token-bag, token-sequence, SBPH, and Markovian discriminators. 
The results demonstrate that TUNE is indeed best for training, but computationally exorbitant, and that Markovian discrimination is considerably more accurate than Bayesian, but not sufficient to reach four-nines accuracy, and that other techniques such as inoculation are needed. Bayesian filters have now become the standard for spam filtering; unfortunately most Bayesian filters seem to reach a plateau of accuracy at 99.9%. We experimentally compare the training methods TEFT, TOE, and TUNE, as well as pure Bayesian, token-bag, token-sequence, SBPH, and Markovian discriminators. The results demonstrate that TUNE is indeed best for training, but computationally exorbitant, and that Markovian discrimination is considerably more accurate than Bayesian, but not sufficient to reach four-nines accuracy, and that other techniques such as inoculation are needed.",
"title": ""
},
{
"docid": "05fae4c840b1ee242a16a9db5eee4fb5",
"text": "Hardware technologies for trusted computing, or trusted execution environments (TEEs), have rapidly matured over the last decade. In fact, TEEs are at the brink of widespread commoditization with the recent introduction of Intel Software Guard Extensions (Intel SGX). Despite such rapid development of TEE, software technologies for TEE significantly lag behind their hardware counterpart, and currently only a select group of researchers have the privilege of accessing this technology. To address this problem, we develop an open source platform, called OpenSGX, that emulates Intel SGX hardware components at the instruction level and provides new system software components necessarily required for full TEE exploration. We expect that the OpenSGX framework can serve as an open platform for SGX research, with the following contributions. First, we develop a fully functional, instruction-compatible emulator of Intel SGX for enabling the exploration of software/hardware design space, and development of enclave programs. OpenSGX provides a platform for SGX development, meaning that it provides not just emulation but also operating system components, an enclave program loader/packager, an OpenSGX user library, debugging, and performance monitoring. Second, to show OpenSGX’s use cases, we applied OpenSGX to protect sensitive information (e.g., directory) of Tor nodes and evaluated their potential performance impacts. Therefore, we believe OpenSGX has great potential for broader communities to spark new research on soon-to-becommodity Intel SGX.",
"title": ""
},
{
"docid": "272be5fede7ede10ebfd368cabcd437b",
"text": "Penetration testing is widely used to help ensure the security of web applications. Using penetration testing, testers discover vulnerabilities by simulating attacks on a target web application. To do this efficiently, testers rely on automated techniques that gather input vector information about the target web application and analyze the application’s responses to determine whether an attack was successful. Techniques for performing these steps are often incomplete, which can leave parts of the web application untested and vulnerabilities undiscovered. This paper proposes a new approach to penetration testing that addresses the limitations of current techniques. The approach incorporates two recently developed analysis techniques to improve input vector identification and detect when attacks have been successful against a web application. This paper compares the proposed approach against two popular penetration testing tools for a suite of web applications with known and unknown vulnerabilities. The evaluation results show that the proposed approach performs a more thorough penetration testing and leads to the discovery of more vulnerabilities than both the tools. Copyright q 2011 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "4d12a470a2f678142091dd5232050235",
"text": "Learning a deep model from small data is yet an opening and challenging problem. We focus on one-shot classification by deep learning approach based on a small quantity of training samples. We proposed a novel deep learning approach named Local Contrast Learning (LCL) based on the key insight about a human cognitive behavior that human recognizes the objects in a specific context by contrasting the objects in the context or in her/his memory. LCL is used to train a deep model that can contrast the recognizing sample with a couple of contrastive samples randomly drawn and shuffled. On one-shot classification task on Omniglot, the deep model based LCL with 122 layers and 1.94 millions of parameters, which was trained on a tiny dataset with only 60 classes and 20 samples per class, achieved the accuracy 97.99% that outperforms human and state-of-the-art established by Bayesian Program Learning (BPL) trained on 964 classes. LCL is a fundamental idea which can be applied to alleviate parametric model’s overfitting resulted by lack of training samples.",
"title": ""
}
] | scidocsrr |
954f48f92867dbcdd21db815f84eef07 | Origami Robot: A Self-Folding Paper Robot With an Electrothermal Actuator Created by Printing | [
{
"docid": "f641e0da7b9aaffe0fabd1a6b60a6c52",
"text": "This paper introduces a low cost, fast and accessible technology to support the rapid prototyping of functional electronic devices. Central to this approach of 'instant inkjet circuits' is the ability to print highly conductive traces and patterns onto flexible substrates such as paper and plastic films cheaply and quickly. In addition to providing an alternative to breadboarding and conventional printed circuits, we demonstrate how this technique readily supports large area sensors and high frequency applications such as antennas. Unlike existing methods for printing conductive patterns, conductivity emerges within a few seconds without the need for special equipment. We demonstrate that this technique is feasible using commodity inkjet printers and commercially available ink, for an initial investment of around US$300. Having presented this exciting new technology, we explain the tools and techniques we have found useful for the first time. Our main research contribution is to characterize the performance of instant inkjet circuits and illustrate a range of possibilities that are enabled by way of several example applications which we have built. We believe that this technology will be of immediate appeal to researchers in the ubiquitous computing domain, since it supports the fabrication of a variety of functional electronic device prototypes.",
"title": ""
}
] | [
{
"docid": "5e261764696ebfb02196b0f9a6b7a4a6",
"text": "When the cost of misclassifying a sample is high, it is useful to have an accurate estimate of uncertainty in the prediction for that sample. There are also multiple types of uncertainty which are best estimated in different ways, for example, uncertainty that is intrinsic to the training set may be well-handled by a Bayesian approach, while uncertainty introduced by shifts between training and query distributions may be better-addressed by density/support estimation. In this paper, we examine three types of uncertainty: model capacity uncertainty, intrinsic data uncertainty, and open set uncertainty, and review techniques that have been derived to address each one. We then introduce a unified hierarchical model, which combines methods from Bayesian inference, invertible latent density inference, and discriminative classification in a single end-to-end deep neural network topology to yield efficient per-sample uncertainty estimation. Our approach addresses all three uncertainty types and readily accommodates prior/base rates for binary detection.",
"title": ""
},
{
"docid": "5029feaec44e80561efef4b97c435896",
"text": "Conceptual blending has been proposed as a creative cognitive process, but most theories focus on the analysis of existing blends rather than mechanisms for the efficient construction of novel blends. While conceptual blending is a powerful model for creativity, there are many challenges related to the computational application of blending. Inspired by recent theoretical research, we argue that contexts and context-induced goals provide insights into algorithm design for creative systems using conceptual blending. We present two case studies of creative systems that use goals and contexts to efficiently produce novel, creative artifacts in the domains of story generation and virtual characters engaged in pretend play respectively.",
"title": ""
},
{
"docid": "1d78bd02fbf7be1bac964ff934c766de",
"text": "Recently, some publications indicated that the generative modeling approaches, i.e., topic models, achieved appreciated performance on multi-label classification, especially for skewed data sets. In this paper, we develop two supervised topic models for multi-label classification problems. The two models, i.e., Frequency-LDA (FLDA) and Dependency-Frequency-LDA (DFLDA), extend Latent Dirichlet Allocation (LDA) via two observations, i.e., the frequencies of the labels and the dependencies among different labels. We train the models by the Gibbs sampler algorithm. The experiment results on well known collections demonstrate that our two models outperform the state-of-the-art approaches. & 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "3553d1dc8272bf0366b2688e5107aa3f",
"text": "The emergence of the Web 2.0 technology generated a massive amount of raw data by enabling Internet users to post their opinions, reviews, comments on the web. Processing this raw data to extract useful information can be a very challenging task. An example of important information that can be automatically extracted from the users' posts and comments is their opinions on different issues, events, services, products, etc. This problem of Sentiment Analysis (SA) has been studied well on the English language and two main approaches have been devised: corpus-based and lexicon-based. This paper addresses both approaches to SA for the Arabic language. Since there is a limited number of publically available Arabic dataset and Arabic lexicons for SA, this paper starts by building a manually annotated dataset and then takes the reader through the detailed steps of building the lexicon. Experiments are conducted throughout the different stages of this process to observe the improvements gained on the accuracy of the system and compare them to corpus-based approach.",
"title": ""
},
{
"docid": "74290ff01b32423087ce0025625dc445",
"text": "niques is now the world champion computer program in the game of Contract Bridge. As reported in The New York Times and The Washington Post, this program—a new version of Great Game Products’ BRIDGE BARON program—won the Baron Barclay World Bridge Computer Challenge, an international competition hosted in July 1997 by the American Contract Bridge League. It is well known that the game tree search techniques used in computer programs for games such as Chess and Checkers work differently from how humans think about such games. In contrast, our new version of the BRIDGE BARON emulates the way in which a human might plan declarer play in Bridge by using an adaptation of hierarchical task network planning. This article gives an overview of the planning techniques that we have incorporated into the BRIDGE BARON and discusses what the program’s victory signifies for research on AI planning and game playing.",
"title": ""
},
{
"docid": "df833f98f7309a5ab5f79fae2f669460",
"text": "Model-free reinforcement learning (RL) has become a promising technique for designing a robust dynamic power management (DPM) framework that can cope with variations and uncertainties that emanate from hardware and application characteristics. Moreover, the potentially significant benefit of performing application-level scheduling as part of the system-level power management should be harnessed. This paper presents an architecture for hierarchical DPM in an embedded system composed of a processor chip and connected I/O devices (which are called system components.) The goal is to facilitate saving in the system component power consumption, which tends to dominate the total power consumption. The proposed (online) adaptive DPM technique consists of two layers: an RL-based component-level local power manager (LPM) and a system-level global power manager (GPM). The LPM performs component power and latency optimization. It employs temporal difference learning on semi-Markov decision process (SMDP) for model-free RL, and it is specifically optimized for an environment in which multiple (heterogeneous) types of applications can run in the embedded system. The GPM interacts with the CPU scheduler to perform effective application-level scheduling, thereby, enabling the LPM to do even more component power optimizations. In this hierarchical DPM framework, power and latency tradeoffs of each type of application can be precisely controlled based on a user-defined parameter. Experiments show that the amount of average power saving is up to 31.1% compared to existing approaches.",
"title": ""
},
{
"docid": "62b7f53dc399b347b6a4a453d7bd1fa2",
"text": "Sign language is important for facilitating communication between hearing impaired and the rest of society. Two approaches have traditionally been used in the literature: image-based and sensor-based systems. Sensor-based systems require the user to wear electronic gloves while performing the signs. The glove includes a number of sensors detecting different hand and finger articulations. Image-based systems use camera(s) to acquire a sequence of images of the hand. Each of the two approaches has its own disadvantages. The sensor-based method is not natural as the user must wear a cumbersome instrument while the imagebased system requires specific background and environmental conditions to achieve high accuracy. In this paper, we propose a new approach for Arabic Sign Language Recognition (ArSLR) which involves the use of the recently introduced Leap Motion Controller (LMC). This device detects and tracks the hand and fingers to provide position and motion information. We propose to use the LMC as a backbone of the ArSLR system. In addition to data acquisition, the system includes a preprocessing stage, a feature extraction stage, and a classification stage. We compare the performance of Multilayer Perceptron (MLP) neural networks with the Nave Bayes classifier. Using the proposed system on the Arabic sign alphabets gives 98% classification accuracy with the Nave Bayes classifier and more than 99% using the MLP.",
"title": ""
},
{
"docid": "8a564e77710c118e4de86be643b061a6",
"text": "SOAR is a cognitive architecture named from state, operator and result, which is adopted to portray the drivers’ guidance compliance behavior on variable message sign VMS in this paper. VMS represents traffic conditions to drivers by three colors: red, yellow, and green. Based on the multiagent platform, SOAR is introduced to design the agent with the detailed description of the working memory, long-term memory, decision cycle, and learning mechanism. With the fixed decision cycle, agent transforms state through four kinds of operators, including choosing route directly, changing the driving goal, changing the temper of driver, and changing the road condition of prediction. The agent learns from the process of state transformation by chunking and reinforcement learning. Finally, computerized simulation program is used to study the guidance compliance behavior. Experiments are simulated many times under given simulation network and conditions. The result, including the comparison between guidance and no guidance, the state transition times, and average chunking times are analyzed to further study the laws of guidance compliance and learning mechanism.",
"title": ""
},
{
"docid": "6f0d9f383c0142b43ea440e6efb2a59a",
"text": "OBJECTIVES\nTo evaluate the effect of a probiotic product in acute self-limiting gastroenteritis in dogs.\n\n\nMETHODS\nThirty-six dogs suffering from acute diarrhoea or acute diarrhoea and vomiting were included in the study. The trial was performed as a randomised, double blind and single centre study with stratified parallel group design. The animals were allocated to equal looking probiotic or placebo treatment by block randomisation with a fixed block size of six. The probiotic cocktail consisted of thermo-stabilised Lactobacillus acidophilus and live strains of Pediococcus acidilactici, Bacillus subtilis, Bacillus licheniformis and Lactobacillus farciminis.\n\n\nRESULTS\nThe time from initiation of treatment to the last abnormal stools was found to be significantly shorter (P = 0.04) in the probiotic group compared to placebo group, the mean time was 1.3 days and 2.2 days, respectively. The two groups were found nearly equal with regard to time from start of treatment to the last vomiting episode.\n\n\nCLINICAL SIGNIFICANCE\nThe probiotic tested may reduce the convalescence time in acute self-limiting diarrhoea in dogs.",
"title": ""
},
{
"docid": "1151348144ad2915f63f6b437e777452",
"text": "Smartphones, smartwatches, fitness trackers, and ad-hoc wearable devices are being increasingly used to monitor human activities. Data acquired by the hosted sensors are usually processed by machine-learning-based algorithms to classify human activities. The success of those algorithms mostly depends on the availability of training (labeled) data that, if made publicly available, would allow researchers to make objective comparisons between techniques. Nowadays, publicly available data sets are few, often contain samples from subjects with too similar characteristics, and very often lack of specific information so that is not possible to select subsets of samples according to specific criteria. In this article, we present a new smartphone accelerometer dataset designed for activity recognition. The dataset includes 11,771 activities performed by 30 subjects of ages ranging from 18 to 60 years. Activities are divided in 17 fine grained classes grouped in two coarse grained classes: 9 types of activities of daily living (ADL) and 8 types of falls. The dataset has been stored to include all the information useful to select samples according to different criteria, such as the type of ADL performed, the age, the gender, and so on. Finally, the dataset has been benchmarked with two different classifiers and with different configurations. The best results are achieved with k-NN classifying ADLs only, considering personalization, and with both windows of 51 and 151 samples.",
"title": ""
},
{
"docid": "e7bd18d1c1aa3ef51114dbab9587bb5b",
"text": "Protein phase separation is implicated in formation of membraneless organelles, signaling puncta and the nuclear pore. Multivalent interactions of modular binding domains and their target motifs can drive phase separation. However, forces promoting the more common phase separation of intrinsically disordered regions are less understood, with suggested roles for multivalent cation-pi, pi-pi, and charge interactions and the hydrophobic effect. Known phase-separating proteins are enriched in pi-orbital containing residues and thus we analyzed pi-interactions in folded proteins. We found that pi-pi interactions involving non-aromatic groups are widespread, underestimated by force-fields used in structure calculations and correlated with solvation and lack of regular secondary structure, properties associated with disordered regions. We present a phase separation predictive algorithm based on pi interaction frequency, highlighting proteins involved in biomaterials and RNA processing.",
"title": ""
},
{
"docid": "9adfb1b69d1521d148db41618a449e7b",
"text": "This article presents a novel parallel spherical mechanism called Argos with three rotational degrees of freedom. Design aspects of the first prototype built of the Argos mechanism are discussed. The direct kinematic problem is solved, leading always to four nonsingular configurations of the end effector for a given set of joint angles. The inverse-kinematic problem yields two possible configurations for each of the three pantographs for a given orientation of the end effector. Potential applications of the Argos mechanism are robot wrists, orientable machine tool beds, joy sticks, surgical manipulators, and orientable units for optical components. Another pantograph based new structure named PantoScope having two rotational DoF is also briefly introduced. KEY WORDS—parallel robot, machine tool, 3 degree of freedom (DoF) wrist, pure orientation, direct kinematics, inverse kinematics, Pantograph based, Argos, PantoScope",
"title": ""
},
{
"docid": "a64f8a3a75dd719b955aa827d8c33472",
"text": "ÐWhile empirical studies in software engineering are beginning to gain recognition in the research community, this subarea is also entering a new level of maturity by beginning to address the human aspects of software development. This added focus has added a new layer of complexity to an already challenging area of research. Along with new research questions, new research methods are needed to study nontechnical aspects of software engineering. In many other disciplines, qualitative research methods have been developed and are commonly used to handle the complexity of issues involving human behavior. This paper presents several qualitative methods for data collection and analysis and describes them in terms of how they might be incorporated into empirical studies of software engineering, in particular how they might be combined with quantitative methods. To illustrate this use of qualitative methods, examples from real software engineering studies are used throughout. Index TermsÐQualitative methods, data collection, data analysis, experimental design, empirical software engineering, participant observation, interviewing.",
"title": ""
},
{
"docid": "1b5a8f920a2f3380f311c53bdeb740c8",
"text": "5 Objectivity in parentheses 7 5.0 Illusion and Perception: the traditional approach . . . . . . . . . . . . . . . . . . . . . 7 5.1 An Invitation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 5.2 Objectivity in parentheses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 5.3 The Universum versus the Multiversa . . . . . . . . . . . . . . . . . . . . . . . . . . . 8",
"title": ""
},
{
"docid": "ec8684e227bf63ac2314ce3cb17e2e8b",
"text": "Musical genre classification is the automatic classification of audio signals into user defined labels describing pieces of music. A problem inherent to genre classification experiments in music information retrieval research is the use of songs from the same artist in both training and test sets. We show that this does not only lead to overoptimistic accuracy results but also selectively favours particular classification approaches. The advantage of using models of songs rather than models of genres vanishes when applying an artist filter. The same holds true for the use of spectral features versus fluctuation patterns for preprocessing of the audio files.",
"title": ""
},
{
"docid": "af1dab317f2a5b45593a89d96a8061de",
"text": "Software engineering is forecast to be among the fastest growing employment field in the next decades. The purpose of this investigation is two-fold: Firstly, empirical studies on the personality types of software professionals are reviewed. Secondly, this work provides an upto-date personality profile of software engineers according to the Myers–Briggs Type Indicator. r 2002 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "9a3edba4b95b444243e34675ab2a7b85",
"text": "Values are presented for body constants based on a study of nine male white cadavers of normal appearance and average build. The limb data are supplemented by a further analysis of 11 upper and 41 lower limbs. Techniques used in the study form standard procedures that can be duplicated by subsequent workers. Each cadaver was measured, weighed, and somatotyped. Joints were placed in the midposition of the movement range and the body was frozen rigid. Joint angles were bisected in a systematic dismemberment procedure to produce unit segments. These segment lengths were weighed, measured for linear link dimensions, and analysed for segment volumes. The segment centers of mass were located relative to link end points as well as in relation to anatomical landmarks. Finally, each segment was dissected into its component parts and these were weighed. The specific gravity of each body part was calculated separately. Data are expressed in mean values together with standard deviations and, where available, are correlated and evaluated with other values in the literature. Data on the relative bulk of body segments have been scarce. Until recently, the only users of information dealing with the mass and proportion of the human figure have been sculptors and graphic artists. These people usually met their needs through canons of proportions and a trained perception rather than by actual measurement. There are no substitutes though for good empirical data when critical work on body mechanics or accurate analyses of human locomotion are attempted. During the past decade or so, the need for such information has been recognized specifically in designing prosthetic and orthotic devices for the limbs of handicapped persons, for sports analysis, for the construction of test dummies, such as those subjected to vehicular crashes, and for studies on the dynamics of body impacts in crashes and falls. 
The fundamental nature of data on the mass and dimensions of the body parts cannot be questioned. It is odd that even now there is such a dearth of information. The research literature up to the present contains usable body segment measurements from only 12 (or possibly 14) unpreserved and dismembered cadavers, all adult white males. A tabulation of data in an Air Force technical report (Dempster, '55a), dealing with seven specimens cadaver by cadaver, was the first amplification of the scanty records in more than two generations. The tables on Michigan cadavers were reprinted by Krogman and Johnston ('63) in an abridgment of the original report; Williams and Lisner ('62) presented their own simplifications based on the same study; Barter ('57), Duggar ('62) and Contini, Drillis, and Bluestein ('63) have made tallies of data from the original tabulations along with parts of the older data. None of these studies gave any attention to the procedural distinctions between workers who had procured original data; one even grouped volumes and masses indiscriminately as masses. The Michigan data, however, have not been summarized nor evaluated up to this time. Since the procedures and, especially, the limiting conditions incidental to the gathering of body-segment data have not been commented on critically since Braune and Fischer (1889), a comprehensive discussion of the entire problem at this point should help further work in this important area. [AM. J. ANAT., 120: 33-54.] 1 Supported in part by research grants from the Public Health Service National Institutes of Health (GM-07741-06) and from the Office of Vocational Rehabilitation (RD-216 60-C), with support a dozen years earlier from a research contract with the Anthropometric Unit of the Wright Air Development Center, Wright-Patterson Air Force Base, Dayton, Ohio (AF 18 (600)-43 Project no. 7414).",
"title": ""
},
{
"docid": "a7bf370e83bd37ed4f83c3846cfaaf97",
"text": "This paper presents the design and implementation of an evanescent tunable combline filter based on electronic tuning with the use of RF-MEMS capacitor banks. The use of MEMS tuning circuit results in the compact implementation of the proposed filter with high-Q and near to zero DC power consumption. The proposed filter consist of combline resonators with tuning disks that are loaded with RF-MEMS capacitor banks. A two-pole filter is designed and measured based on the proposed tuning concept. The filter operates at 2.5 GHz with a bandwidth of 22 MHz. Measurement results demonstrate a tuning range of 110 MHz while the quality factor is above 374 (1300–374 over the tuning range).",
"title": ""
},
{
"docid": "904c8b4be916745c7d1f0777c2ae1062",
"text": "In this paper, we address the problem of continuous access control enforcement in dynamic data stream environments, where both data and query security restrictions may potentially change in real-time. We present FENCE framework that ffectively addresses this problem. The distinguishing characteristics of FENCE include: (1) the stream-centric approach to security, (2) the symmetric model for security settings of both continuous queries and streaming data, and (3) two alternative security-aware query processing approaches that can optimize query execution based on regular and security-related selectivities. In FENCE, both data and query security restrictions are modeled symmetrically in the form of security metadata, called \"security punctuations\" embedded inside data streams. We distinguish between two types of security punctuations, namely, the data security punctuations (or short, dsps) which represent the access control policies of the streaming data, and the query security punctuations (or short, qsps) which describe the access authorizations of the continuous queries. We also present our encoding method to support XACML(eXtensible Access Control Markup Language) standard. We have implemented FENCE in a prototype DSMS and present our performance evaluation. The results of our experimental study show that FENCE's approach has low overhead and can give great performance benefits compared to the alternative security solutions for streaming environments.",
"title": ""
}
] | scidocsrr |
71c48aa46500ce1636999a2fd0180dab | Multi-Sentence Compression: Finding Shortest Paths in Word Graphs | [
{
"docid": "fc164dc2d55cec2867a99436d37962a1",
"text": "We address the text-to-text generation problem of sentence-level paraphrasing — a phenomenon distinct from and more difficult than wordor phrase-level paraphrasing. Our approach applies multiple-sequence alignment to sentences gathered from unannotated comparable corpora: it learns a set of paraphrasing patterns represented by word lattice pairs and automatically determines how to apply these patterns to rewrite new sentences. The results of our evaluation experiments show that the system derives accurate paraphrases, outperforming baseline systems.",
"title": ""
}
] | [
{
"docid": "f041a02b565ca9100d20b479fb6951c8",
"text": "Linear blending is a very popular skinning technique for virtual characters, even though it does not always generate realistic deformations. Recently, nonlinear blending techniques (such as dual quaternions) have been proposed in order to improve upon the deformation quality of linear skinning. The trade-off consists of the increased vertex deformation time and the necessity to redesign parts of the 3D engine. In this paper, we demonstrate that any nonlinear skinning technique can be approximated to an arbitrary degree of accuracy by linear skinning, using just a few samples of the nonlinear blending function (virtual bones). We propose an algorithm to compute this linear approximation in an automatic fashion, requiring little or no interaction with the user. This enables us to retain linear skinning at the core of our 3D engine without compromising the visual quality or character setup costs.",
"title": ""
},
{
"docid": "da74e402f4542b6cbfb27f04c7640eb4",
"text": "Hand-built verb clusters such as the widely used Levin classes (Levin, 1993) have proved useful, but have limited coverage. Verb classes automatically induced from corpus data such as those from VerbKB (Wijaya, 2016), on the other hand, can give clusters with much larger coverage, and can be adapted to specific corpora such as Twitter. We present a method for clustering the outputs of VerbKB: verbs with their multiple argument types, e.g.“marry(person, person)”, “feel(person, emotion).” We make use of a novel lowdimensional embedding of verbs and their arguments to produce high quality clusters in which the same verb can be in different clusters depending on its argument type. The resulting verb clusters do a better job than hand-built clusters of predicting sarcasm, sentiment, and locus of control in tweets.",
"title": ""
},
{
"docid": "3d4633e9c26d46fb7ef1e5865835bde5",
"text": "A multiple input, multiple output (MIMO) radar emits probings signals with multiple transmit antennas and records the reflections from targets with multiple receive antennas. Estimating the relative angles, delays, and Doppler shifts from the received signals allows to determine the locations and velocities of the targets. Standard approaches to MIMO radar based on digital matched filtering or compressed sensing only resolve the angle-delay-Doppler triplets on a (1/(NTNR), 1/B, 1/T ) grid, where NT and NR are the number of transmit and receive antennas, B is the bandwidth of the probing signals, and T is the length of the time interval over which the reflections are observed. In this work, we show that the continuous angle-delay-Doppler triplets and the corresponding attenuation factors can be recovered perfectly by solving a convex optimization problem. This result holds provided that the angle-delay-Doppler triplets are separated either by 10/(NTNR - 1) in angle, 10.01/B in delay, or 10.01/T in Doppler direction. Furthermore, this result is optimal (up to log factors) in the number of angle-delay-Doppler triplets that can be recovered.",
"title": ""
},
{
"docid": "350cda71dae32245b45d96b5fdd37731",
"text": "In this work, we focus on cyclic codes over the ring F2+uF2+vF2+uvF2, which is not a finite chain ring. We use ideas from group rings and works of AbuAlrub et al. in (Des Codes Crypt 42:273–287, 2007) to characterize the ring (F2 + uF2 + vF2 + uvF2)/(x − 1) and cyclic codes of odd length. Some good binary codes are obtained as the images of cyclic codes over F2+uF2+vF2+uvF2 under two Gray maps that are defined. We also characterize the binary images of cyclic codes over F2 + uF2 + vF2 + uvF2 in general.",
"title": ""
},
{
"docid": "f918ca37dcf40512c4efa013567a126b",
"text": "In the field of robots' obstacle avoidance and navigation, indirect contact sensors such as visual, ultrasonic and infrared detection are widely used. However, the performance of these sensors is always influenced by the severe environment, especially under the dark, dense fog, underwater conditions. The obstacle avoidance robot based on tactile sensor is proposed in this paper to realize the autonomous obstacle avoidance navigation by only using three dimensions force sensor. In addition, the mathematical model and algorithm are optimized to make up the deficiency of tactile sensor. Finally, the feasibility and reliability of this study are verified by the simulation results.",
"title": ""
},
{
"docid": "40d4bd1bc3876a772cfbb2ed5b17052d",
"text": "Adaptive cruise control is one of the most widely used vehicle driver assistance systems. However, uncertainty about drivers' lane change maneuvers in surrounding vehicles, such as unexpected cut-in, remains a challenge. We propose a novel adaptive cruise control framework combining convolution neural network (CNN)-based lane-change-intention inference and a predictive controller. We transform real-world driving data, collected on public roads with only standard production sensors, to a simplified bird's-eye view. This enables a CNN-based inference approach with low computational cost and robustness to noisy input. The predicted inference of traffic participants' lane change intention is utilized to improve safety and ride comfort with model predictive control. Simulation results based on driving scene reconstruction demonstrate the superior performance of inference using the proposed CNN-based approach, as well as enhanced safety and ride comfort.",
"title": ""
},
{
"docid": "9ed2f6172271c6ccdba2ab16e2d6b3d6",
"text": "An important problem in analyzing big data is subspace clustering, i.e., to represent a collection of points in a high-dimensional space via the union of low-dimensional subspaces. Sparse Subspace Clustering (SSC) and LowRank Representation (LRR) are the state-of-the-art methods for this task. These two methods are fundamentally similar in that both are based on convex optimization exploiting the intuition of “Self-Expressiveness”. The main difference is that SSC minimizes the vector `1 norm of the representation matrix to induce sparsity while LRR minimizes the nuclear norm (aka trace norm) to promote a low-rank structure. Because the representation matrix is often simultaneously sparse and low-rank, we propose a new algorithm, termed Low-Rank Sparse Subspace Clustering (LRSSC), by combining SSC and LRR, and develop theoretical guarantees of the success of the algorithm. The results reveal interesting insights into the strengths and weaknesses of SSC and LRR, and demonstrate how LRSSC can take advantage of both methods in preserving the “Self-Expressiveness Property” and “Graph Connectivity” at the same time. A byproduct of our analysis is that it also expands the theoretical guarantee of SSC to handle cases when the subspaces have arbitrarily small canonical angles but are “nearly independent”.",
"title": ""
},
{
"docid": "5d85e552841fe415daa72dff2a5f9706",
"text": "M any security faculty members and practitioners bemoan the lack of good books in the field. Those of us who teach often find ourselves forced to rely on collections of papers to fortify our courses. In the last few years, however, we've started to see the appearance of some high-quality books to support our endeavors. Matt Bishop's book—Com-puter Security: Art and Science—is definitely hefty and packed with lots of information. It's a large book (with more than 1,000 pages), and it covers most any computer security topic that might be of interest. section discusses basic security issues at the definitional level. The Policy section addresses the relationship between policy and security, examining several types of policies in the process. Implementation I covers cryptography and its role in security. Implementation II describes how to apply policy requirements in systems. The Assurance section, which Elisabeth Sullivan wrote, introduces assurance basics and formal methods. The Special Topics section discusses malicious logic, vulnerability analysis , auditing, and intrusion detection. Finally, the Practicum ties all the previously discussed material to real-world examples. A ninth additional section, called End Matter, discusses miscellaneous supporting mathematical topics and concludes with an example. At a publisher's list price of US$74.99, you'll want to know why you should consider buying such an expensive book. Several things set it apart from other, similar, offerings. Most importantly , the book provides numerous examples and, refreshingly, definitions. A vertical bar alongside the examples distinguishes them from other text, so picking them out is easy. The book also includes a bibliography of over 1,000 references. Additionally, each chapter includes a summary, suggestions for further reading, research issues, and practice exercises. The format and layout are good, and the fonts are readable. 
The book is aimed at several audiences, and the preface describes many roadmaps, one of which discusses dependencies among the various chapters. Instructors can use it at the advanced undergraduate level or for introductory graduate-level computer-security courses. The preface also includes a mapping of suggested topics for undergraduate and graduate courses, presuming a certain amount of math and theoretical computer-science background as prerequisites. Practitioners can use the book as a resource for information on specific topics; the examples in the Practicum are ideally suited for them. So, what's the final verdict? Practitioners will want to consider this book as a reference to add to their bookshelves. Teachers of advanced undergraduate or introductory …",
"title": ""
},
{
"docid": "290796519b7757ce7ec0bf4d37290eed",
"text": "A freely available English thesaurus of related words is presented that has been automatically compiled by analyzing the distributional similarities of words in the British National Corpus. The quality of the results has been evaluated by comparison with human judgments as obtained from non-native and native speakers of English who were asked to provide rankings of word similarities. According to this measure, the results generated by our system are better than the judgments of the non-native speakers and come close to the native speakers’ performance. An advantage of our approach is that it does not require syntactic parsing and therefore can be more easily adapted to other languages. As an example, a similar thesaurus for German has already been completed.",
"title": ""
},
{
"docid": "f83f5eaa47f4634311297886b8e2228c",
"text": "Purpose of this study is to determine whether cash flow impacts business failure prediction using the BP models (Altman z-score, or Neural Network, or any of the BP models which could be implemented having objective to predict the financial distress or more complex financial failure-bankruptcy of the banks or companies). Units of analysis are financial ratios derived from raw financial data: B/S, P&L statements (income statements) and cash flow statements of both failed and non-failed companies/corporates that have been collected from the auditing resources and reports performed. A number of these studies examined whether a cash flow improve the prediction of business failure. The authors would have the objective to show the evidence and usefulness and efficacy of statistical models such as Altman Z-score discriminant analysis bankruptcy predictive models to assess client on going concern status. Failed and non-failed companies were selected for analysis to determine whether the cash flow improves the business failure prediction aiming to proof that the cash flow certainly makes better financial distress and bankruptcy prediction possible. Key-Words: bankruptcy prediction, financial distress, financial crisis, transition economy, auditing statement, balance sheet, profit and loss accounts, income statements",
"title": ""
},
{
"docid": "6ecc241a25fdbf30a0f6e31c4a6f3361",
"text": "Widespread personalized computing systems play an already important and fast-growing role in diverse contexts, such as location-based services, recommenders, commercial Web-based services, and teaching systems. The personalization in these systems is driven by information about the user, a user model. Moreover, as computers become both ubiquitous and pervasive, personalization operates across the many devices and information stores that constitute the user's personal digital ecosystem. This enables personalization, and the user models driving it, to play an increasing role in people's everyday lives. This makes it critical to establish ways to address key problems of personalization related to privacy, invisibility of personalization, errors in user models, wasted user models, and the broad issue of enabling people to control their user models and associated personalization. We offer scrutable user models as a foundation for tackling these problems.\n This article argues the importance of scrutable user modeling and personalization, illustrating key elements in case studies from our work. We then identify the broad roles for scrutable user models. The article describes how to tackle the technical and interface challenges of designing and building scrutable user modeling systems, presenting design principles and showing how they were established over our twenty years of work on the Personis software framework. Our contributions are the set of principles for scrutable personalization linked to our experience from creating and evaluating frameworks and associated applications built upon them. These constitute a general approach to tackling problems of personalization by enabling users to scrutinize their user models as a basis for understanding and controlling personalization.",
"title": ""
},
{
"docid": "ea49d288ffefd549f77519c90de51fbc",
"text": "Text line detection is a prerequisite procedure of mathematical formula recognition, however, many incorrectly segmented text lines are often produced due to the two-dimensional structures of mathematics when using existing segmentation methods such as Projection Profiles Cutting or white space analysis. In consequence, mathematical formula recognition is adversely affected by these incorrectly detected text lines, with errors propagating through further processes. Aimed at mathematical formula recognition, we propose a text line detection method to produce reliable line segmentation. Based on the results produced by PPC, a learning based merging strategy is presented to combine incorrectly split text lines. In the merging strategy, the features of layout and text for a text line and those between successive lines are utilised to detect the incorrectly split text lines. Experimental results show that the proposed approach obtains good performance in detecting text lines from mathematical documents. Furthermore, the error rate in mathematical formula identification is reduced significantly through adopting the proposed text line detection method.",
"title": ""
},
{
"docid": "05fc7d05e4ea933a47f5fe81d68cf876",
"text": "The unprecedented success of deep learning is largely dependent on the availability of massive amount of training data. In many cases, these data are crowd-sourced and may contain sensitive and confidential information, therefore, pose privacy concerns. As a result, privacy-preserving deep learning has been gaining increasing focus nowadays. One of the promising approaches for privacy-preserving deep learning is to employ differential privacy during model training which aims to prevent the leakage of sensitive information about the training data via the trained model. While these models are considered to be immune to privacy attacks, with the advent of recent and sophisticated attack models, it is not clear how well these models trade-off utility for privacy. In this paper, we systematically study the impact of a sophisticated machine learning based privacy attack called the membership inference attack against a state-of-the-art differentially private deep model. More specifically, given a differentially private deep model with its associated utility, we investigate how much we can infer about the model’s training data. Our experimental results show that differentially private deep models may keep their promise to provide privacy protection against strong adversaries by only offering poor model utility, while exhibit moderate vulnerability to the membership inference attack when they offer an acceptable utility. For evaluating our experiments, we use the CIFAR-10 and MNIST datasets and the corresponding classification tasks.",
"title": ""
},
{
"docid": "165fa890775b64cb923e959824f183f5",
"text": "We offer a formal treatment of choice behavior based on the premise that agents minimize the expected free energy of future outcomes. Crucially, the negative free energy or quality of a policy can be decomposed into extrinsic and epistemic (or intrinsic) value. Minimizing expected free energy is therefore equivalent to maximizing extrinsic value or expected utility (defined in terms of prior preferences or goals), while maximizing information gain or intrinsic value (or reducing uncertainty about the causes of valuable outcomes). The resulting scheme resolves the exploration-exploitation dilemma: Epistemic value is maximized until there is no further information gain, after which exploitation is assured through maximization of extrinsic value. This is formally consistent with the Infomax principle, generalizing formulations of active vision based upon salience (Bayesian surprise) and optimal decisions based on expected utility and risk-sensitive (Kullback-Leibler) control. Furthermore, as with previous active inference formulations of discrete (Markovian) problems, ad hoc softmax parameters become the expected (Bayes-optimal) precision of beliefs about, or confidence in, policies. This article focuses on the basic theory, illustrating the ideas with simulations. A key aspect of these simulations is the similarity between precision updates and dopaminergic discharges observed in conditioning paradigms.",
"title": ""
},
{
"docid": "c9be394df8b4827c57c5413fc28b47e8",
"text": "An important prerequisite for successful usage of computer systems and other interactive technology is a basic understanding of the symbols and interaction patterns used in them. This aspect of the broader construct “computer literacy” is used as indicator in the computer literacy scale, which proved to be an economical, reliable and valid instrument for the assessment of computer literacy in older adults.",
"title": ""
},
{
"docid": "164bedabbfcfba283ab26a01511e8777",
"text": "The airline industry is undergoing a very difficult time and many companies are in search of service segmentation strategies that will satisfy different target market segments. This study attempts to identify the service dimensions that matter most to current airline passengers. The research measures and compares differences in passengers’ expectations of the desired airline service quality in terms of the dimensions of reliability; assurance; facilities; employees; flight patterns; customization and responsiveness. Primary data were collected from passengers departing Hong Kong airport. Regarding the service dimension expectations, differences analysis shows that there are no statistically significant differences between passengers who made their own airline choice (decision makers) and those who did not (non-decision makers). However, there are significant differences among passengers of different ethnic groups/nationalities as well as among passengers who travel for different purposes, such as business, holiday and visiting friends/relatives. The findings also indicate that passengers consistently rank ‘assurance’ as the most important service dimension. This indicates that passengers are concerned about the safety and security aspect and this may indicate why there has been such a downturn in demand as this study was conducted just prior to the World Trade Center incident on the 11th September 2001. r 2003 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "d18faf207a0dbccc030e5dcc202949ab",
"text": "This manuscript conducts a comparison on modern object detection systems in their ability to detect multiple maritime vessel classes. Three highly scoring algorithms from the Pascal VOC Challenge, Histogram of Oriented Gradients by Dalal and Triggs, Exemplar-SVM by Malisiewicz, and Latent-SVM with Deformable Part Models by Felzenszwalb, were compared to determine performance of recognition within a specific category rather than the general classes from the original challenge. In all cases, the histogram of oriented edges was used as the feature set and support vector machines were used for classification. A summary and comparison of the learning algorithms is presented and a new image corpus of maritime vessels was collected. Precision-recall results show improved recognition performance is achieved when accounting for vessel pose. In particular, the deformable part model has the best performance when considering the various components of a maritime vessel.",
"title": ""
},
{
"docid": "0b024671e04090051292b5e76a4690ae",
"text": "The brain has evolved in this multisensory context to perceive the world in an integrated fashion. Although there are good reasons to be skeptical of the influence of cognition on perception, here we argue that the study of sensory substitution devices might reveal that perception and cognition are not necessarily distinct, but rather continuous aspects of our information processing capacities.",
"title": ""
},
{
"docid": "cd8c1c24d4996217c8927be18c48488f",
"text": "Recurrent neural networks (RNNs), such as long short-term memory networks (LSTMs), serve as a fundamental building block for many sequence learning tasks, including machine translation, language modeling, and question answering. In this paper, we consider the specific problem of word-level language modeling and investigate strategies for regularizing and optimizing LSTMbased models. We propose the weight-dropped LSTM which uses DropConnect on hidden-tohidden weights as a form of recurrent regularization. Further, we introduce NT-ASGD, a variant of the averaged stochastic gradient method, wherein the averaging trigger is determined using a non-monotonic condition as opposed to being tuned by the user. Using these and other regularization strategies, we achieve state-of-the-art word level perplexities on two data sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the effectiveness of a neural cache in conjunction with our proposed model, we achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and 52.0 on WikiText-2.",
"title": ""
},
{
"docid": "280d9caa58ec97e5b0866d90b22dd35a",
"text": "Term structures of default probabilities are omnipresent in credit risk modeling: time-dynamic credit portfolio models, default times, and multi-year pricing models, they all need the time evolution of default probabilities as a basic model input. Although people tend to believe that from an economic point of view the Markov property as underlying model assumption is kind of questionable it seems to be common market practice to model PD term structures via Markov chain techniques. In this paper we illustrate that the Markov assumption carries us quite far if we allow for nonhomogeneous time behaviour of the Markov chain generating the PD term structures. As a ‘proof of concept’ we calibrate a nonhomogeneous continuous-time Markov chain (NHCTMC) to observed one-year rating migrations and multi-year default frequencies, hereby achieving convincing approximation quality. 1 Markov Chains in Credit Risk Modeling The probability of default (PD) for a client is a fundamental risk parameter in credit risk management. It is common practice to assign to every rating grade in a bank’s masterscale a one-year PD in line with regulatory requirements; see [1]. Table 1 shows an example for default frequencies assigned to rating grades from Standard and Poor’s (S&P). D AAA 0.00% AA 0.01% A 0.04% BBB 0.29% BB 1.28% B 6.24% CCC 32.35% Table 1: One-year default frequencies (D) assigned to S&P ratings; see [17], Table 9. Moreover, credit risk modeling concepts like dependent default times, multi-year credit pricing, and multi-horizon economic capital require more than just one-year PDs. For multi-year credit risk modeling, banks need a whole term structure (p R )t≥0 of (cumulative) PDs for every rating grade R; see, e.g., [2] for an introduction to PD term structures and [3] for their application to structured credit products. Every bank has its own (proprietary) way to calibrate PD term structures to bank-internal and external data. 
A look into the literature reveals that for the generation of PD term structures various Markov chain approaches, most often based on homogeneous chains, dominate current market practice. A landmark paper in this direction is the work by Jarrow, Lando, and Turnbull [7]. Further research has been done by various authors, see, e.g., Kadam [8], Lando [10], Sarfaraz et al. [12], Schuermann and Jafry [14, 15], Trueck and Oezturkmen [18], just to mention a few examples. A new approach via Markov mixtures has been presented recently by Frydman and Schuermann [5]. In Markov chain theory (see [11]) one distinguishes between discrete-time and continuous-time chains. For instance, a discrete-time chain can be specified by a one-year migration or transition matrix. [Footnote 1: In the literature, PD term structures are sometimes called credit curves. Footnote 2: A Markov chain is called homogeneous if transition probabilities do not depend on time.]",
"title": ""
}
] | scidocsrr |
03a5fd34d6ba199433ce53b959802b23 | Unified Point-of-Interest Recommendation with Temporal Interval Assessment | [
{
"docid": "7e6182248b3c3d7dedce16f8bfa58b28",
"text": "In this paper, we aim to provide a point-of-interests (POI) recommendation service for the rapid growing location-based social networks (LBSNs), e.g., Foursquare, Whrrl, etc. Our idea is to explore user preference, social influence and geographical influence for POI recommendations. In addition to deriving user preference based on user-based collaborative filtering and exploring social influence from friends, we put a special emphasis on geographical influence due to the spatial clustering phenomenon exhibited in user check-in activities of LBSNs. We argue that the geographical influence among POIs plays an important role in user check-in behaviors and model it by power law distribution. Accordingly, we develop a collaborative recommendation algorithm based on geographical influence based on naive Bayesian. Furthermore, we propose a unified POI recommendation framework, which fuses user preference to a POI with social influence and geographical influence. Finally, we conduct a comprehensive performance evaluation over two large-scale datasets collected from Foursquare and Whrrl. Experimental results with these real datasets show that the unified collaborative recommendation approach significantly outperforms a wide spectrum of alternative recommendation approaches.",
"title": ""
}
] | [
{
"docid": "82c4aa6bc189e011556ca7aa6d1688b9",
"text": "Two aspects of children’s early gender development the spontaneous production of gender labels and sex-typed play were examined longitudinally in a sample of 82 children. Survival analysis, a statistical technique well suited to questions involving developmental transitions, was used to investigate the timing of the onset of children’s gender labeling as based on mothers’ biweekly reports on their children’s language from 9 through 21 months. Videotapes of children’s play both alone and with mother at 17 and 21 months were independently analyzed for play with gender stereotyped and neutral toys. Finally, the relation between gender labeling and sex-typed play was examined. Children transitioned to using gender labels at approximately 19 months on average. Although girls and boys showed similar patterns in the development of gender labeling, girls began labeling significantly earlier than boys. Modest sex differences in play were present at 17 months and increased at 21 months. Gender labeling predicted increases in sex-typed play, suggesting that knowledge of gender categories might influence sex-typing before the age of 2.",
"title": ""
},
{
"docid": "b1cb31c70acb17d353116783845f85f5",
"text": "Wireless sensor networks have become increasingly popular due to their wide range of applications. Energy consumption is one of the biggest constraints of the wireless sensor node and this limitation combined with a typical deployment of large number of nodes have added many challenges to the design and management of wireless sensor networks. They are typically used for remote environment monitoring in areas where providing electrical power is difficult. Therefore, the devices need to be powered by batteries and alternative energy sources. Because battery energy is limited, the use of different techniques for energy saving is one of the hottest topics in WSNs. In this work, we present a survey of power saving and energy optimization techniques for wireless sensor networks, which enhances the ones in existence and introduces the reader to the most well known available methods that can be used to save energy. They are analyzed from several points of view: Device hardware, transmission, MAC and routing protocols.",
"title": ""
},
{
"docid": "0a45c122c6995df91f03f8615f4668d1",
"text": "The advanced microgrid is envisioned to be a critical part of the future smart grid because of its local intelligence, automation, interoperability, and distributed energy resources (DER) hosting capability. The enabling technology of advanced microgrids is the microgrid management system (MGMS). In this article, we discuss and review the concept of the MGMS and state-of-the-art solutions regarding centralized and distributed MGMSs in the primary, secondary, and tertiary levels, from which we observe a general tendency toward decentralization.",
"title": ""
},
{
"docid": "3c667426c8dcea8e7813e9eef23a1e15",
"text": "Radio spectrum has become a precious resource, and it has long been the dream of wireless communication engineers to maximize the utilization of the radio spectrum. Dynamic Spectrum Access (DSA) and Cognitive Radio (CR) have been considered promising to enhance the efficiency and utilization of the spectrum. In current overlay cognitive radio, spectrum sensing is first performed to detect the spectrum holes for the secondary user to harness. However, in a more sophisticated cognitive radio, the secondary user needs to detect more than just the existence of primary users and spectrum holes. For example, in a hybrid overlay/underlay cognitive radio, the secondary user needs to detect the transmission power and localization of the primary users as well. In this paper, we combine the spectrum sensing and primary user power/localization detection together, and propose to jointly detect not only the existence of primary users but also their power and localization via compressed sensing. Simulation results including the miss detection probability (MDP), false alarm probability (FAP) and reconstruction probability (RP) confirm the effectiveness and robustness of the proposed method.",
"title": ""
},
{
"docid": "534fd7868826681596586f00f47cd819",
"text": "Locally weighted projection regression is a new algorithm that achieves nonlinear function approximation in high dimensional spaces with redundant and irrelevant input dimensions. At its core, it uses locally linear models, spanned by a small number of univariate regressions in selected directions in input space. This paper evaluates different methods of projection regression and derives a nonlinear function approximator based on them. This nonparametric local learning system i) learns rapidly with second order learning methods based on incremental training, ii) uses statistically sound stochastic cross validation to learn, iii) adjusts its weighting kernels based on local information only, iv) has a computational complexity that is linear in the number of inputs, and v) can deal with a large number of possibly redundant inputs, as shown in evaluations with up to 50 dimensional data sets. To our knowledge, this is the first truly incremental spatially localized learning method to combine all these properties.",
"title": ""
},
{
"docid": "ca768eb654b323354b7d78969162cb81",
"text": "Hyper-redundant manipulators can be fragile, expensive, and limited in their flexibility due to the distributed and bulky actuators that are typically used to achieve the precision and degrees of freedom (DOFs) required. Here, a manipulator is proposed that is robust, high-force, low-cost, and highly articulated without employing traditional actuators mounted at the manipulator joints. Rather, local tunable stiffness is coupled with off-board spooler motors and tension cables to achieve complex manipulator configurations. Tunable stiffness is achieved by reversible jamming of granular media, which, by applying a vacuum to enclosed grains, causes the grains to transition between solid-like states and liquid-like ones. Experimental studies were conducted to identify grains with high strength-to-weight performance. A prototype of the manipulator is presented with performance analysis, with emphasis on speed, strength, and articulation. This novel design for a manipulator, and the use of jamming for robotic applications in general, could greatly benefit applications such as human-safe robotics and systems in which robots need to exhibit high flexibility to conform to their environments.",
"title": ""
},
{
"docid": "b91291a9b64ef7668633c2a3df82285a",
"text": "Recent work has managed to learn crosslingual word embeddings without parallel data by mapping monolingual embeddings to a shared space through adversarial training. However, their evaluation has focused on favorable conditions, using comparable corpora or closely-related languages, and we show that they often fail in more realistic scenarios. This work proposes an alternative approach based on a fully unsupervised initialization that explicitly exploits the structural similarity of the embeddings, and a robust self-learning algorithm that iteratively improves this solution. Our method succeeds in all tested scenarios and obtains the best published results in standard datasets, even surpassing previous supervised systems. Our implementation is released as an open source project at https://github.com/artetxem/vecmap.",
"title": ""
},
{
"docid": "50c3a6e263dcfec4faab370afdb17dfd",
"text": "Most state-of-the-art methods for representation learning are supervised, which require a large number of labeled data. This paper explores a novel unsupervised approach for learning visual representation. We introduce an image-wise discrimination criterion in addition to a pixel-wise reconstruction criterion to model both individual images and the difference between original images and reconstructed ones during neural network training. These criteria induce networks to focus on not only local features but also global high-level representations, so as to provide a competitive alternative to supervised representation learning methods, especially in the case of limited labeled data. We further introduce a competition mechanism to drive each component to increase its capability to win its adversary. In this way, the identity of representations and the likeness of reconstructed images to original ones are alternately improved. Experimental results on several tasks demonstrate the effectiveness of our approach.",
"title": ""
},
{
"docid": "bb089ffa37487912234ec0bab057605b",
"text": "Maps are a key component in image-based camera localization and visual SLAM systems: they are used to establish geometric constraints between images, correct drift in relative pose estimation, and relocalize cameras after lost tracking. The exact definitions of maps, however, are often application-specific and hand-crafted for different scenarios (e.g. 3D landmarks, lines, planes, bags of visual words). We propose to represent maps as a deep neural net called MapNet, which enables learning a data-driven map representation. Unlike prior work on learning maps, MapNet exploits cheap and ubiquitous sensory inputs like visual odometry and GPS in addition to images and fuses them together for camera localization. Geometric constraints expressed by these inputs, which have traditionally been used in bundle adjustment or pose-graph optimization, are formulated as loss terms in MapNet training and also used during inference. In addition to directly improving localization accuracy, this allows us to update the MapNet (i.e., maps) in a self-supervised manner using additional unlabeled video sequences from the scene. We also propose a novel parameterization for camera rotation which is better suited for deep-learning based camera pose regression. Experimental results on both the indoor 7-Scenes dataset and the outdoor Oxford RobotCar dataset show significant performance improvement over prior work. The MapNet project webpage is https://goo.gl/mRB3Au.",
"title": ""
},
{
"docid": "b26a9a78f11227e894af0e58b3b01c98",
"text": "Although all the cells in an organism contain the same genetic information, differences in the cell phenotype arise from the expression of lineage-specific genes. During myelopoiesis, external differentiating signals regulate the expression of a set of transcription factors. The combined action of these transcription factors subsequently determines the expression of myeloid-specific genes and the generation of monocytes and macrophages. In particular, the transcription factor PU.1 has a critical role in this process. We review the contribution of several transcription factors to the control of macrophage development.",
"title": ""
},
{
"docid": "515e2b726f0e5e7ceb5938fa5d917694",
"text": "Text preprocessing and segmentation are critical tasks in search and text mining applications. Due to the huge amount of documents that are exclusively presented in PDF format, most of the Data Mining (DM) and Information Retrieval (IR) systems must extract content from the PDF files. On some occasions this is a difficult task: the result of the extraction process from a PDF file is plain text, and it should be returned in the same order as a human would read the original PDF file. However, current tools for PDF text extraction fail in this objective when working with complex documents with multiple columns. For instance, this is the case of official government bulletins with legal information. In this task, it is mandatory to get correct and ordered text as a result of the application of the PDF extractor. It is very common that a legal article in a document refers to a previous article, and they should be offered in the right sequential order. To overcome these difficulties we have designed a new method for extraction of text in PDFs that simulates the human reading order. We evaluated our method and compared it against other PDF extraction tools and algorithms. Evaluation of our approach shows that it significantly outperforms the results of the existing tools and algorithms.",
"title": ""
},
{
"docid": "b67e6d5ee2451912ea6267cbc5274440",
"text": "The paper presents theoretical analyses, simulations and design of a PTAT (proportional to absolute temperature) temperature sensor that is based on the vertical PNP structure and dedicated to CMOS VLSI circuits. The considerations performed take into account specific properties of the materials that form electronic elements. Electrothermal simulations are performed in order to verify the unwanted self-heating effect of the sensor.",
"title": ""
},
{
"docid": "3953962740dd06ad2cadbb5d6b7c2cef",
"text": "The latest election cycle generated sobering examples of the threat that fake news poses to democracy. Primarily disseminated by hyper-partisan media outlets, fake news proved capable of becoming viral sensations that can dominate social media and influence elections. To address this problem, we begin with stance detection, which is a first step towards identifying fake news. The goal of this project is to identify whether given headline-article pairs: (1) agree, (2) disagree, (3) discuss the same topic, or (4) are not related at all, as described in [1]. Our method feeds the headline-article pairs into a bidirectional LSTM which first analyzes the article and then uses the acquired article representation to analyze the headline. On top of the output of the conditioned bidirectional LSTM, we concatenate global statistical features extracted from the headline-article pairs. We report a 9.7% improvement in the Fake News Challenge evaluation metric and a 22.7% improvement in mean F1 compared to the highest scoring baseline. We also present qualitative results that show how our method outperforms state-of-the art algorithms on this challenge.",
"title": ""
},
{
"docid": "efde92d1e86ff0b5f91b006521935621",
"text": "Sizing equations for electrical machinery are developed from basic principles. The technique provides new insights into: 1. The effect of stator inner and outer diameters. 2. The amount of copper and steel used. 3. A maximizing function. 4. Equivalent slot dimensions in terms of diameters and flux density distribution. 5. Pole number effects. While the treatment is analytical, the scope is broad and intended to assist in the design of electrical machinery. Examples are given showing how the machine's internal geometry can assume extreme proportions through changes in basic variables.",
"title": ""
},
{
"docid": "b16d8dddf037e60ba9121f85e7d9b45a",
"text": "Bike sharing systems, aiming at providing the missing links in public transportation systems, are becoming popular in urban cities. A key to success for a bike sharing systems is the effectiveness of rebalancing operations, that is, the efforts of restoring the number of bikes in each station to its target value by routing vehicles through pick-up and drop-off operations. There are two major issues for this bike rebalancing problem: the determination of station inventory target level and the large scale multiple capacitated vehicle routing optimization with outlier stations. The key challenges include demand prediction accuracy for inventory target level determination, and an effective optimizer for vehicle routing with hundreds of stations. To this end, in this paper, we develop a Meteorology Similarity Weighted K-Nearest-Neighbor (MSWK) regressor to predict the station pick-up demand based on large-scale historic trip records. Based on further analysis on the station network constructed by station-station connections and the trip duration, we propose an inter station bike transition (ISBT) model to predict the station drop-off demand. Then, we provide a mixed integer nonlinear programming (MINLP) formulation of multiple capacitated bike routing problem with the objective of minimizing total travel distance. To solve it, we propose an Adaptive Capacity Constrained K-centers Clustering (AdaCCKC) algorithm to separate outlier stations (the demands of these stations are very large and make the optimization infeasible) and group the rest stations into clusters within which one vehicle is scheduled to redistribute bikes between stations. In this way, the large scale multiple vehicle routing problem is reduced to inner cluster one vehicle routing problem with guaranteed feasible solutions. 
Finally, the extensive experimental results on the NYC Citi Bike system show the advantages of our approach for bike demand prediction and large-scale bike rebalancing optimization.",
"title": ""
},
{
"docid": "381c02fb1ce523ddbdfe3acdde20abf1",
"text": "Domain-specific accelerators (DSAs), which sacrifice programmability for efficiency, are a reaction to the waning benefits of device scaling. This article demonstrates that there are commonalities between DSAs that can be exploited with programmable mechanisms. The goals are to create a programmable architecture that can match the benefits of a DSA and to create a platform for future accelerator investigations.",
"title": ""
},
{
"docid": "3e26fe227e8c270fda4fe0b7d09b2985",
"text": "With the recent emergence of mobile platforms capable of executing increasingly complex software and the rising ubiquity of using mobile platforms in sensitive applications such as banking, there is a rising danger associated with malware targeted at mobile devices. The problem of detecting such malware presents unique challenges due to the limited resources available and limited privileges granted to the user, but also presents a unique opportunity in the required metadata attached to each application. In this article, we present a machine learning-based system for the detection of malware on Android devices. Our system extracts a number of features and trains a One-Class Support Vector Machine in an offline (off-device) manner, in order to leverage the higher computing power of a server or cluster of servers.",
"title": ""
},
{
"docid": "3682143e9cfe7dd139138b3b533c8c25",
"text": "In brushless excitation systems, the rotating diodes can experience open- or short-circuits. For a three-phase synchronous generator under no load, we present a theoretical development of the effects of diode failures on machine output voltage. Thereby, we derive the spectral response expected under each fault condition, and we propose an original algorithm for state monitoring of the rotating diodes. Moreover, given experimental observations of the spectral behavior of the stray flux, we propose an alternative technique. Laboratory tests have proven the effectiveness of the proposed methods for the detection of faulty diodes, even when the generator is fully loaded. However, their ability to distinguish between interrupted and short-circuited diodes has been limited to the no-load condition and to certain loads of specific natures.",
"title": ""
},
{
"docid": "bb0731a3bc69ddfe293fb1feb096f5f2",
"text": "To adapt to the rapidly evolving landscape of cyber threats, security professionals are actively exchanging Indicators of Compromise (IOC) (e.g., malware signatures, botnet IPs) through public sources (e.g. blogs, forums, tweets, etc.). Such information, often presented in articles, posts, white papers, etc., can be converted into a machine-readable OpenIOC format for automatic analysis and quick deployment to various security mechanisms like an intrusion detection system. With hundreds of thousands of sources in the wild, the IOC data are produced at a high volume and velocity today, making them increasingly hard for humans to manage. Efforts to automatically gather such information from unstructured text, however, are impeded by the limitations of today's Natural Language Processing (NLP) techniques, which cannot meet the high standard (in terms of accuracy and coverage) expected from the IOCs that could serve as direct input to a defense system. In this paper, we present iACE, an innovative solution for fully automated IOC extraction. Our approach is based upon the observation that the IOCs in technical articles are often described in a predictable way: being connected to a set of context terms (e.g., \"download\") through stable grammatical relations. Leveraging this observation, iACE is designed to automatically locate a putative IOC token (e.g., a zip file) and its context (e.g., \"malware\", \"download\") within the sentences in a technical article, and further analyze their relations through a novel application of graph mining techniques. Once the grammatical connection between the tokens is found to be in line with the way that the IOC is commonly presented, these tokens are extracted to generate an OpenIOC item that describes not only the indicator (e.g., a malicious zip file) but also its context (e.g., download from an external source). 
Running on 71,000 articles collected from 45 leading technical blogs, this new approach demonstrates a remarkable performance: it generated 900K OpenIOC items with a precision of 95% and a coverage over 90%, which is way beyond what the state-of-the-art NLP technique and industry IOC tool can achieve, at a speed of thousands of articles per hour. Further, by correlating the IOCs mined from the articles published over a 13-year span, our study sheds new light on the links across hundreds of seemingly unrelated attack instances, particularly their shared infrastructure resources, as well as the impacts of such open-source threat intelligence on security protection and evolution of attack strategies.",
"title": ""
},
{
"docid": "e6ac100eb695e089e22defcba01fae41",
"text": "Recent advances in video super-resolution have shown that convolutional neural networks combined with motion compensation are able to merge information from multiple low-resolution (LR) frames to generate high-quality images. Current state-of-the-art methods process a batch of LR frames to generate a single high-resolution (HR) frame and run this scheme in a sliding window fashion over the entire video, effectively treating the problem as a large number of separate multi-frame super-resolution tasks. This approach has two main weaknesses: 1) Each input frame is processed and warped multiple times, increasing the computational cost, and 2) each output frame is estimated independently conditioned on the input frames, limiting the system's ability to produce temporally consistent results. In this work, we propose an end-to-end trainable frame-recurrent video super-resolution framework that uses the previously inferred HR estimate to super-resolve the subsequent frame. This naturally encourages temporally consistent results and reduces the computational cost by warping only one image in each step. Furthermore, due to its recurrent nature, the proposed method has the ability to assimilate a large number of previous frames without increased computational demands. Extensive evaluations and comparisons with previous methods validate the strengths of our approach and demonstrate that the proposed framework is able to significantly outperform the current state of the art.",
"title": ""
}
] | scidocsrr |
ed989dd8908467e1038ee95aa0392a27 | STEM education K-12: perspectives on integration | [
{
"docid": "aabed671a466730e273225d8ee572f73",
"text": "It is essential to base instruction on a foundation of understanding of children’s thinking, but it is equally important to adopt the longer-term view that is needed to stretch these early competencies into forms of thinking that are complex, multifaceted, and subject to development over years, rather than weeks or months. We pursue this topic through our studies of model-based reasoning. We have identified four forms of models and related modeling practices that show promise for developing model-based reasoning. Models have the fortuitous feature of making forms of student reasoning public and inspectable—not only among the community of modelers, but also to teachers. Modeling provides feedback about student thinking that can guide teaching decisions, an important dividend for improving professional practice.",
"title": ""
}
] | [
{
"docid": "5d447d516e8f2db2e9d9943972b4b0d1",
"text": "Autonomous robot manipulation often involves both estimating the pose of the object to be manipulated and selecting a viable grasp point. Methods using RGB-D data have shown great success in solving these problems. However, there are situations where cost constraints or the working environment may limit the use of RGB-D sensors. When limited to monocular camera data only, both the problem of object pose estimation and of grasp point selection are very challenging. In the past, research has focused on solving these problems separately. In this work, we introduce a novel method called SilhoNet that bridges the gap between these two tasks. We use a Convolutional Neural Network (CNN) pipeline that takes in region of interest (ROI) proposals to simultaneously predict an intermediate silhouette representation for objects with an associated occlusion mask. The 3D pose is then regressed from the predicted silhouettes. Grasp points from a precomputed database are filtered by back-projecting them onto the occlusion mask to find which points are visible in the scene. We show that our method achieves better overall performance than the state-of-the-art PoseCNN network for 3D pose estimation on the YCB-video dataset.",
"title": ""
},
{
"docid": "3ccc5fd5bbf570a361b40afca37cec92",
"text": "Face detection techniques have been developed for decades, and one of the remaining open challenges is detecting small faces in unconstrained conditions. The reason is that tiny faces often lack detailed information and are blurry. In this paper, we propose an algorithm to directly generate a clear high-resolution face from a blurry small one by adopting a generative adversarial network (GAN). Toward this end, the basic GAN formulation achieves it by super-resolving and refining sequentially (e.g. SR-GAN and cycle-GAN). However, we design a novel network to address the problem of super-resolving and refining jointly. We also introduce new training losses to guide the generator network to recover fine details and to promote the discriminator network to distinguish real vs. fake and face vs. non-face simultaneously. Extensive experiments on the challenging dataset WIDER FACE demonstrate the effectiveness of our proposed method in restoring a clear high-resolution face from a blurry small one, and show that the detection performance outperforms other state-of-the-art methods.",
"title": ""
},
{
"docid": "892f6150dc4eef8ffaa419cf0ca69532",
"text": "Symmetric ankle propulsion is the cornerstone of efficient human walking. The ankle plantar flexors provide the majority of the mechanical work for the step-to-step transition and much of this work is delivered via elastic recoil from the Achilles' tendon — making it highly efficient. Even though the plantar flexors play a central role in propulsion, body-weight support and swing initiation during walking, very few assistive devices have focused on aiding ankle plantarflexion. Our goal was to develop a portable ankle exoskeleton taking inspiration from the passive elastic mechanisms at play in the human triceps surae-Achilles' tendon complex during walking. The challenge was to use parallel springs to provide ankle joint mechanical assistance during stance phase but allow free ankle rotation during swing phase. To do this we developed a novel ‘smart-clutch’ that can engage and disengage a parallel spring based only on ankle kinematic state. The system is purely passive — containing no motors, electronics or external power supply. This ‘energy-neutral’ ankle exoskeleton could be used to restore symmetry and reduce metabolic energy expenditure of walking in populations with weak ankle plantar flexors (e.g. stroke, spinal cord injury, normal aging).",
"title": ""
},
{
"docid": "5507f3199296478abbc6e106943a53ba",
"text": "Hiding a secret is needed in many situations. One might need to hide a password, an encryption key, a secret recipe, etc. Information can be secured with encryption, but the need to secure the secret key used for such encryption is important too. Imagine you encrypt your important files with one secret key; if such a key is lost, then all the important files will be inaccessible. Thus, secure and efficient key management mechanisms are required. One of them is the secret sharing scheme (SSS), which lets you split your secret into several parts and distribute them among selected parties. The secret can be recovered once these parties collaborate in some way. This paper will study these schemes and explain the need for them and their security. Across the years, various schemes have been presented. This paper will survey some of them, varying from trivial schemes to threshold-based ones. Explanations of these schemes' constructions are presented. The paper will also look at some applications of SSS.",
"title": ""
},
{
"docid": "0b22284d575fb5674f61529c367bb724",
"text": "The scapula fulfils many roles to facilitate optimal function of the shoulder. Normal function of the shoulder joint requires a scapula that can be properly aligned in multiple planes of motion of the upper extremity. Scapular dyskinesis, meaning abnormal motion of the scapula during shoulder movement, is a clinical finding commonly encountered by shoulder surgeons. It is best considered an impairment of optimal shoulder function. As such, it may be the underlying cause or the accompanying result of many forms of shoulder pain and dysfunction. The present review looks at the causes and treatment options for this indicator of shoulder pathology and aims to provide an overview of the management of disorders of the scapula.",
"title": ""
},
{
"docid": "928f64f8ef9b3ea5e107ae9c49840b2c",
"text": "Mass spectrometry-based proteomics has greatly benefitted from enormous advances in high resolution instrumentation in recent years. In particular, the combination of a linear ion trap with the Orbitrap analyzer has proven to be a popular instrument configuration. Complementing this hybrid trap-trap instrument, as well as the standalone Orbitrap analyzer termed Exactive, we here present coupling of a quadrupole mass filter to an Orbitrap analyzer. This \"Q Exactive\" instrument features high ion currents because of an S-lens, and fast high-energy collision-induced dissociation peptide fragmentation because of parallel filling and detection modes. The image current from the detector is processed by an \"enhanced Fourier Transformation\" algorithm, doubling mass spectrometric resolution. Together with almost instantaneous isolation and fragmentation, the instrument achieves overall cycle times of 1 s for a top 10 higher energy collisional dissociation method. More than 2500 proteins can be identified in standard 90-min gradients of tryptic digests of mammalian cell lysate, a significant improvement over previous Orbitrap mass spectrometers. Furthermore, the quadrupole Orbitrap analyzer combination enables multiplexed operation at the MS and tandem MS levels. This is demonstrated in a multiplexed single ion monitoring mode, in which the quadrupole rapidly switches among different narrow mass ranges that are analyzed in a single composite MS spectrum. Similarly, the quadrupole allows fragmentation of different precursor masses in rapid succession, followed by joint analysis of the higher energy collisional dissociation fragment ions in the Orbitrap analyzer. High performance in a robust benchtop format together with the ability to perform complex multiplexed scan modes make the Q Exactive an exciting new instrument for the proteomics and general analytical communities.",
"title": ""
},
{
"docid": "12dd3762060fd2e85732cd1807c7e5dc",
"text": "Context: Topic modeling finds human-readable structures in unstructured textual data. A widely used topic modeler is Latent Dirichlet allocation. When run on different datasets, LDA suffers from “order effects”, i.e. different topics are generated if the order of training data is shuffled. Such order effects introduce a systematic error for any study. This error can lead to misleading results; specifically, inaccurate topic descriptions and a reduction in the efficacy of text mining classification results. Objective: To provide a method in which distributions generated by LDA are more stable and can be used for further analysis. Method: We use LDADE, a search-based software engineering tool that tunes LDA’s parameters using DE (Differential Evolution). LDADE is evaluated on data from a programmer information exchange site (Stackoverflow), title and abstract text of thousands of Software Engineering (SE) papers, and software defect reports from NASA. Results were collected across different implementations of LDA (Python+Scikit-Learn, Scala+Spark); across different platforms (Linux, Macintosh) and for different kinds of LDAs (VEM, or using Gibbs sampling). Results were scored via topic stability and text mining classification accuracy. Results: In all treatments: (i) standard LDA exhibits very large topic instability; (ii) LDADE’s tunings dramatically reduce cluster instability; (iii) LDADE also leads to improved performances for supervised as well as unsupervised learning. Conclusion: Due to topic instability, using standard LDA with its “off-the-shelf” settings should now be deprecated. Also, in future, we should require SE papers that use LDA to test and (if needed) mitigate LDA topic instability. Finally, LDADE is a candidate technology for effectively and efficiently reducing that instability.",
"title": ""
},
{
"docid": "bd3b9d9e8a1dc39f384b073765175de6",
"text": "We generalize the stochastic block model to the important case in which edges are annotated with weights drawn from an exponential family distribution. This generalization introduces several technical difficulties for model estimation, which we solve using a Bayesian approach. We introduce a variational algorithm that efficiently approximates the model’s posterior distribution for dense graphs. In specific numerical experiments on edge-weighted networks, this weighted stochastic block model outperforms the common approach of first applying a single threshold to all weights and then applying the classic stochastic block model, which can obscure latent block structure in networks. This model will enable the recovery of latent structure in a broader range of network data than was previously possible.",
"title": ""
},
{
"docid": "286f7edf797040089d2adb667aaabc00",
"text": "We describe and compare three predominant email sender authentication mechanisms based on DNS: SPF, DKIM and Sender-ID Framework (SIDF). These mechanisms are designed mainly to assist in filtering of undesirable email messages, in particular spam and phishing emails. We clarify the limitations of these mechanisms, identify risks, and make recommendations. In particular, we argue that, properly used, SPF and DKIM can both help improve the efficiency and accuracy of email filtering.",
"title": ""
},
{
"docid": "683e496bd08fe3a55c63ba8788481184",
"text": "Ubicomp products have become more important in providing emotional experiences as users increasingly assimilate these products into their everyday lives. In this paper, we explored a new design perspective by applying a pet dog analogy to support emotional experience with ubicomp products. We were inspired by pet dogs, which are already intimate companions to humans and serve essential emotional functions in daily live. Our studies involved four phases. First, through our literature review, we articulated the key characteristics of pet dogs that apply to ubicomp products. Secondly, we applied these characteristics to a design case, CAMY, a mixed media PC peripheral with a camera. Like a pet dog, it interacts emotionally with a user. Thirdly, we conducted a user study with CAMY, which showed the effects of pet-like characteristics on users' emotional experiences, specifically on intimacy, sympathy, and delightedness. Finally, we presented other design cases and discussed the implications of utilizing a pet dog analogy to advance ubicomp systems for improved user experiences.",
"title": ""
},
{
"docid": "4db2110c6030c7d19e59dfe8d42cf8f1",
"text": "Extracellular vesicles (EVs) are membrane-enclosed vesicles that are released into the extracellular environment by various cell types, which can be classified as apoptotic bodies, microvesicles and exosomes. EVs have been shown to carry DNA, small RNAs, proteins and membrane lipids which are derived from the parental cells. Recently, several studies have demonstrated that EVs can regulate many biological processes, such as cancer progression, the immune response, cell proliferation, cell migration and blood vessel tube formation. This regulation is achieved through the release and transport of EVs and the transfer of their parental cell-derived molecular cargo to recipient cells. This thereby influences various physiological and sometimes pathological functions within the target cells. While intensive investigation of EVs has focused on pathological processes, the involvement of EVs in normal wound healing is less clear; however, recent preliminarily investigations have produced some initial insights. This review will provide an overview of EVs and discuss the current literature regarding the role of EVs in wound healing, especially, their influence on coagulation, cell proliferation, migration, angiogenesis, collagen production and extracellular matrix remodelling.",
"title": ""
},
{
"docid": "6ce94fa6f50d9ee27d9997abd7671e8a",
"text": "STUDY DESIGN\nThis study used a prospective, single-group repeated-measures design to analyze differences between the electromyographic (EMG) amplitudes produced by exercises for the trapezius and serratus anterior muscles.\n\n\nOBJECTIVE\nTo identify high-intensity exercises that elicit the greatest level of EMG activity in the trapezius and serratus anterior muscles.\n\n\nBACKGROUND\nThe trapezius and serratus anterior muscles are considered to be the only upward rotators of the scapula and are important for normal shoulder function. Electromyographic studies have been performed for these muscles during active and low-intensity exercises, but they have not been analyzed during high intensity exercises.\n\n\nMETHODS AND MEASURES\nSurface electrodes recorded EMG activity of the upper, middle, and lower trapezius and serratus anterior muscles during 10 exercises in 30 healthy subjects.\n\n\nRESULTS\nThe unilateral shoulder shrug exercise was found to produce the greatest EMG activity in the upper trapezius. For the middle trapezius, the greatest EMG amplitudes were generated with 2 exercises: shoulder horizontal extension with external rotation and the overhead arm raise in line with the lower trapezius muscle in the prone position. The arm raise overhead exercise in the prone position produced the maximum EMG activity in the lower trapezius. The serratus anterior was activated maximally with exercises requiring a great amount of upward rotation of the scapula. The exercises were shoulder abduction in the plane of the scapula above 120 degrees and a diagonal exercise with a combination of shoulder flexion, horizontal flexion, and external rotation.\n\n\nCONCLUSION\nThis study identified exercises that maximally activate the trapezius and serratus anterior muscles. This information may be helpful for clinicians in developing exercise programs for these muscles.",
"title": ""
},
{
"docid": "8bc7698e1c8e4ef835f76a7a22128d68",
"text": "The parallel data accesses inherent to large-scale data-intensive scientific computing require that data servers handle very high I/O concurrency. Concurrent requests from different processes or programs to hard disk can cause disk head thrashing between different disk regions, resulting in unacceptably low I/O performance. Current storage systems either rely on the disk scheduler at each data server, or use SSD as storage, to minimize this negative performance effect. However, the ability of the scheduler to alleviate this problem by scheduling requests in memory is limited by concerns such as long disk access times, and potential loss of dirty data with system failure. Meanwhile, SSD is too expensive to be widely used as the major storage device in the HPC environment. We propose iTransformer, a scheme that employs a small SSD to schedule requests for the data on disk. Being less space constrained than with more expensive DRAM, iTransformer can buffer larger amounts of dirty data before writing it back to the disk, or prefetch a larger volume of data in a batch into the SSD. In both cases high disk efficiency can be maintained even for concurrent requests. Furthermore, the scheme allows the scheduling of requests in the background to hide the cost of random disk access behind serving process requests. Finally, as a non-volatile memory, concerns about the quantity of dirty data are obviated. We have implemented iTransformer in the Linux kernel and tested it on a large cluster running PVFS2. Our experiments show that iTransformer can improve the I/O throughput of the cluster by 35% on average for MPI/IO benchmarks of various data access patterns.",
"title": ""
},
{
"docid": "01b1eaf090cf90f14266b1b2d3c6a462",
"text": "Centrality is an important concept in the study of social network analysis (SNA), which is used to measure the importance of a node in a network. While many different centrality measures exist, most of them are proposed and applied to static networks. However, most types of networks are dynamic that their topology changes over time. A popular approach to represent such networks is to construct a sequence of time windows with a single aggregated static graph that aggregates all edges observed over some time period. In this paper, an approach which overcomes the limitation of this representation is proposed based on the notion of the time-ordered graph, to measure the communication centrality of a node in dynamic networks.",
"title": ""
},
{
"docid": "6c64e7ca2e41a6eb70fe39747b706bf8",
"text": "Network Functions Virtualization (NFV) has enabled operators to dynamically place and allocate resources for network services to match workload requirements. However, unbounded end-to-end (e2e) latency of Service Function Chains (SFCs) resulting from distributed Virtualized Network Function (VNF) deployments can severely degrade performance. In particular, SFC instantiations with inter-data center links can incur high e2e latencies and Service Level Agreement (SLA) violations. These latencies can trigger timeouts and protocol errors with latency-sensitive operations.\n Traditional solutions to reduce e2e latency involve physical deployment of service elements in close proximity. These solutions are, however, no longer viable in the NFV era. In this paper, we present our solution that bounds the e2e latency in SFCs and inter-VNF control message exchanges by creating micro-service aggregates based on the affinity between VNFs. Our system, Contain-ed, dynamically creates and manages affinity aggregates using light-weight virtualization technologies like containers, allowing them to be placed in close proximity and hence bounding the e2e latency. We have applied Contain-ed to the Clearwater IP Multimedia System and built a proof-of-concept. Our results demonstrate that, by utilizing application and protocol specific knowledge, affinity aggregates can effectively bound SFC delays and significantly reduce protocol errors and service disruptions.",
"title": ""
},
{
"docid": "ba964bfa07eba81cbc9cdff1dbdac675",
"text": "We present drawing on air, a haptic-aided input technique for drawing controlled 3D curves through space. Drawing on air addresses a control problem with current 3D modeling approaches based on sweeping movement of the hands through the air. Although artists praise the immediacy and intuitiveness of these systems, a lack of control makes it nearly impossible to create 3D forms beyond quick design sketches or gesture drawings. Drawing on air introduces two new strategies for more controlled 3D drawing: one-handed drag drawing and two-handed tape drawing. Both approaches have advantages for drawing certain types of curves. We describe a tangent preserving method for transitioning between the two techniques while drawing. Haptic-aided redrawing and line weight adjustment while drawing are also supported in both approaches. In a quantitative user study evaluation by illustrators, the one and two-handed techniques performed at roughly the same level and both significantly outperformed freehand drawing and freehand drawing augmented with a haptic friction effect. We present the design and results of this experiment, as well as user feedback from artists and 3D models created in a style of line illustration for challenging artistic and scientific subjects.",
"title": ""
},
{
"docid": "c900e3dfacce7a37ce742b95a2bae675",
"text": "Friction stir welding (FSW) is a relatively new joining process that has been used for high production since 1996. Because melting does not occur and joining takes place below the melting temperature of the material, a high-quality weld is created. In this paper working principle and various factor affecting friction stir welding is discussed.",
"title": ""
},
{
"docid": "e769b1eab6d5ebf78bc5d2bb12f05607",
"text": "This paper presents a review of state-of-the-art approaches to automatic extraction of biomolecular events from scientific texts. Events involving biomolecules such as genes, transcription factors, or enzymes, for example, have a central role in biological processes and functions and provide valuable information for describing physiological and pathogenesis mechanisms. Event extraction from biomedical literature has a broad range of applications, including support for information retrieval, knowledge summarization, and information extraction and discovery. However, automatic event extraction is a challenging task due to the ambiguity and diversity of natural language and higher-level linguistic phenomena, such as speculations and negations, which occur in biological texts and can lead to misunderstanding or incorrect interpretation. Many strategies have been proposed in the last decade, originating from different research areas such as natural language processing, machine learning, and statistics. This review summarizes the most representative approaches in biomolecular event extraction and presents an analysis of the current state of the art and of commonly used methods, features, and tools. Finally, current research trends and future perspectives are also discussed.",
"title": ""
},
{
"docid": "c9d46300b513bca532ec080371511313",
"text": "On a gambling task that models real-life decisions, patients with bilateral lesions of the ventromedial prefrontal cortex (VM) opt for choices that yield high immediate gains in spite of higher future losses. In this study, we addressed three possibilities that may account for this behaviour: (i) hypersensitivity to reward; (ii) insensitivity to punishment; and (iii) insensitivity to future consequences, such that behaviour is always guided by immediate prospects. For this purpose, we designed a variant of the original gambling task in which the advantageous decks yielded high immediate punishment but even higher future reward. The disadvantageous decks yielded low immediate punishment but even lower future reward. We measured the skin conductance responses (SCRs) of subjects after they had received a reward or punishment. Patients with VM lesions opted for the disadvantageous decks in both the original and variant versions of the gambling task. The SCRs of VM lesion patients after they had received a reward or punishment were not significantly different from those of controls. In a second experiment, we investigated whether increasing the delayed punishment in the disadvantageous decks of the original task or decreasing the delayed reward in the disadvantageous decks of the variant task would shift the behaviour of VM lesion patients towards an advantageous strategy. Both manipulations failed to shift the behaviour of VM lesion patients away from the disadvantageous decks. These results suggest that patients with VM lesions are insensitive to future consequences, positive or negative, and are primarily guided by immediate prospects. This 'myopia for the future' in VM lesion patients persists in the face of severe adverse consequences, i.e. rising future punishment or declining future reward.",
"title": ""
},
{
"docid": "aebf72a8a624e0e7fa87f8e7eace9fae",
"text": "A highly-efficient monopulse antenna system is proposed for radar tracking system application. In this study, a novel integrated front-end and back-end complicated three-dimensional (3-D) system is realized practically to achieve high-level of self-compactness. A wideband and compact monopulse comparator network is developed and integrated as the back-end circuit in the system. Performance of the complete monopulse system is verified together with the front-end antenna array. To ensure the system's electrical efficiency and mechanical strength, a 3-D metal-direct-printing technique is utilized to fabricate the complicated structure, avoiding drawbacks from conventional machining methods and assembly processes. Experimental results show the monopulse system can achieve a bandwidth of 12.9% with VSWR less than 1.5 in the Ku-band, and isolation is better than 30 dB. More than 31.5 dBi gain can be maintained in the sum-patterns of wide bandwidth. The amplitude imbalance is less than 0.2 dB and null-depths are lower than -30 dB in the difference-patterns. In particular, with the help of the metal-printing technique, more than 90% efficiency can be retained in the monopulse system. It is a great improvement compared with that obtained from traditional machining approaches, indicating that this technique is promising for realizing high-performance RF intricate systems electrically and mechanically.",
"title": ""
}
] | scidocsrr |
397e1ca66cd9cc314ee3b6182ca6b548 | On Organizational Becoming: Rethinking Organizational Change | [
{
"docid": "efd723e99064699de2ed5400887c1eda",
"text": "Building on a formal theory of the structural aspects of organizational change initiated in Hannan, Pólos, and Carroll (2002a, 2002b), this paper focuses on structural inertia. We define inertia as a persistent organizational resistance to changing architectural features. We examine the evolutionary consequences of architectural inertia. The main theorem holds that selection favors architectural inertia in the sense that the median level of inertia in cohort of organizations presumably increases over time. A second theorem holds that the selection intensity favoring architectural inertia is greater when foresight about the consequences of changes is more limited. According to the prior theory of Hannan, Pólos, and Carroll (2002a, 2002b), foresight is limited by complexity and opacity. Thus it follows that the selection intensity favoring architectural inertia is stronger in populations composed of complex and opaque organizations than in those composed of simple and transparent ones. ∗This research was supported by fellowships from the Netherlands Institute for Advanced Study and by the Stanford Graduate School of Business Trust, ERIM at Erasmus University, and the Centre for Formal Studies in the Social Sciences at Lorand Eötvös University. We benefited from the comments of Jim Baron, Dave Barron, Gábor Péli, Joel Podolny, and the participants in the workshop of the Nagymaros Group on Organizational Ecology and in the Stanford Strategy Conference. †Stanford University ‡Loránd Eötvös University, Budapest and Erasmus University, Rotterdam §Stanford University",
"title": ""
},
{
"docid": "9c5535f218f6228ba6b2a8e5fdf93371",
"text": "Recent analyses of organizational change suggest a growing concern with the tempo of change, understood as the characteristic rate, rhythm, or pattern of work or activity. Episodic change is contrasted with continuous change on the basis of implied metaphors of organizing, analytic frameworks, ideal organizations, intervention theories, and roles for change agents. Episodic change follows the sequence unfreeze-transition-refreeze, whereas continuous change follows the sequence freeze-rebalance-unfreeze. Conceptualizations of inertia are seen to underlie the choice to view change as episodic or continuous.",
"title": ""
}
] | [
{
"docid": "b168f298448b3ba16b7f585caae7baa6",
"text": "Not only how good or bad people feel on average, but also how their feelings fluctuate across time is crucial for psychological health. The last 2 decades have witnessed a surge in research linking various patterns of short-term emotional change to adaptive or maladaptive psychological functioning, often with conflicting results. A meta-analysis was performed to identify consistent relationships between patterns of short-term emotion dynamics-including patterns reflecting emotional variability (measured in terms of within-person standard deviation of emotions across time), emotional instability (measured in terms of the magnitude of consecutive emotional changes), and emotional inertia of emotions over time (measured in terms of autocorrelation)-and relatively stable indicators of psychological well-being or psychopathology. We determined how such relationships are moderated by the type of emotional change, type of psychological well-being or psychopathology involved, valence of the emotion, and methodological factors. A total of 793 effect sizes were identified from 79 articles (N = 11,381) and were subjected to a 3-level meta-analysis. The results confirmed that overall, low psychological well-being co-occurs with more variable (overall ρ̂ = -.178), unstable (overall ρ̂ = -.205), but also more inert (overall ρ̂ = -.151) emotions. These effect sizes were stronger when involving negative compared with positive emotions. Moreover, the results provided evidence for consistency across different types of psychological well-being and psychopathology in their relation with these dynamical patterns, although specificity was also observed. The findings demonstrate that psychological flourishing is characterized by specific patterns of emotional fluctuations across time, and provide insight into what constitutes optimal and suboptimal emotional functioning. (PsycINFO Database Record",
"title": ""
},
{
"docid": "41cdd0e8bcbffbd4c66b8088e26b94fe",
"text": "We propose a neural network for 3D point cloud processing that exploits spherical convolution kernels and octree partitioning of space. The proposed metric-based spherical kernels systematically quantize point neighborhoods to identify local geometric structures in data, while maintaining the properties of translation-invariance and asymmetry. The network architecture itself is guided by octree data structuring that takes full advantage of the sparse nature of irregular point clouds. We specify spherical kernels with the help of neurons in each layer that in turn are associated with spatial locations. We exploit this association to avert dynamic kernel generation during network training, that enables efficient learning with high resolution point clouds. We demonstrate the utility of the spherical convolutional neural network for 3D object classification on standard benchmark datasets.",
"title": ""
},
{
"docid": "917287666755fe4b1832f5b6025414bb",
"text": "The Piver classification of radical hysterectomy for the treatment of cervical cancer is outdated and misused. The Surgery Committee of the Gynecological Cancer Group of the European Organization for Research and Treatment of Cancer (EORTC) produced, approved, and adopted a revised classification. It is hoped that at least within the EORTC participating centers, a standardization of procedures is achieved. The clinical indications of the new classification are discussed.",
"title": ""
},
{
"docid": "ad5a8c3ee37219868d056b341300008e",
"text": "The challenges of 4G are multifaceted. First, 4G requires multiple-input, multiple-output (MIMO) technology, and mobile devices supporting MIMO typically have multiple antennas. To obtain the benefits of MIMO communications systems, antennas typically must be properly configured to take advantage of the independent signal paths that can exist in the communications channel environment. [1] With proper design, one antenna’s radiation is prevented from traveling into the neighboring antenna and being absorbed by the opposite load circuitry. Typically, a combination of antenna separation and polarization is used to achieve the required signal isolation and independence. However, when the area inside devices such as smartphones, USB modems, and tablets is extremely limited, this approach often is not effective in meeting industrial design and performance criteria. Second, new LTE networks are expected to operate alongside all the existing services, such as 3G voice/data, Wi-Fi, Bluetooth, etc. Third, this problem gets even harder in the 700 MHz LTE band because the typical handset is not large enough to properly resonate at that frequency.",
"title": ""
},
{
"docid": "7159d958139d684e4a74abe252788a40",
"text": "Exploration in environments with sparse rewards has been a persistent problem in reinforcement learning (RL). Many tasks are natural to specify with a sparse reward, and manually shaping a reward function can result in suboptimal performance. However, finding a non-zero reward is exponentially more difficult with increasing task horizon or action dimensionality. This puts many real-world tasks out of practical reach of RL methods. In this work, we use demonstrations to overcome the exploration problem and successfully learn to perform long-horizon, multi-step robotics tasks with continuous control such as stacking blocks with a robot arm. Our method, which builds on top of Deep Deterministic Policy Gradients and Hindsight Experience Replay, provides an order of magnitude of speedup over RL on simulated robotics tasks. It is simple to implement and makes only the additional assumption that we can collect a small set of demonstrations. Furthermore, our method is able to solve tasks not solvable by either RL or behavior cloning alone, and often ends up outperforming the demonstrator policy.",
"title": ""
},
{
"docid": "e5edb616b5d0664cf8108127b0f8684c",
"text": "Night vision systems have become an important research area in recent years. Due to variations in weather conditions such as snow, fog, and rain, night images captured by camera may contain high level of noise. These conditions, in real life situations, may vary from no noise to extreme amount of noise corrupting images. Thus, ideal image restoration systems at night must consider various levels of noise and should have a technique to deal with wide range of noisy situations. In this paper, we have presented a new method that works well with different signal to noise ratios ranging from -1.58 dB to 20 dB. For moderate noise, Wigner distribution based algorithm gives good results, whereas for extreme amount of noise 2nd order Wigner distribution is used. The performance of our restoration technique is evaluated using MSE criteria. The results show that our method is capable of dealing with the wide range of Gaussian noise and gives consistent performance throughout.",
"title": ""
},
{
"docid": "d341486002f2b0f5e620f5a63873577c",
"text": "Various Internet solutions take their power processing and analysis from cloud computing services. Internet of Things (IoT) applications started discovering the benefits of computing, processing, and analysis on the device itself aiming to reduce latency for time-critical applications. However, on-device processing is not suitable for resource-constraints IoT devices. Edge computing (EC) came as an alternative solution that tends to move services and computation more closer to consumers, at the edge. In this letter, we study and discuss the applicability of merging deep learning (DL) models, i.e., convolutional neural network (CNN), recurrent neural network (RNN), and reinforcement learning (RL), with IoT and information-centric networking which is a promising future Internet architecture, combined all together with the EC concept. Therefore, a CNN model can be used in the IoT area to exploit reliably data from a complex environment. Moreover, RL and RNN have been recently integrated into IoT, which can be used to take the multi-modality of data in real-time applications into account.",
"title": ""
},
{
"docid": "1e4a74d8d4ae131467e12911fd6ac281",
"text": "Google Scholar has been well received by the research community. Its promises of free, universal and easy access to scientific literature as well as the perception that it covers better than other traditional multidisciplinary databases the areas of the Social Sciences and the Humanities have contributed to the quick expansion of Google Scholar Citations and Google Scholar Metrics: two new bibliometric products that offer citation data at the individual level and at journal level. In this paper we show the results of a experiment undertaken to analyze Google Scholar's capacity to detect citation counting manipulation. For this, six documents were uploaded to an institutional web domain authored by a false researcher and referencing all the publications of the members of the EC3 research group at the University of Granada. The detection of Google Scholar of these papers outburst the citations included in the Google Scholar Citations profiles of the authors. We discuss the effects of such outburst and how it could affect the future development of such products not only at individual level but also at journal level, especially if Google Scholar persists with its lack of transparency.",
"title": ""
},
{
"docid": "c0a2fc4ffe5910ffe9a4a9fe983106c3",
"text": "Robust inspection is important to ensure the safety of nuclear power plant components. An automated approach would require detecting often low contrast cracks that could be surrounded by or even within textures with similar appearances such as welding, scratches and grind marks. We propose a crack detection method for nuclear power plant inspection videos by fine tuning a deep neural network for detecting local patches containing cracks which are then grouped in spatial-temporal space for group-level classification. We evaluate the proposed method on a data set consisting of 17 videos consisting of nearly 150,000 frames of inspection video and provide comparison to prior methods.",
"title": ""
},
{
"docid": "0c0d0b6d4697b1a0fc454b995bcda79a",
"text": "Online multiplayer games, such as Gears of War and Halo, use skill-based matchmaking to give players fair and enjoyable matches. They depend on a skill rating system to infer accurate player skills from historical data. TrueSkill is a popular and effective skill rating system, working from only the winner and loser of each game. This paper presents an extension to TrueSkill that incorporates additional information that is readily available in online shooters, such as player experience, membership in a squad, the number of kills a player scored, tendency to quit, and skill in other game modes. This extension, which we call TrueSkill2, is shown to significantly improve the accuracy of skill ratings computed from Halo 5 matches. TrueSkill2 predicts historical match outcomes with 68% accuracy, compared to 52% accuracy for TrueSkill.",
"title": ""
},
{
"docid": "464f7d25cb2a845293a3eb8c427f872f",
"text": "Autism spectrum disorder is the fastest growing developmental disability in the United States. As such, there is an unprecedented need for research examining factors contributing to the health disparities in this population. This research suggests a relationship between the levels of physical activity and health outcomes. In fact, excessive sedentary behavior during early childhood is associated with a number of negative health outcomes. A total of 53 children participated in this study, including typically developing children (mean age = 42.5 ± 10.78 months, n = 19) and children with autism spectrum disorder (mean age = 47.42 ± 12.81 months, n = 34). The t-test results reveal that children with autism spectrum disorder spent significantly less time per day in sedentary behavior when compared to the typically developing group ( t(52) = 4.57, p < 0.001). Furthermore, the results from the general linear model reveal that there is no relationship between motor skills and the levels of physical activity. The ongoing need for objective measurement of physical activity in young children with autism spectrum disorder is of critical importance as it may shed light on an often overlooked need for early community-based interventions to increase physical activity early on in development.",
"title": ""
},
{
"docid": "2c7bafac9d4c4fedc43982bd53c99228",
"text": "One of the uniqueness of business is for firm to be customer focus. Study have shown that this could be achieved through blockchain technology in enhancing customer loyalty programs (Michael J. Casey 2015; John Ream et al 2016; Sean Dennis 2016; James O'Brien and Dave Montali, 2016; Peiguss 2012; Singh, Khan, 2012; and among others). Recent advances in block chain technology have provided the tools for marketing managers to create a new generation of being able to assess the level of control companies want to have over customer data and activities as well as security/privacy issues that always arise with every additional participant of the network While block chain technology is still in the early stages of adoption, it could prove valuable for loyalty rewards program providers. Hundreds of blockchain initiatives are already underway in various industries, particularly airline services, even though standardization is far from a reality. One attractive feature of loyalty rewards is that they are not core to business revenue and operations and companies willing to implement blockchain for customer loyalty programs benefit lower administrative costs, improved customer experiences, and increased user engagement (Michael J. Casey, 2015; James O'Brien and Dave Montali 2016; Peiguss 2012; Singh, Abstract: In today business world, companies have accelerated the use of Blockchain technology to enhance the brand recognition of their products and services. Company believes that the integration of Blockchain into the current business marketing strategy will enhance the growth of their products, and thus acting as a customer loyalty solution. The goal of this study is to obtain a deep understanding of the impact of blockchain technology in enhancing customer loyalty programs of airline business. 
To achieve the goal of the study, a contextualized and literature-based research instrument was used to measure the application of the investigated “constructs”, and a survey was conducted to collect data from the sample population. A convenience sample of 450 questionnaires was distributed to customers and managers of the surveyed airlines who could be reached by the researcher: 274 to airline customers/passengers, and the remaining 176 to managers in the various airlines researched. Questionnaires with instructions were hand-delivered to respondents. Out of the 397 completed questionnaires returned, 359 copies were found usable for the present study, resulting in an effective response rate of 79.7%. The respondents had different social, educational, and occupational backgrounds. The research instrument showed encouraging evidence of reliability and validity. Data were analyzed using descriptive statistics, percentages and t-test analysis. The findings clearly show that there is significant evidence that blockchain technology enhances customer loyalty programs of airline business. It was discovered that usage of blockchain technology is emphasized by the surveyed airline operators in Nigeria; that the extent of effective usage of customer loyalty programs is related to blockchain technology; and that the level or extent of effective usage of blockchain technology does affect the achievement of customer loyalty program goals and objectives. Feedback from the research will help expand knowledge as to the usefulness of blockchain technology as a customer loyalty solution.",
"title": ""
},
{
"docid": "c2ad090abd3f540436d3385bb6f3f013",
"text": "We propose a novel neural network model for joint part-of-speech (POS) tagging and dependency parsing. Our model extends the well-known BIST graph-based dependency parser (Kiperwasser and Goldberg, 2016) by incorporating a BiLSTM-based tagging component to produce automatically predicted POS tags for the parser. On the benchmark English Penn treebank, our model obtains strong UAS and LAS scores at 94.51% and 92.87%, respectively, producing 1.5+% absolute improvements to the BIST graph-based parser, and also obtaining a state-of-the-art POS tagging accuracy at 97.97%. Furthermore, experimental results on parsing 61 “big” Universal Dependencies treebanks from raw texts show that our model outperforms the baseline UDPipe (Straka and Straková, 2017) with 0.8% higher average POS tagging score and 3.6% higher average LAS score. In addition, with our model, we also obtain state-of-the-art downstream task scores for biomedical event extraction and opinion analysis applications. Our code is available together with all pretrained models at: https://github. com/datquocnguyen/jPTDP.",
"title": ""
},
{
"docid": "0e45e57b4e799ebf7e8b55feded7e9e1",
"text": "IMPORTANCE\nIt is increasingly evident that Parkinson disease (PD) is not a single entity but rather a heterogeneous neurodegenerative disorder.\n\n\nOBJECTIVE\nTo evaluate available evidence, based on findings from clinical, imaging, genetic and pathologic studies, supporting the differentiation of PD into subtypes.\n\n\nEVIDENCE REVIEW\nWe performed a systematic review of articles cited in PubMed between 1980 and 2013 using the following search terms: Parkinson disease, parkinsonism, tremor, postural instability and gait difficulty, and Parkinson disease subtypes. The final reference list was generated on the basis of originality and relevance to the broad scope of this review.\n\n\nFINDINGS\nSeveral subtypes, such as tremor-dominant PD and postural instability gait difficulty form of PD, have been found to cluster together. Other subtypes also have been identified, but validation by subtype-specific biomarkers is still lacking.\n\n\nCONCLUSIONS AND RELEVANCE\nSeveral PD subtypes have been identified, but the pathogenic mechanisms underlying the observed clinicopathologic heterogeneity in PD are still not well understood. Further research into subtype-specific diagnostic and prognostic biomarkers may provide insights into mechanisms of neurodegeneration and improve epidemiologic and therapeutic clinical trial designs.",
"title": ""
},
{
"docid": "a90f865e053b9339052a4d00281dbd03",
"text": "Generation of 3D data by deep neural network has been attracting increasing attention in the research community. The majority of extant works resort to regular representations such as volumetric grids or collection of images, however, these representations obscure the natural invariance of 3D shapes under geometric transformations, and also suffer from a number of other issues. In this paper we address the problem of 3D reconstruction from a single image, generating a straight-forward form of output – point cloud coordinates. Along with this problem arises a unique and interesting issue, that the groundtruth shape for an input image may be ambiguous. Driven by this unorthordox output form and the inherent ambiguity in groundtruth, we design architecture, loss function and learning paradigm that are novel and effective. Our final solution is a conditional shape sampler, capable of predicting multiple plausible 3D point clouds from an input image. In experiments not only can our system outperform state-of-the-art methods on single image based 3D reconstruction benchmarks, but it also shows strong performance for 3D shape completion and promising ability in making multiple plausible predictions.",
"title": ""
},
{
"docid": "0cae8939c57ff3713d7321102c80816e",
"text": "In this paper, we propose using 3D Convolutional Neural Networks for large scale user-independent continuous gesture recognition. We have trained an end-to-end deep network for continuous gesture recognition (jointly learning both the feature representation and the classifier). The network performs three-dimensional (i.e. space-time) convolutions to extract features related to both the appearance and motion from volumes of color frames. Space-time invariance of the extracted features is encoded via pooling layers. The earlier stages of the network are partially initialized using the work of Tran et al. before being adapted to the task of gesture recognition. An earlier version of the proposed method, which was trained for 11,250 iterations, was submitted to ChaLearn 2016 Continuous Gesture Recognition Challenge and ranked 2nd with the Mean Jaccard Index Score of 0.269235. When the proposed method was further trained for 28,750 iterations, it achieved state-of-the-art performance on the same dataset, yielding a 0.314779 Mean Jaccard Index Score.",
"title": ""
},
{
"docid": "31fca4faa53520b240267562c9e394fe",
"text": "Purpose – The aim of this study was two-fold: first, to examine the noxious effects of presenteeism on employees’ work well-being in a cross-cultural context involving Chinese and British employees; second, to explore the role of supervisory support as a pan-cultural stress buffer in the presenteeism process. Design/methodology/approach – Using structured questionnaires, the authors compared data collected from samples of 245 Chinese and 128 British employees working in various organizations and industries. Findings – Cross-cultural comparison revealed that the act of presenteeism was more prevalent among Chinese and they reported higher levels of strains than their British counterparts. Hierarchical regression analyses showed that presenteeism had noxious effects on exhaustion for both Chinese and British employees. Moreover, supervisory support buffered the negative impact of presenteeism on exhaustion for both Chinese and British employees. Specifically, the negative relation between presenteeism and exhaustion was stronger for those with more supervisory support. Practical implications – Presenteeism may be used as a career-protecting or career-promoting tactic. However, the negative effects of this behavior on employees’ work well-being across the culture divide should alert us to re-think its pros and cons as a career behavior. Employees in certain cultures (e.g. the hardworking Chinese) may exhibit more presenteeism behaviour, thus are in greater risk of ill-health. Originality/value – This is the first cross-cultural study demonstrating the universality of the act of presenteeism and its damaging effects on employees’ well-being. The authors’ findings of the buffering role of supervisory support across cultural contexts highlight the necessity to incorporate resources in mitigating the harmful impact of presenteeism.",
"title": ""
},
{
"docid": "461062a51b0c33fcbb0f47529f3a6fba",
"text": "Release of ATP from astrocytes is required for Ca2+ wave propagation among astrocytes and for feedback modulation of synaptic functions. However, the mechanism of ATP release and the source of ATP in astrocytes are still not known. Here we show that incubation of astrocytes with FM dyes leads to selective labelling of lysosomes. Time-lapse confocal imaging of FM dye-labelled fluorescent puncta, together with extracellular quenching and total-internal-reflection fluorescence microscopy (TIRFM), demonstrated directly that extracellular ATP or glutamate induced partial exocytosis of lysosomes, whereas an ischaemic insult with potassium cyanide induced both partial and full exocytosis of these organelles. We found that lysosomes contain abundant ATP, which could be released in a stimulus-dependent manner. Selective lysis of lysosomes abolished both ATP release and Ca2+ wave propagation among astrocytes, implicating physiological and pathological functions of regulated lysosome exocytosis in these cells.",
"title": ""
},
{
"docid": "3c8e85a977df74c2fd345db9934d4699",
"text": "The abstract paragraph should be indented 1/2 inch (3 picas) on both left and righthand margins. Use 10 point type, with a vertical spacing of 11 points. The word Abstract must be centered, bold, and in point size 12. Two line spaces precede the abstract. The abstract must be limited to one paragraph.",
"title": ""
}
] | scidocsrr |
dbddb71ba5b69885d3284474a7414188 | The influence of social media interactions on consumer–brand relationships: A three-country study of brand perceptions and marketing behaviors | [
{
"docid": "0ee70b75cdcf22b8a22a1810227d401f",
"text": "Traditionally, consumers used the Internet to simply expend content: they read it, they watched it, and they used it to buy products and services. Increasingly, however, consumers are utilizing platforms–—such as content sharing sites, blogs, social networking, and wikis–—to create, modify, share, and discuss Internet content. This represents the social media phenomenon, which can now significantly impact a firm’s reputation, sales, and even survival. Yet, many executives eschew or ignore this form of media because they don’t understand what it is, the various forms it can take, and how to engage with it and learn. In response, we present a framework that defines social media by using seven functional building blocks: identity, conversations, sharing, presence, relationships, reputation, and groups. As different social media activities are defined by the extent to which they focus on some or all of these blocks, we explain the implications that each block can have for how firms should engage with social media. To conclude, we present a number of recommendations regarding how firms should develop strategies for monitoring, understanding, and responding to different social media activities. final version published in Business Horizons (2011) v. 54 pp. 241-251. doi: 10.106/j.bushor.2011.01.005 1. Welcome to the jungle: The social media ecology Social media employ mobile and web-based technologies to create highly interactive platforms via which individuals and communities share, co-",
"title": ""
},
{
"docid": "bf1ba6901d6c64a341ba1491c6c2c3c9",
"text": "The present research proposes schema congruity as a theoretical basis for examining the effectiveness and consequences of product anthropomorphism. Results of two studies suggest that the ability of consumers to anthropomorphize a product and their consequent evaluation of that product depend on the extent to which that product is endowed with characteristics congruent with the proposed human schema. Furthermore, consumers’ perception of the product as human mediates the influence of feature type on product evaluation. Results of a third study, however, show that the affective tag attached to the specific human schema moderates the evaluation but not the successful anthropomorphizing of theproduct.",
"title": ""
},
{
"docid": "e6034310ee28d8ed4cbd1ea4c71cd76b",
"text": "This study emphasizes the need for standardized measurement tools for human robot interaction (HRI). If we are to make progress in this field then we must be able to compare the results from different studies. A literature review has been performed on the measurements of five key concepts in HRI: anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety. The results have been distilled into five consistent questionnaires using semantic differential scales. We report reliability and validity indicators based on several empirical studies that used these questionnaires. It is our hope that these questionnaires can be used by robot developers to monitor their progress. Psychologists are invited to further develop the questionnaires by adding new concepts, and to conduct further validations where it appears necessary. C. Bartneck ( ) Department of Industrial Design, Eindhoven University of Technology, Den Dolech 2, 5600 Eindhoven, The Netherlands e-mail: [email protected] D. Kulić Nakamura & Yamane Lab, Department of Mechano-Informatics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan e-mail: [email protected] E. Croft · S. Zoghbi Department of Mechanical Engineering, University of British Columbia, 6250 Applied Science Lane, Room 2054, Vancouver, V6T 1Z4, Canada E. Croft e-mail: [email protected] S. Zoghbi e-mail: [email protected]",
"title": ""
},
{
"docid": "6a27457b4d8efea03475f4d276a704c9",
"text": "Why are certain pieces of online content more viral than others? This article takes a psychological approach to understanding diffusion. Using a unique dataset of all the New York Times articles published over a three month period, the authors examine how emotion shapes virality. Results indicate that positive content is more viral than negative content, but that the relationship between emotion and social transmission is more complex than valence alone. Virality is driven, in part, by physiological arousal. Content that evokes high-arousal positive (awe) or negative (anger or anxiety) emotions is more viral. Content that evokes low arousal, or deactivating emotions (e.g., sadness) is less viral. These results hold even controlling for how surprising, interesting, or practically useful content is (all of which are positively linked to virality), as well as external drivers of attention (e.g., how prominently content was featured). Experimental results further demonstrate the causal impact of specific emotion on transmission, and illustrate that it is driven by the level of activation induced. Taken together, these findings shed light on why people share content and provide insight into designing effective viral marketing",
"title": ""
},
{
"docid": "3711e4c4feec68299f3f94858e7611f8",
"text": "There is an ongoing debate over the activities of brands and companies in social media. Some researchers believe social media provide a unique opportunity for brands to foster their relationships with customers, while others believe the contrary. Taking the perspective of the brand community building plus the brand trust and loyalty literatures, our goal is to show how brand communities based on social media influence elements of the customer centric model (i.e., the relationships among focal customer and brand, product, company, and other customers) and brand loyalty. A survey-based empirical study with 441 respondents was conducted. The results of structural equation modeling show that brand communities established on social media have positive effects on customer/product, customer/brand, customer/company and customer/other customers relationships, which in turn have positive effects on brand trust, and trust has positive effects on brand loyalty. We find that brand trust has a fully mediating role in converting the effects of enhanced relationships in brand community to brand loyalty. The implications for marketing practice and future research are discussed. © 2012 Elsevier Ltd. All rights reserved.",
"title": ""
}
] | [
{
"docid": "80ed0585f1b040f2af895f1067502899",
"text": "In this paper, we present the concept of transmitting power without using wires i.e., transmitting power as microwaves from one place to another is in order to reduce the cost, transmission and distribution losses. This concept is known as Microwave Power transmission (MPT). We also discussed the technological developments in Wireless Power Transmission (WPT) which are required for the improment .The components which are requiredfor the development of Microwave Power transmission(MPT)are also mentioned along with the performance when they are connected to various devices at different frequency levels . The advantages, disadvantages, biological impacts and applications of WPT are also presented.",
"title": ""
},
{
"docid": "759140ad09a5a8ce5c5e1ca78e238de1",
"text": "Various issues make framework development harder than regular development. Building product lines and frameworks requires increased coordination and communication between stakeholders and across the organization.\n The difficulty of building the right abstractions ranges from understanding the domain models, selecting and evaluating the framework architecture, to designing the right interfaces, and adds to the complexity of a framework project.",
"title": ""
},
{
"docid": "743aeaa668ba32e6561e9e62015e24cd",
"text": "A smart city enables the effective utilization of resources and better quality of services to the citizens. To provide services such as air quality management, weather monitoring and automation of homes and buildings in a smart city, the basic parameters are temperature, humidity and CO2. This paper presents a customised design of an Internet of Things (IoT) enabled environment monitoring system to monitor temperature, humidity and CO2. In developed system, data is sent from the transmitter node to the receiver node. The data received at the receiver node is monitored and recorded in an excel sheet in a personal computer (PC) through a Graphical User Interface (GUI), made in LabVIEW. An Android application has also been developed through which data is transferred from LabVIEW to a smartphone, for monitoring data remotely. The results and the performance of the proposed system is discussed.",
"title": ""
},
{
"docid": "06ef397d13383ff09f2f6741c0626192",
"text": "A fully-integrated low-dropout regulator (LDO) with fast transient response and full spectrum power supply rejection (PSR) is proposed to provide a clean supply for noise-sensitive building blocks in wideband communication systems. With the proposed point-of-load LDO, chip-level high-frequency glitches are well attenuated, consequently the system performance is improved. A tri-loop LDO architecture is proposed and verified in a 65 nm CMOS process. In comparison to other fully-integrated designs, the output pole is set to be the dominant pole, and the internal poles are pushed to higher frequencies with only 50 μA of total quiescent current. For a 1.2 V input voltage and 1 V output voltage, the measured undershoot and overshoot is only 43 mV and 82 mV, respectively, for load transient of 0 μA to 10 mA within edge times of 200 ps. It achieves a transient response time of 1.15 ns and the figure-of-merit (FOM) of 5.74 ps. PSR is measured to be better than -12 dB over the whole spectrum (DC to 20 GHz tested). The prototype chip measures 260×90 μm2, including 140 pF of stacked on-chip capacitors.",
"title": ""
},
{
"docid": "d2c4f17c9bb6ec2112fe39e95dfed94e",
"text": "B loyalty and the more modern topics of computing customer lifetime value and structuring loyalty programs remain the focal point for a remarkable number of research articles. At first, this research appears consistent with firm practices. However, close scrutiny reveals disaffirming evidence. Many current so-called loyalty programs appear unrelated to the cultivation of customer brand loyalty and the creation of customer assets. True investments are up-front expenditures that produce much greater future returns. In contrast, many socalled loyalty programs are shams because they produce liabilities (e.g., promises of future rewards or deferred rebates) rather than assets. These programs produce short-term revenue from customers while producing substantial future obligations to those customers. Rather than showing trust by committing to the customer, the firm asks the customer to trust the firm—that is, trust that future rewards are indeed forthcoming. The entire idea is antithetical to the concept of a customer asset. Many modern loyalty programs resemble old-fashioned trading stamps or deferred rebates that promise future benefits for current patronage. A true loyalty program invests in the customer (e.g., provides free up-front training, allows familiarization or customization) with the expectation of greater future revenue. Alternative motives for extant programs are discussed.",
"title": ""
},
{
"docid": "1969bf5a07349cc5a9b498e0437e41fe",
"text": "In this work, we tackle the problem of instance segmentation, the task of simultaneously solving object detection and semantic segmentation. Towards this goal, we present a model, called MaskLab, which produces three outputs: box detection, semantic segmentation, and direction prediction. Building on top of the Faster-RCNN object detector, the predicted boxes provide accurate localization of object instances. Within each region of interest, MaskLab performs foreground/background segmentation by combining semantic and direction prediction. Semantic segmentation assists the model in distinguishing between objects of different semantic classes including background, while the direction prediction, estimating each pixel's direction towards its corresponding center, allows separating instances of the same semantic class. Moreover, we explore the effect of incorporating recent successful methods from both segmentation and detection (e.g., atrous convolution and hypercolumn). Our proposed model is evaluated on the COCO instance segmentation benchmark and shows comparable performance with other state-of-art models.",
"title": ""
},
{
"docid": "6300234fd4ed55285459b8561b5c0ed0",
"text": "In conventional power system operation, droop control methods are used to facilitate load sharing among different generation sources. This method compensates for both active and reactive power imbalances by adjusting the output voltage magnitude and frequency of the generating unit. Both P-ω and Q-V droops have been used in synchronous machines for decades. Similar droop controllers were used in this study to develop a control algorithm for a three-phase isolated (islanded) inverter. Controllers modeled in a synchronous dq reference frame were simulated in PLECS and validated with the hardware setup. A small-signal model based on an averaged model of the inverter was developed to study the system's dynamics. The accuracy of this mathematical model was then verified using the data obtained from the experimental and simulation results. This validated model is a useful tool for the further dynamic analysis of a microgrid.",
"title": ""
},
{
"docid": "066b4130dbc9c36d244e5da88936dfc4",
"text": "Real-time strategy (RTS) games have drawn great attention in the AI research community, for they offer a challenging and rich testbed for both machine learning and AI techniques. Due to their enormous state spaces and possible map configurations, learning good and generalizable representations for machine learning is crucial to build agents that can perform well in complex RTS games. In this paper we present a convolutional neural network approach to learn an evaluation function that focuses on learning general features that are independent of the map configuration or size. We first train and evaluate the network on a winner prediction task on a dataset collected with a small set of maps with a fixed size. Then we evaluate the network’s generalizability to three set of larger maps. by using it as an evaluation function in the context of Monte Carlo Tree Search. Our results show that the presented architecture can successfully capture general and map-independent features applicable to more complex RTS situations.",
"title": ""
},
{
"docid": "5739713d17ec5cc6952832644b2a1386",
"text": "Group Support Systems (GSS) can improve the productivity of Group Work by offering a variety of tools to assist a virtual group across geographical distances. Experience shows that the value of a GSS depends on how purposefully and skillfully it is used. We present a framework for a universal GSS based on a thinkLet- and thinXel-based Group Process Modeling Language (GPML). Our framework approach uses the GPML to describe different kinds of group processes in an unambiguous and compact representation and to guide the participants automatically through these processes. We assume that a GSS based on this GPML can provide the following advantages: to support the user by designing and executing a collaboration process and to increase the applicability of GSSs for different kinds of group processes. We will present a prototype and use different kinds of group processes to illustrate the application of a GPML for a universal GSS.",
"title": ""
},
{
"docid": "fd8ac9c61b2146a27465e96b4f0eb5f6",
"text": "In this paper performance of LQR and ANFIS control for a Double Inverted Pendulum system is compared. The double inverted pendulum system is highly unstable and nonlinear. Mathematical model is presented by linearizing the system about its vertical position. The analysis of the system is performed for its stability, controllability and observability. Furthermore, the LQR controller and ANFIS controller based on the state variable fusion is proposed for the control of the double inverted pendulum system and simulation results show that ANFIS controller has better tracking performance and disturbance rejecting performance as compared to LQR controller.",
"title": ""
},
{
"docid": "2f01e912a6fbafca1e791ef18fb51ceb",
"text": "Visualizing the result of users' opinion mining on twitter using social network graph can play a crucial role in decision-making. Available data visualizing tools, such as NodeXL, use a specific file format as an input to construct and visualize the social network graph. One of the main components of the input file is the sentimental score of the users' opinion. This motivates us to develop a free and open source system that can take the opinion of users in raw text format and produce easy-to-interpret visualization of opinion mining and sentiment analysis result on a social network. We use a public machine learning library called LingPipe Library to classify the sentiments of users' opinion into positive, negative and neutral classes. Our proposed system can be used to analyze and visualize users' opinion on the network level to determine sub-social structures (sub-groups). Moreover, the proposed system can also identify influential people in the social network by using node level metrics such as betweenness centrality. In addition to the network level and node level analysis, our proposed method also provides an efficient filtering mechanism by either time and date, or the sentiment score. We tested our proposed system using user opinions about different Samsung products and related issues that are collected from five official twitter accounts of Samsung Company. The test results show that our proposed system will be helpful to analyze and visualize the opinion of users at both network level and node level.",
"title": ""
},
{
"docid": "b8700283c7fb65ba2e814adffdbd84f8",
"text": "Human immunoglobulin preparations for intravenous or subcutaneous administration are the cornerstone of treatment in patients with primary immunodeficiency diseases affecting the humoral immune system. Intravenous preparations have a number of important uses in the treatment of other diseases in humans as well, some for which acceptable treatment alternatives do not exist. We provide an update of the evidence-based guideline on immunoglobulin therapy, last published in 2006. Given the potential risks and inherent scarcity of human immunoglobulin, careful consideration of its indications and administration is warranted.",
"title": ""
},
{
"docid": "f74acc86ecbd8aa9678fbcb13559ae01",
"text": "Strawberry and kiwi leathers were used to develop a new healthy and preservative-free fruit snack for new markets. Fruit puree was dehydrated at 60 °C for 20 h and subjected to accelerated storage. Soluble solids, titratable acidity, pH, water activity (aw ), total phenolic (TP), antioxidant activity (AOA) and capacity (ORAC), and color change (browning index) were measured in leathers, cooked, and fresh purees. An untrained panel was used to evaluate consumer acceptability. Soluble solids of fresh purees were 11.24 to 13.04 °Brix, whereas pH was 3.46 to 3.39. Leathers presented an aw of 0.59 to 0.67, and a moisture content of 21 kg water/100 kg. BI decreased in both leathers over accelerated storage period. TP and AOA were higher (P ≤ 0.05) in strawberry formulations. ORAC decreased 57% in strawberry and 65% in kiwi leathers when compared to fruit puree. TP and AOA increased in strawberries during storage. Strawberry and Kiwi leathers may be a feasible new, natural, high antioxidant, and healthy snack for the Chilean and other world markets, such as Europe, particularly the strawberry leather, which was preferred by untrained panelists.",
"title": ""
},
{
"docid": "6ff51eea5a590996ed0219a4991d32f2",
"text": "The number R(4, 3, 3) is often presented as the unknown Ramsey number with the best chances of being found “soon”. Yet, its precise value has remained unknown for almost 50 years. This paper presents a methodology based on abstraction and symmetry breaking that applies to solve hard graph edge-coloring problems. The utility of this methodology is demonstrated by using it to compute the value R(4, 3, 3) = 30. Along the way it is required to first compute the previously unknown set ℛ ( 3 , 3 , 3 ; 13 ) $\\mathcal {R}(3,3,3;13)$ consisting of 78,892 Ramsey colorings.",
"title": ""
},
{
"docid": "9be50791156572e6e1a579952073d810",
"text": "A synthetic aperture radar (SAR) raw data simulator is an important tool for testing the system parameters and the imaging algorithms. In this paper, a scene raw data simulator based on an inverse ω-k algorithm for bistatic SAR of a translational invariant case is proposed. The differences between simulations of monostatic and bistatic SAR are also described. The algorithm proposed has high precision and can be used in long-baseline configuration and for single-pass interferometry. Implementation details are described, and plenty of simulation results are provided to validate the algorithm.",
"title": ""
},
{
"docid": "fcf01af44da0c796cdaf02c8e05a0fd3",
"text": "As a promising paradigm to reduce both capital and operating expenditures, the cloud radio access network (C-RAN) has been shown to provide high spectral efficiency and energy efficiency. Motivated by its significant theoretical performance gains and potential advantages, C-RANs have been advocated by both the industry and research community. This paper comprehensively surveys the recent advances of C-RANs, including system architectures, key techniques, and open issues. The system architectures with different functional splits and the corresponding characteristics are comprehensively summarized and discussed. The state-of-the-art key techniques in C-RANs are classified as: the fronthaul compression, large-scale collaborative processing, and channel estimation in the physical layer; and the radio resource allocation and optimization in the upper layer. Additionally, given the extensiveness of the research area, open issues, and challenges are presented to spur future investigations, in which the involvement of edge cache, big data mining, socialaware device-to-device, cognitive radio, software defined network, and physical layer security for C-RANs are discussed, and the progress of testbed development and trial test is introduced as well.",
"title": ""
},
{
"docid": "af105dd5dca0642d119ca20661d5f633",
"text": "This paper derives the forward and inverse kinematics of a humanoid robot. The specific humanoid that the derivation is for is a robot with 27 degrees of freedom but the procedure can be easily applied to other similar humanoid platforms. First, the forward and inverse kinematics are derived for the arms and legs. Then, the kinematics for the torso and the head are solved. Finally, the forward and inverse kinematic solutions for the whole body are derived using the kinematics of arms, legs, torso, and head.",
"title": ""
},
{
"docid": "e682f1b64d6eae69252ea2298f035ac6",
"text": "Objective\nPatient notes in electronic health records (EHRs) may contain critical information for medical investigations. However, the vast majority of medical investigators can only access de-identified notes, in order to protect the confidentiality of patients. In the United States, the Health Insurance Portability and Accountability Act (HIPAA) defines 18 types of protected health information that needs to be removed to de-identify patient notes. Manual de-identification is impractical given the size of electronic health record databases, the limited number of researchers with access to non-de-identified notes, and the frequent mistakes of human annotators. A reliable automated de-identification system would consequently be of high value.\n\n\nMaterials and Methods\nWe introduce the first de-identification system based on artificial neural networks (ANNs), which requires no handcrafted features or rules, unlike existing systems. We compare the performance of the system with state-of-the-art systems on two datasets: the i2b2 2014 de-identification challenge dataset, which is the largest publicly available de-identification dataset, and the MIMIC de-identification dataset, which we assembled and is twice as large as the i2b2 2014 dataset.\n\n\nResults\nOur ANN model outperforms the state-of-the-art systems. It yields an F1-score of 97.85 on the i2b2 2014 dataset, with a recall of 97.38 and a precision of 98.32, and an F1-score of 99.23 on the MIMIC de-identification dataset, with a recall of 99.25 and a precision of 99.21.\n\n\nConclusion\nOur findings support the use of ANNs for de-identification of patient notes, as they show better performance than previously published systems while requiring no manual feature engineering.",
"title": ""
},
{
"docid": "5404f89c379ffc79de345414baf1e084",
"text": "OBJECTIVES\nTo describe pelvic organ prolapse surgical success rates using a variety of definitions with differing requirements for anatomic, symptomatic, or re-treatment outcomes.\n\n\nMETHODS\nEighteen different surgical success definitions were evaluated in participants who underwent abdominal sacrocolpopexy within the Colpopexy and Urinary Reduction Efforts trial. The participants' assessments of overall improvement and rating of treatment success were compared between surgical success and failure for each of the definitions studied. The Wilcoxon rank sum test was used to identify significant differences in outcomes between success and failure.\n\n\nRESULTS\nTreatment success varied widely depending on definition used (19.2-97.2%). Approximately 71% of the participants considered their surgery \"very successful,\" and 85.2% considered themselves \"much better\" than before surgery. Definitions of success requiring all anatomic support to be proximal to the hymen had the lowest treatment success (19.2-57.6%). Approximately 94% achieved surgical success when it was defined as the absence of prolapse beyond the hymen. Subjective cure (absence of bulge symptoms) occurred in 92.1% while absence of re-treatment occurred in 97.2% of participants. Subjective cure was associated with significant improvements in the patient's assessment of both treatment success and overall improvement, more so than any other definition considered (P<.001 and <.001, respectively). Similarly, the greatest difference in symptom burden and health-related quality of life as measured by the Pelvic Organ Prolapse Distress Inventory and Pelvic Organ Prolapse Impact Questionnaire scores between treatment successes and failures was noted when success was defined as subjective cure (P<.001).\n\n\nCONCLUSION\nThe definition of success substantially affects treatment success rates after pelvic organ prolapse surgery. 
The absence of vaginal bulge symptoms postoperatively has a significant relationship with a patient's assessment of overall improvement, while anatomic success alone does not.\n\n\nLEVEL OF EVIDENCE\nII.",
"title": ""
},
{
"docid": "328052245c3a5144c492e761e7f51bae",
"text": "The screening of novel materials with good performance and the modelling of quantitative structureactivity relationships (QSARs), among other issues, are hot topics in the field of materials science. Traditional experiments and computational modelling often consume tremendous time and resources and are limited by their experimental conditions and theoretical foundations. Thus, it is imperative to develop a new method of accelerating the discovery and design process for novel materials. Recently, materials discovery and design using machine learning have been receiving increasing attention and have achieved great improvements in both time efficiency and prediction accuracy. In this review, we first outline the typical mode of and basic procedures for applying machine learning in materials science, and we classify and compare the main algorithms. Then, the current research status is reviewed with regard to applications of machine learning in material property prediction, in new materials discovery and for other purposes. Finally, we discuss problems related to machine learning in materials science, propose possible solutions, and forecast potential directions of future research. By directly combining computational studies with experiments, we hope to provide insight into the parameters that affect the properties of materials, thereby enabling more efficient and target-oriented research on materials dis-",
"title": ""
}
] | scidocsrr |
93c558a7adca8ac67221fda4bf4d8a89 | Common Elements Wideband MIMO Antenna System for WiFi/LTE Access-Point Applications | [
{
"docid": "38f6aaf5844ddb6e4ed0665559b7f813",
"text": "A novel dual-broadband multiple-input-multiple-output (MIMO) antenna system is developed. The MIMO antenna system consists of two dual-broadband antenna elements, each of which comprises two opened loops: an outer loop and an inner loop. The opened outer loop acts as a half-wave dipole and is excited by electromagnetic coupling from the inner loop, leading to a broadband performance for the lower band. The opened inner loop serves as two monopoles. A combination of the two monopoles and the higher modes from the outer loop results in a broadband performance for the upper band. The bandwidths (return loss >;10 dB) achieved for the dual-broadband antenna element are 1.5-2.8 GHz (~ 60%) for the lower band and 4.7-8.5 GHz (~ 58\\%) for the upper band. Two U-shaped slots are introduced to reduce the coupling between the two dual-broadband antenna elements. The isolation achieved is higher than 15 dB in the lower band and 20 dB in the upper band, leading to an envelope correlation coefficient of less than 0.01. The dual-broadband MIMO antenna system has a compact volume of 50×17×0.8 mm3, suitable for GSM/UMTS/LTE and WLAN communication handsets.",
"title": ""
},
{
"docid": "ecfd9b38cc68c4af9addb4915424d6d0",
"text": "The conditions for antenna diversity action are investigated. In terms of the fields, a condition is shown to be that the incident field and the far field of the diversity antenna should obey (or nearly obey) an orthogonality relationship. The role of mutual coupling is central, and it is different from that in a conventional array antenna. In terms of antenna parameters, a sufficient condition for diversity action for a certain class of high gain antennas at the mobile, which approximates most practical mobile antennas, is shown to be zero (or low) mutual resistance between elements. This is not the case at the base station, where the condition is necessary only. The mutual resistance condition offers a powerful design tool, and examples of new mobile diversity antennas are discussed along with some existing designs.",
"title": ""
}
] | [
{
"docid": "b56d144f1cda6378367ea21e9c76a39e",
"text": "The main objective of our work has been to develop and then propose a new and unique methodology useful in developing the various features of heart rate variability (HRV) and carotid arterial wall thickness helpful in diagnosing cardiovascular disease. We also propose a suitable prediction model to enhance the reliability of medical examinations and treatments for cardiovascular disease. We analyzed HRV for three recumbent postures. The interaction effects between the recumbent postures and groups of normal people and heart patients were observed based on HRV indexes. We also measured intima-media of carotid arteries and used measurements of arterial wall thickness as other features. Patients underwent carotid artery scanning using high-resolution ultrasound devised in a previous study. In order to extract various features, we tested six classification methods. As a result, CPAR and SVM (gave about 85%-90% goodness of fit) outperforming the other classifiers.",
"title": ""
},
{
"docid": "c4df4e0f9a77328ed5c81c124dbe643b",
"text": "In this paper, the bridgeless interleaved boost topology is proposed for plug-in hybrid electric vehicle and electric vehicle battery chargers to achieve high efficiency, which is critical to minimize the charger size, charging time and the amount and cost of electricity drawn from the utility. An analytical model for this topology is developed, enabling the calculation of power losses and efficiency. Experimental and simulation results of prototype units converting the universal AC input voltage to 400 V DC at 3.4 kW are given to verify the proof of concept, and analytical work reported in this paper.",
"title": ""
},
{
"docid": "4a5cfc32cccc96c49739cc49f311ddb4",
"text": "We present an approach for creating realistic synthetic views of existing architectural scenes from a sparse set of still photographs. Our approach, which combines both geometrybased and image-based modeling and rendering techniques, has two components. The rst component is an easy-to-use photogrammetric modeling system which facilitates the recovery of a basic geometric model of the photographed scene. The modeling system is e ective and robust because it exploits the constraints that are characteristic of architectural scenes. The second component is a model-based stereo algorithm, which recovers how the real scene deviates from the basic model. By making use of the model, our stereo approach can robustly recover accurate depth from image pairs with large baselines. Consequently, our approach can model large architectural environments with far fewer photographs than current imagebased modeling approaches. As an intermediate result, we present view-dependent texture mapping, a method of better simulating geometric detail on basic models. Our approach can recover models for use in either geometry-based or image-based rendering systems. We present results that demonstrate our approach's abilty to create realistic renderings of architectural scenes from viewpoints far from the original photographs.",
"title": ""
},
{
"docid": "1c075aac5462cf6c6251d6c9c1a679c0",
"text": "Why You Can’t Find a Taxi in the Rain and Other Labor Supply Lessons from Cab Drivers In a seminal paper, Camerer, Babcock, Loewenstein, and Thaler (1997) find that the wage elasticity of daily hours of work New York City (NYC) taxi drivers is negative and conclude that their labor supply behavior is consistent with target earning (having reference dependent preferences). I replicate and extend the CBLT analysis using data from all trips taken in all taxi cabs in NYC for the five years from 2009-2013. Using the model of expectations-based reference points of Koszegi and Rabin (2006), I distinguish between anticipated and unanticipated daily wage variation and present evidence that only a small fraction of wage variation (about 1/8) is unanticipated so that reference dependence (which is relevant only in response to unanticipated variation) can, at best, play a limited role in determining labor supply. The overall pattern in my data is clear: drivers tend to respond positively to unanticipated as well as anticipated increases in earnings opportunities. This is consistent with the neoclassical optimizing model of labor supply and does not support the reference dependent preferences model. I explore heterogeneity across drivers in their labor supply elasticities and consider whether new drivers differ from more experienced drivers in their behavior. I find substantial heterogeneity across drivers in their elasticities, but the estimated elasticities are generally positive and only rarely substantially negative. I also find that new drivers with smaller elasticities are more likely to exit the industry while drivers who remain learn quickly to be better optimizers (have positive labor supply elasticities that grow with experience). JEL Classification: J22, D01, D03",
"title": ""
},
{
"docid": "7b02c36cef0c195d755b6cc1c7fbda2e",
"text": "Content based object retrieval across large scale surveillance video dataset is a significant and challenging task, in which learning an effective compact object descriptor plays a critical role. In this paper, we propose an efficient deep compact descriptor with bagging auto-encoders. Specifically, we take advantage of discriminative CNN to extract efficient deep features, which not only involve rich semantic information but also can filter background noise. Besides, to boost the retrieval speed, auto-encoders are used to map the high-dimensional real-valued CNN features into short binary codes. Considering the instability of auto-encoder, we adopt a bagging strategy to fuse multiple auto-encoders to reduce the generalization error, thus further improving the retrieval accuracy. In addition, bagging is easy for parallel computing, so retrieval efficiency can be guaranteed. Retrieval experimental results on the dataset of 100k visual objects extracted from multi-camera surveillance videos demonstrate the effectiveness of the proposed deep compact descriptor.",
"title": ""
},
{
"docid": "574282b45a87abf6e8478886c0400244",
"text": "A mobile wireless sensor network owes its name to the presence of mobile sink or sensor nodes within the network. The advantages of mobile WSN over static WSN are better energy efficiency, improved coverage, enhanced target tracking and superior channel capacity. In this paper we present and discuss hierarchical multi-tiered architecture for mobile wireless sensor network. This architecture is proposed for the future pervasive computing age. We also elaborate on the impact of mobility on different performance metrics in mobile WSN. A study of some of the possible application scenarios for pervasive computing involving mobile WSN is also presented. These application scenarios will be discussed in their implementation context. While discussing the possible applications, we also study related technologies that appear promising to be integrated with mobile WSN in the ubiquitous computing. With an enormous growth in number of cellular subscribers, we therefore place the mobile phone as the key element in future ubiquitous wireless networks. With the powerful computing, communicating and storage capacities of these mobile devices, the network performance can benefit from the architecture in terms of scalability, energy efficiency and packet delay, etc.",
"title": ""
},
{
"docid": "aaba5dc8efc9b6a62255139965b6f98d",
"text": "The interaction of an autonomous mobile robot with the real world critically depends on the robots morphology and on its environment. Building a model of these aspects is extremely complex, making simulation insu cient for accurate validation of control algorithms. If simulation environments are often very e cient, the tools for experimenting with real robots are often inadequate. The traditional programming languages and tools seldom provide enought support for realtime experiments, thus hindering the understanding of the control algorithms and making the experimentation complex and time-consuming. A miniature robot is presented: it has a cylindrical shape measuring 55 mm in diameter and 30 mm in height. Due to its small size, experiments can be performed quickly and cost-e ectively in a small working area. Small peripherals can be designed and connected to the basic module and can take advantage of a versatile communication scheme. A serial-link is provided to run control algorithms on a workstation during debugging, thereby giving the user the opportunity of employing all available graphical tools. Once debugged, the algorithm can be downloaded to the robot and run on its own processor. Experimentation with groups of robots is hardly possible with commercially available hardware. The size and the price of the described robot open the way to cost-e ective investigations into collective behaviour. This aspect of research drives the design of the robot described in this paper. Experiments with some twenty units are planned for the near future.",
"title": ""
},
{
"docid": "65a8c1faa262cd428045854ffcae3fae",
"text": "Extracting named entities in text and linking extracted names to a given knowledge base are fundamental tasks in applications for text understanding. Existing systems typically run a named entity recognition (NER) model to extract entity names first, then run an entity linking model to link extracted names to a knowledge base. NER and linking models are usually trained separately, and the mutual dependency between the two tasks is ignored. We propose JERL, Joint Entity Recognition and Linking, to jointly model NER and linking tasks and capture the mutual dependency between them. It allows the information from each task to improve the performance of the other. To the best of our knowledge, JERL is the first model to jointly optimize NER and linking tasks together completely. In experiments on the CoNLL’03/AIDA data set, JERL outperforms state-of-art NER and linking systems, and we find improvements of 0.4% absolute F1 for NER on CoNLL’03, and 0.36% absolute precision@1 for linking on AIDA.",
"title": ""
},
{
"docid": "9f87424062c624bc417f848cc2f33bf3",
"text": "The sentiment mining is a fast growing topic of both academic research and commercial applications, especially with the widespread of short-text applications on the Web. A fundamental problem that confronts sentiment mining is the automatics and correctness of mined sentiment. This paper proposes an DLDA (Double Latent Dirichlet Allocation) model to analyze sentiment for short-texts based on topic model. Central to DLDA is to add sentiment to topic model and consider sentiment as equal to topic, but independent of topic. DLDA is actually two methods DLDA I and its improvement DLDA II. Compared to the single topic-word LDA, the double LDA I, i.e., DLDA I designs another sentiment-word LDA. Both LDAs are independent of each other, but they combine to influence the selected words in short-texts. DLDA II is an improvement of DLDA I. It employs entropy formula to assign weights of words in the Gibbs sampling based on the ideas that words with stronger sentiment orientation should be assigned with higher weights. Experiments show that compared with other traditional topic methods, both DLDA I and II can achieve higher accuracy with less manual needs.",
"title": ""
},
{
"docid": "815215b56160ab38745fded16edd31d6",
"text": "Object detection in videos has drawn increasing attention recently with the introduction of the large-scale ImageNet VID dataset. Different from object detection in static images, temporal information in videos is vital for object detection. To fully utilize temporal information, state-of-the-art methods [15, 14] are based on spatiotemporal tubelets, which are essentially sequences of associated bounding boxes across time. However, the existing methods have major limitations in generating tubelets in terms of quality and efficiency. Motion-based [14] methods are able to obtain dense tubelets efficiently, but the lengths are generally only several frames, which is not optimal for incorporating long-term temporal information. Appearance-based [15] methods, usually involving generic object tracking, could generate long tubelets, but are usually computationally expensive. In this work, we propose a framework for object detection in videos, which consists of a novel tubelet proposal network to efficiently generate spatiotemporal proposals, and a Long Short-term Memory (LSTM) network that incorporates temporal information from tubelet proposals for achieving high object detection accuracy in videos. Experiments on the large-scale ImageNet VID dataset demonstrate the effectiveness of the proposed framework for object detection in videos.",
"title": ""
},
{
"docid": "72c054c955a34fbac8e798665ece8f57",
"text": "In this paper, we propose and empirically validate a suite of hotspot patterns: recurring architecture problems that occur in most complex systems and incur high maintenance costs. In particular, we introduce two novel hotspot patterns, Unstable Interface and Implicit Cross-module Dependency. These patterns are defined based on Baldwin and Clark's design rule theory, and detected by the combination of history and architecture information. Through our tool-supported evaluations, we show that these patterns not only identify the most error-prone and change-prone files, they also pinpoint specific architecture problems that may be the root causes of bug-proneness and change-proneness. Significantly, we show that 1) these structure-history integrated patterns contribute more to error- and change-proneness than other hotspot patterns, and 2) the more hotspot patterns a file is involved in, the more error- and change-prone it is. Finally, we report on an industrial case study to demonstrate the practicality of these hotspot patterns. The architect and developers confirmed that our hotspot detector discovered the majority of the architecture problems causing maintenance pain, and they have started to improve the system's maintainability by refactoring and fixing the identified architecture issues.",
"title": ""
},
{
"docid": "1cf73c4949ad0c610e90a172b02803e4",
"text": "BACKGROUND\nTo date the manner in which information reaches the nucleus on that part within the three-dimensional structure where specific restorative processes of structural components of the cell are required is unknown. The soluble signalling molecules generated in the course of destructive and restorative processes communicate only as needed.\n\n\nHYPOTHESIS\nAll molecules show temperature-dependent molecular vibration creating a radiation in the infrared region. Each molecule species has in its turn a specific frequency pattern under given specific conditions. Changes in their structural composition result in modified frequency patterns of the molecules in question. The main structural elements of the cell membrane, of the endoplasmic reticulum, of the Golgi apparatus, and of the different microsomes representing the great variety of polar lipids show characteristic frequency patterns with peaks in the region characterised by low water absorption. These structural elements are very dynamic, mainly caused by the creation of signal molecules and transport containers. By means of the characteristic radiation, the area where repair or substitution services are needed could be identified; this spatial information complements the signalling of the soluble signal molecules. Based on their resonance properties receptors located on the outer leaflet of the nuclear envelope should be able to read typical frequencies and pass them into the nucleus. Clearly this physical signalling must be blocked by the cell membrane to obviate the flow of information into adjacent cells.\n\n\nCONCLUSION\nIf the hypothesis can be proved experimentally, it should be possible to identify and verify characteristic infrared frequency patterns. The application of these signal frequencies onto cells would open entirely new possibilities in medicine and all biological disciplines specifically to influence cell growth and metabolism. 
Similar to this intracellular system, an extracellular signalling system with many new therapeutic options has to be discussed.",
"title": ""
},
{
"docid": "53c0564d82737d51ca9b7ea96a624be4",
"text": "In part 1 of this article, an occupational therapy model of practice for children with attention deficit hyperactivity disorder (ADHD) was described (Chu and Reynolds 2007). It addressed some specific areas of human functioning related to children with ADHD in order to guide the practice of occupational therapy. The model provides an approach to identifying and communicating occupational performance difficulties in relation to the interaction between the child, the environment and the demands of the task. A family-centred occupational therapy assessment and treatment package based on the model was outlined. The delivery of the package was underpinned by the principles of the family-centred care approach. Part 2 of this two-part article reports on a multicentre study, which was designed to evaluate the effectiveness and acceptability of the proposed assessment and treatment package and thereby to offer some validation of the delineation model. It is important to note that no treatment has yet been proved to ‘cure’ the condition of ADHD or to produce any enduring effects in affected children once the treatment is withdrawn. So far, the only empirically validated treatments for children with ADHD with substantial research evidence are psychostimulant medication, behavioural and educational management, and combined medication and behavioural management (DuPaul and Barkley 1993, A family-centred occupational therapy assessment and treatment package for children with attention deficit hyperactivity disorder (ADHD) was evaluated. The package involves a multidimensional evaluation and a multifaceted intervention, which are aimed at achieving a goodness-of-fit between the child, the task demands and the environment in which the child carries out the task. The package lasts for 3 months, with 12 weekly contacts with the child, parents and teacher. A multicentre study was carried out, with 20 occupational therapists participating. 
Following a 3-day training course, they implemented the package and supplied the data that they had collected from 20 children. The outcomes were assessed using the ADHD Rating Scales, pre-intervention and post-intervention. The results showed behavioural improvement in the majority of the children. The Measure of Processes of Care – 20-item version (MPOC-20) provided data on the parents’ perceptions of the family-centredness of the package and also showed positive ratings. The results offer some support for the package and the guiding model of practice, but caution should be exercised in generalising the results because of the small sample size, lack of randomisation, absence of a control group and potential experimenter effects from the research therapists. A larger-scale randomised controlled trial should be carried out to evaluate the efficacy of an improved package.",
"title": ""
},
{
"docid": "7d03c3e0e20b825809bebb5b2da1baed",
"text": "Flexoelectricity and the concomitant emergence of electromechanical size-effects at the nanoscale have been recently exploited to propose tantalizing concepts such as the creation of “apparently piezoelectric” materials without piezoelectric materials, e.g. graphene, emergence of “giant” piezoelectricity at the nanoscale, enhanced energy harvesting, among others. The aforementioned developments pertain primarily to hard ceramic crystals. In this work, we develop a nonlinear theoretical framework for flexoelectricity in soft materials. Using the concept of soft electret materials, we illustrate an interesting nonlinear interplay between the so-called Maxwell stress effect and flexoelectricity, and propose the design of a novel class of apparently piezoelectric materials whose constituents are intrinsically non-piezoelectric. In particular, we show that the electret-Maxwell stress based mechanism can be combined with flexoelectricity to achieve unprecedentedly high values of electromechanical coupling. Flexoelectricity is also important for a special class of soft materials: biological membranes. In this context, flexoelectricity manifests itself as the development of polarization upon changes in curvature. Flexoelectricity is found to be important in a number of biological functions including hearing, ion transport and in some situations where mechanotransduction is necessary. In this work, we present a simple linearized theory of flexoelectricity in biological membranes and some illustrative examples. & 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5116079b69aeb1858177429fabd10f80",
"text": "Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations at present lack geometric invariance, which limits their robustness for tasks such as classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (or MOP-CNN for short). This approach works by extracting CNN activations for local patches at multiple scales, followed by orderless VLAD pooling of these activations at each scale level and concatenating the result. This feature representation decisively outperforms global CNN activations and achieves state-of-the-art performance for scene classification on such challenging benchmarks as SUN397, MIT Indoor Scenes, and ILSVRC2012, as well as for instance-level retrieval on the Holidays dataset.",
"title": ""
},
{
"docid": "cb6c4f97fcefa003e890c8c4a97ff34b",
"text": "When interacting and communicating with virtual agents in immersive environments, the agents’ behavior should be believable and authentic. Thereby, one important aspect is a convincing auralization of their speech. In this work-in-progress paper a study design to evaluate the effect of adding directivity to speech sound source on the perceived social presence of a virtual agent is presented. Therefore, we describe the study design and discuss first results of a prestudy as well as consequential improvements of the design.",
"title": ""
},
{
"docid": "f68161697aed6d12598b0b9e34aeae68",
"text": "Automation in agriculture comes into play to increase productivity, quality and economic growth of the country. Fruit grading is an important process for producers which affects the fruits quality evaluation and export market. Although the grading and sorting can be done by the human, but it is slow, labor intensive, error prone and tedious. Hence, there is a need of an intelligent fruit grading system. In recent years, researchers had developed numerous algorithms for fruit sorting using computer vision. Color, textural and morphological features are the most commonly used to identify the diseases, maturity and class of the fruits. Subsequently, these features are used to train soft computing technique network. In this paper, use of image processing in agriculture has been reviewed so as to provide an insight to the use of vision based systems highlighting their advantages and disadvantages.",
"title": ""
},
{
"docid": "f5fdc2aac2caa3f8ac4648ebe599d707",
"text": "This paper describes a Genetic Algorithms approach to a manpower-scheduling problem arising at a major UK hospital. Although Genetic Algorithms have been successfully used for similar problems in the past, they always had to overcome the limitations of the classical Genetic Algorithms paradigm in handling the conflict between objectives and constraints. The approach taken here is to use an indirect coding based on permutations of the nurses, and a heuristic decoder that builds schedules from these permutations. Computational experiments based on 52 weeks of live data are used to evaluate three different decoders with varying levels of intelligence, and four well-known crossover operators. Results are further enhanced by introducing a hybrid crossover operator and by making use of simple bounds to reduce the size of the solution space. The results reveal that the proposed algorithm is able to find high quality solutions and is both faster and more flexible than a recently published Tabu Search approach.",
"title": ""
},
{
"docid": "21528ffae0a6e4bd4fe9acfce5660473",
"text": "Ultrasound image quality is related to the receive beamformer’s ability. Delay and sum (DAS), a conventional beamformer, is combined with the coherence factor (CF) technique to suppress side lobe levels. The purpose of this study is to improve these beamformer’s abilities. It has been shown that extension of the receive aperture can improve the receive beamformer’s ability in radar studies. This paper shows that the focusing quality of CF and CF+DAS in medical ultrasound can be increased by extension of the receive aperture’s length in phased synthetic aperture (PSA) imaging. The 3-dB width of the main lobe in the receive beam related to CF focusing decreased to 0.55 mm using the proposed PSA compared to the conventional phased array (PHA) imaging, whose FWHM is about 0.9 mm. The clutter-to-total-energy ratio (CTR) represented by R20 dB showed an improvement of 50 and 33% for CF and CF+DAS beamformers, respectively, with PSA as compared to PHA. In addition, simulation results validated the effectiveness of PSA versus PHA. In applications where there are no important limitations on the SNR, PSA imaging is recommended as it increases the ability of the receive beamformer for better focusing.",
"title": ""
}
] | scidocsrr |
07191f5cf39dd695b5e3a2c034217899 | Ontologies in Ubiquitous Computing | [
{
"docid": "a172c51270d6e334b50dcc6233c54877",
"text": "m U biquitous computing enhances computer use by making many computers available throughout the physical environment, while making them effectively invisible to the user. This article explains what is new and different about the computer science involved in ubiquitous computing. First, it provides a brief overview of ubiquitous computing, then elaborates through a series of examples drawn from various subdisciplines of computer science: hardware components (e.g., chips), network protocols, interaction substrates (e.g., software for screens and pens), applications, privacy, and computational methods. Ubiquitous computing offers a framework for new and exciting research across the spectrum of computer science. Since we started this work at Xerox Palo Alto Research Center (PARC) in 1988 a few places have begun work on this possible next-generation computing environment in which each person is continually interacting with hundreds of nearby wirelessly interconnected computers. The goal is to achieve the most effective kind of technology, that which is essentially invisible to the user. To bring computers to this point while retaining their power will require radically new kinds of computers of all sizes and shapes to be available to each person. I call this future world \"Ubiquitous Comput ing\" (Ubicomp) [27]. The research method for ubiquitous computing is standard experimental computer science: the construction of working prototypes of the necessai-y infrastructure in sufficient quantity to debug the viability of the systems in everyday use; ourselves and a few colleagues serving as guinea pigs. This is",
"title": ""
}
] | [
{
"docid": "a5ed1ebf973e3ed7ea106e55795e3249",
"text": "The variable reluctance (VR) resolver is generally used instead of an optical encoder as a position sensor on motors for hybrid electric vehicles or electric vehicles owing to its reliability, low cost, and ease of installation. The commonly used conventional winding method for the VR resolver has disadvantages, such as complicated winding and unsuitability for mass production. This paper proposes an improved winding method that leads to simpler winding and better suitability for mass production than the conventional method. In this paper, through the design and finite element analysis for two types of output winding methods, the advantages and disadvantages of each method are presented, and the validity of the proposed winding method is verified. In addition, experiments with the VR resolver using the proposed winding method have been performed to verify its performance.",
"title": ""
},
{
"docid": "4071b0a0f3887a5ad210509e6ad5498a",
"text": "Nowadays, the IoT is largely dependent on sensors. The IoT devices are embedded with sensors and have the ability to communicate. A variety of sensors play a key role in networked devices in IoT. In order to facilitate the management of such sensors, this paper investigates how to use SNMP protocol, which is widely used in network device management, to implement sensors information management of IoT system. The principles and implement details to setup the MIB file, agent and manager application are discussed. A prototype system is setup to validate our methods. The test results show that because of its easy use and strong expansibility, SNMP is suitable and a bright way for sensors information management of IoT system.",
"title": ""
},
{
"docid": "0ecaccc94977a15cbaee4aaa08509295",
"text": "This paper reviews the use of socially interactive robots to assist in the therapy of children with autism. The extent to which the robots were successful in helping the children in their social, emotional, and communication deficits was investigated. Child-robot interactions were scrutinized with respect to the different target behaviours that are to be elicited from a child during therapy. These behaviours were thoroughly examined with respect to a child's development needs. Most importantly, experimental data from the surveyed works were extracted and analyzed in terms of the target behaviours and how each robot was used during a therapy session to achieve these behaviours. The study concludes by categorizing the different therapeutic roles that these robots were observed to play, and highlights the important design features that enable them to achieve high levels of effectiveness in autism therapy.",
"title": ""
},
{
"docid": "d41694f90694df023e62f4f6777beadf",
"text": "In many randomised trials researchers measure a continuous variable at baseline and again as an outcome assessed at follow up. Baseline measurements are common in trials of chronic conditions where researchers want to see whether a treatment can reduce pre-existing levels of pain, anxiety, hypertension, and the like. Statistical comparisons in such trials can be made in several ways. Comparison of follow up (posttreatment) scores will give a result such as “at the end of the trial, mean pain scores were 15 mm (95% confidence interval 10 to 20 mm) lower in the treatment group.” Alternatively a change score can be calculated by subtracting the follow up score from the baseline score, leading to a statement such as “pain reductions were 20 mm (16 to 24 mm) greater on treatment than control.” If the average baseline scores are the same in each group the estimated treatment effect will be the same using these two simple approaches. If the treatment is effective the statistical significance of the treatment effect by the two methods will depend on the correlation between baseline and follow up scores. If the correlation is low using the change score will add variation and the follow up score is more likely to show a significant result. Conversely, if the correlation is high using only the follow up score will lose information and the change score is more likely to be significant. It is incorrect, however, to choose whichever analysis gives a more significant finding. The method of analysis should be specified in the trial protocol. Some use change scores to take account of chance imbalances at baseline between the treatment groups. However, analysing change does not control for baseline imbalance because of regression to the mean : baseline values are negatively correlated with change because patients with low scores at baseline generally improve more than those with high scores. 
A better approach is to use analysis of covariance (ANCOVA), which, despite its name, is a regression method. In effect two parallel straight lines (linear regression) are obtained relating outcome score to baseline score in each group. They can be summarised as a single regression equation: follow up score = constant + a◊baseline score + b◊group where a and b are estimated coefficients and group is a binary variable coded 1 for treatment and 0 for control. The coefficient b is the effect of interest—the estimated difference between the two treatment groups. In effect an analysis of covariance adjusts each patient’s follow up score for his or her baseline score, but has the advantage of being unaffected by baseline differences. If, by chance, baseline scores are worse in the treatment group, the treatment effect will be underestimated by a follow up score analysis and overestimated by looking at change scores (because of regression to the mean). By contrast, analysis of covariance gives the same answer whether or not there is baseline imbalance. As an illustration, Kleinhenz et al randomised 52 patients with shoulder pain to either true or sham acupuncture. Patients were assessed before and after treatment using a 100 point rating scale of pain and function, with lower scores indicating poorer outcome. There was an imbalance between groups at baseline, with better scores in the acupuncture group (see table). Analysis of post-treatment scores is therefore biased. The authors analysed change scores, but as baseline and change scores are negatively correlated (about r = − 0.25 within groups) this analysis underestimates the effect of acupuncture. From analysis of covariance we get: follow up score = 24 + 0.71◊baseline score + 12.7◊group (see figure). The coefficient for group (b) has a useful interpretation: it is the difference between the mean change scores of each group. 
In the above example it can be interpreted as “pain and function score improved by an estimated 12.7 points more on average in the treatment group than in the control group.” A 95% confidence interval and P value can also be calculated for b (see table). The regression equation provides a means of prediction: a patient with a baseline score of 50, for example, would be predicted to have a follow up score of 72.2 on treatment and 59.5 on control. An additional advantage of analysis of covariance is that it generally has greater statistical power to detect a treatment effect than the other methods. For example, a trial with a correlation between baseline and follow",
"title": ""
},
{
"docid": "5d37d539295ca48aed86853406aa9d71",
"text": "-Finger print recognition is more popular attending system mostly used in many offices as it provides more accuracy. Machinery also system software based finger print recognition systems are mostly used. But its real time monitoring and remote intimation is not performed until now if wrong person is entering. Instant reporting to officer is necessary for maintaining absence/presence of staff members. This automatic reporting is necessary as officer may be remotely available. So, fingerprint identification based attendance system is proposed with real time remote monitoring. Proposed system requires Finger print sensor, data acquisition system for it, Processor (ARM 11), Ethernet/Wi-Fi Interface for Internet access and Smart phone for monitoring. WhatsApp is generally used by most of peoples and is easily accessible to all so generally preferred in this work. ARM 11 is necessary as it requires the Internet connection for What’ s App data transfer.",
"title": ""
},
{
"docid": "e4892dfe4da663c4044a78a8892010a8",
"text": "Turkey has been undertaking many projects to integrate Information and Communication Technology (ICT) sources into practice in the teaching-learning process in educational institutions. This research study sheds light on the use of ICT tools in primary schools in the social studies subject area, by considering various variables which affect the success of the implementation of the use of these tools. A survey was completed by 326 teachers who teach fourth and fifth grade at primary level. The results showed that although teachers are willing to use ICT resources and are aware of the existing potential, they are facing problems in relation to accessibility to ICT resources and lack of in-service training opportunities.",
"title": ""
},
{
"docid": "f1b96f805cbca7eaefdc1b5b0fa316c3",
"text": "This paper presents a comprehensive overview of the literature on the types, effects, conditions and user of Open 6 Government Data (OGD). The review analyses 101 academic studies about OGD which discuss at least one of the four factors 7 of OGD utilization: the different types of utilization, the effects of utilization, the key conditions, and the different users. Our 8 analysis shows that the majority of studies focus on the OGD provisions while assuming, but not empirically testing, various 9 forms of utilization. The paper synthesizes the hypothesized relations in a multi-dimensional framework of OGD utilization. 10 Based on the framework we suggest four future directions for research: 1) investigate the link between type of utilization and 11 type of users (e.g. journalists, citizens) 2) investigate the link between type of user and type of effect (e.g. societal, economic and 12 good governance benefits) 3) investigate the conditions that moderate OGD effects (e.g. policy, data quality) and 4) establishing 13 a causal link between utilization and OGD outcomes. 14",
"title": ""
},
{
"docid": "365b95202095942c4b2b43a5e6f6e04e",
"text": "Abstract. In this paper we use the contraction mapping theorem to obtain asymptotic stability results of the zero solution of a nonlinear neutral Volterra integro-differential equation with variable delays. Some conditions which allow the coefficient functions to change sign and do not ask the boundedness of delays are given. An asymptotic stability theorem with a necessary and sufficient condition is proved, which improve and extend the results in the literature. Two examples are also given to illustrate this work.",
"title": ""
},
{
"docid": "1969bf5a07349cc5a9b498e0437e41fe",
"text": "In this work, we tackle the problem of instance segmentation, the task of simultaneously solving object detection and semantic segmentation. Towards this goal, we present a model, called MaskLab, which produces three outputs: box detection, semantic segmentation, and direction prediction. Building on top of the Faster-RCNN object detector, the predicted boxes provide accurate localization of object instances. Within each region of interest, MaskLab performs foreground/background segmentation by combining semantic and direction prediction. Semantic segmentation assists the model in distinguishing between objects of different semantic classes including background, while the direction prediction, estimating each pixel's direction towards its corresponding center, allows separating instances of the same semantic class. Moreover, we explore the effect of incorporating recent successful methods from both segmentation and detection (e.g., atrous convolution and hypercolumn). Our proposed model is evaluated on the COCO instance segmentation benchmark and shows comparable performance with other state-of-art models.",
"title": ""
},
{
"docid": "798f8c412ac3fbe1ab1b867bc8ce68d0",
"text": "We introduce a new mobile system framework, SenSec, which uses passive sensory data to ensure the security of applications and data on mobile devices. SenSec constantly collects sensory data from accelerometers, gyroscopes and magnetometers and constructs the gesture model of how a user uses the device. SenSec calculates the sureness that the mobile device is being used by its owner. Based on the sureness score, mobile devices can dynamically request the user to provide active authentication (such as a strong password), or disable certain features of the mobile devices to protect user's privacy and information security. In this paper, we model such gesture patterns through a continuous n-gram language model using a set of features constructed from these sensors. We built mobile application prototype based on this model and use it to perform both user classification and user authentication experiments. User studies show that SenSec can achieve 75% accuracy in identifying the users and 71.3% accuracy in detecting the non-owners with only 13.1% false alarms.",
"title": ""
},
{
"docid": "0da78253d26ddba2b17dd76c4b4c697a",
"text": "In this work, a portable real-time wireless health monitoring system is developed. The system is used for remote monitoring of patients' heart rate and oxygen saturation in blood. The system was designed and implemented using ZigBee wireless technologies. All pulse oximetry data are transferred within a group of wireless personal area network (WPAN) to database computer server. The sensor modules were designed for low power operation with a program that can adjust power management depending on scenarios of power source and current power operation. The sensor unit consists of (1) two types of LEDs and photodiode packed in Velcro strip that is facing to a patient's fingertip; (2) Microcontroller unit for interfacing with ZigBee module, processing pulse oximetry data and storing some data before sending to base PC; (3) ZigBee module for communicating the data of pulse oximetry, ZigBee module gets all commands from microcontroller unit and it has a complete ZigBee stack inside and (4) Base node for receiving and storing the data before sending to PC.",
"title": ""
},
{
"docid": "ed63ebf895f1f37ba9b788c36b8e6cfc",
"text": "Melanocyte stem cells (McSCs) and mouse models of hair graying serve as useful systems to uncover mechanisms involved in stem cell self-renewal and the maintenance of regenerating tissues. Interested in assessing genetic variants that influence McSC maintenance, we found previously that heterozygosity for the melanogenesis associated transcription factor, Mitf, exacerbates McSC differentiation and hair graying in mice that are predisposed for this phenotype. Based on transcriptome and molecular analyses of Mitfmi-vga9/+ mice, we report a novel role for MITF in the regulation of systemic innate immune gene expression. We also demonstrate that the viral mimic poly(I:C) is sufficient to expose genetic susceptibility to hair graying. These observations point to a critical suppressor of innate immunity, the consequences of innate immune dysregulation on pigmentation, both of which may have implications in the autoimmune, depigmenting disease, vitiligo.",
"title": ""
},
{
"docid": "533b8bf523a1fb69d67939607814dc9c",
"text": "Docker is an open platform for developers and system administrators to build, ship, and run distributed applications using Docker Engine, a portable, lightweight runtime and packaging tool, and Docker Hub, a cloud service for sharing applications and automating workflows. The main advantage is that, Docker can get code tested and deployed into production as fast as possible. Different applications can be run over Docker containers with language independency. In this paper the performance of these Docker containers are evaluated based on their system performance. That is based on system resource utilization. Different benchmarking tools are used for this. Performance based on file system is evaluated using Bonnie++. Other system resources such as CPU utilization, memory utilization etc. are evaluated based on the benchmarking code (using psutil) developed using python. Detail results obtained from all these tests are also included in this paper. The results include CPU utilization, memory utilization, CPU count, CPU times, Disk partition, network I/O counter etc.",
"title": ""
},
{
"docid": "66e00cb4593c1bc97a10e0b80dcd6a8f",
"text": "OBJECTIVE\nTo determine the probable factors responsible for stress among undergraduate medical students.\n\n\nMETHODS\nThe qualitative descriptive study was conducted at a public-sector medical college in Islamabad, Pakistan, from January to April 2014. Self-administered open-ended questionnaires were used to collect data from first year medical students in order to study the factors associated with the new environment.\n\n\nRESULTS\nThere were 115 students in the study with a mean age of 19±6.76 years. Overall, 35(30.4%) students had mild to moderate physical problems, 20(17.4%) had severe physical problems and 60(52.2%) did not have any physical problem. Average stress score was 19.6±6.76. Major elements responsible for stress identified were environmental factors, new college environment, student abuse, tough study routines and personal factors.\n\n\nCONCLUSIONS\nMajority of undergraduate students experienced stress due to both academic and emotional factors.",
"title": ""
},
{
"docid": "f6553bf60969c422a07e1260a35b10c9",
"text": "Twitter is a new web application playing dual roles of online social networking and microblogging. Users communicate with each other by publishing text-based posts. The popularity and open structure of Twitter have attracted a large number of automated programs, known as bots, which appear to be a double-edged sword to Twitter. Legitimate bots generate a large amount of benign tweets delivering news and updating feeds, while malicious bots spread spam or malicious contents. More interestingly, in the middle between human and bot, there has emerged cyborg referred to either bot-assisted human or human-assisted bot. To assist human users in identifying who they are interacting with, this paper focuses on the classification of human, bot, and cyborg accounts on Twitter. We first conduct a set of large-scale measurements with a collection of over 500,000 accounts. We observe the difference among human, bot, and cyborg in terms of tweeting behavior, tweet content, and account properties. Based on the measurement results, we propose a classification system that includes the following four parts: 1) an entropy-based component, 2) a spam detection component, 3) an account properties component, and 4) a decision maker. It uses the combination of features extracted from an unknown user to determine the likelihood of being a human, bot, or cyborg. Our experimental evaluation demonstrates the efficacy of the proposed classification system.",
"title": ""
},
{
"docid": "dcf8cacaa3f64d30cd46de1da2e880b7",
"text": "Here we discussed different dielectric substrate frequently used in microstrip patch antenna to enhance overall efficiency of antenna. Various substrates like foam, duroid, benzocyclobutane, roger 4350, epoxy, FR4, Duroid 6010 are in use to achieve better gain and bandwidth. A dielectric substrate is a insulator which is a main constituent of the microstrip structure, where a thicker substrate is considered because it has direct proportionality with bandwidth whereas dielectric constant is inversely proportional to bandwidth as lower the relative permittivity better the fringing is achieved. Another factor that impact directly is loss tangent it shows inverse relation with efficiency the dilemma is here is that substrate with lower loss tangent is costlier. A clear pros and cons are discussed here of different substrates for judicious selection. A substrate gives mechanical strength to the antenna.",
"title": ""
},
{
"docid": "a2514f994292481d0fe6b37afe619cb5",
"text": "The purpose of this tutorial is to present an overview of various information hiding techniques. A brief history of steganography is provided along with techniques that were used to hide information. Text, image and audio based information hiding techniques are discussed. This paper also provides a basic introduction to digital watermarking. 1. History of Information Hiding The idea of communicating secretly is as old as communication itself. In this section, we briefly discuss the historical development of information hiding techniques such as steganography/ watermarking. Early steganography was messy. Before phones, before mail, before horses, messages were sent on foot. If you wanted to hide a message, you had two choices: have the messenger memorize it, or hide it on the messenger. While information hiding techniques have received a tremendous attention recently, its application goes back to Greek times. According to Greek historian Herodotus, the famous Greek tyrant Histiaeus, while in prison, used unusual method to send message to his son-in-law. He shaved the head of a slave to tattoo a message on his scalp. Histiaeus then waited until the hair grew back on slave’s head prior to sending him off to his son-inlaw. The second story also came from Herodotus, which claims that a soldier named Demeratus needed to send a message to Sparta that Xerxes intended to invade Greece. Back then, the writing medium was written on wax-covered tablet. Demeratus removed the wax from the tablet, wrote the secret message on the underlying wood, recovered the tablet with wax to make it appear as a blank tablet and finally sent the document without being detected. Invisible inks have always been a popular method of steganography. Ancient Romans used to write between lines using invisible inks based on readily available substances such as fruit juices, urine and milk. When heated, the invisible inks would darken, and become legible. 
Ovid in his “Art of Love” suggests using milk to write invisibly. Later chemically affected sympathetic inks were developed. Invisible inks were used as recently as World War II. Modern invisible inks fluoresce under ultraviolet light and are used as anti-counterfeit devices. For example, \"VOID\" is printed on checks and other official documents in an ink that appears under the strong ultraviolet light used for photocopies. The monk Johannes Trithemius, considered one of the founders of modern cryptography, had ingenuity in spades. His three volume work Steganographia, written around 1500, describes an extensive system for concealing secret messages within innocuous texts. On its surface, the book seems to be a magical text, and the initial reaction in the 16th century was so strong that Steganographia was only circulated privately until publication in 1606. But less than five years ago, Jim Reeds of AT&T Labs deciphered mysterious codes in the third volume, showing that Trithemius' work is more a treatise on cryptology than demonology. Reeds' fascinating account of the code breaking process is quite readable. One of Trithemius' schemes was to conceal messages in long invocations of the names of angels, with the secret message appearing as a pattern of letters within the words. For example, as every other letter in every other word: padiel aporsy mesarpon omeuas peludyn malpreaxo which reveals \"prymus apex.\" Another clever invention in Steganographia was the \"Ave Maria\" cipher. The book contains a series of tables, each of which has a list of words, one per letter. To code a message, the message letters are replaced by the corresponding words. If the tables are used in order, one table per letter, then the coded message will appear to be an innocent prayer. The earliest actual book on steganography was a four hundred page work written by Gaspari Schott in 1665 and called Steganographica. Although most of the ideas came from Trithemius, it was a start. 
Further development in the field occurred in 1883, with the publication of Auguste Kerckhoffs' Cryptographie militaire. Although this work was mostly about cryptography, it describes some principles that are worth keeping in mind when designing a new steganographic system.",
"title": ""
},
{
"docid": "436369a1187f436290ae9b61f3e9ee1e",
"text": "In this paper we propose a sub-band energy based end-ofutterance algorithm that is capable of detecting the time instant when the user has stopped speaking. The proposed algorithm finds the time instant at which many enough sub-band spectral energy trajectories fall and stay for a pre-defined fixed time below adaptive thresholds, i.e. a non-speech period is detected after the end of the utterance. With the proposed algorithm a practical speech recognition system can give timely feedback for the user, thereby making the behaviour of the speech recognition system more predictable and similar across different usage environments and noise conditions. The proposed algorithm is shown to be more accurate and noise robust than the previously proposed approaches. Experiments with both isolated command word recognition and continuous digit recognition in various noise conditions verify the viability of the proposed approach with an average proper endof-utterance detection rate of around 94% in both cases, representing 43% error rate reduction over the most competitive previously published method.",
"title": ""
},
{
"docid": "49f1d3ebaf3bb3e575ac3e40101494d9",
"text": "This paper discusses the current status of research on fraud detection undertaken a.s part of the European Commissionfunded ACTS ASPECT (Advanced Security for Personal Communications Technologies) project, by Royal Holloway University of London. Using a recurrent neural network technique, we uniformly distribute prototypes over Toll Tickets. sampled from the U.K. network operator, Vodafone. The prototypes, which continue to adapt to cater for seasonal or long term trends, are used to classify incoming Toll Tickets to form statistical behaviour proFdes covering both the short and long-term past. These behaviour profiles, maintained as probability distributions, comprise the input to a differential analysis utilising a measure known as the HeUinger distance[5] between them as an alarm criteria. Fine tuning the system to minimise the number of false alarms poses a significant ask due to the low fraudulent/non fraudulent activity ratio. We benefit from using unsupervised learning in that no fraudulent examples ate requited for training. This is very relevant considering the currently secure nature of GSM where fraud scenarios, other than Subscription Fraud, have yet to manifest themselves. It is the aim of ASPECT to be prepared for the would-be fraudster for both GSM and UMTS, Introduction When a mobile originated phone call is made or various inter-call criteria are met he cells or switches that a mobile phone is communicating with produce information pertaining to the call attempt. These data records, for billing purposes, are referred to as Toll Tickets. Toll Tickets contain a wealth of information about the call so that charges can be made to the subscriber. By considering well studied fraud indicators these records can also be used to detect fraudulent activity. By this we mean i terrogating a series of recent Toll Tickets and comparing a function of the various fields with fixed criteria, known as triggers. 
A trigger, if activated, raises an alert status which cumulatively would lead to investigation by the network operator. Some xample fraud indicators are that of a new subscriber making long back-to-back international calls being indicative of direct call selling or short back-to-back calls to a single land number indicating an attack on a PABX system. Sometimes geographical information deduced from the cell sites visited in a call can indicate cloning. This can be detected through setting a velocity trap. Fixed trigger criteria can be set to catch such extremes of activity, but these absolute usage criteria cannot trap all types of fraud. An alternative approach to the problem is to perform a differential analysis. Here we develop behaviour profiles relating to the mobile phone’s activity and compare its most recent activities with a longer history of its usage. Techniques can then be derived to determine when the mobile phone’s behaviour changes ignificantly. One of the most common indicators of fraud is a significant change in behaviour. The performance expectations of such a system must be of prime concern when developing any fraud detection strategy. To implement a real time fraud detection tool on the Vodafone network in the U.K, it was estimated that, on average, the system would need to be able to process around 38 Toll Tickets per second. This figure varied with peak and off-peak usage and also had seasonal trends. The distribution of the times that calls are made and the duration of each call is highly skewed. Considering all calls that are made in the U.K., including the use of supplementary services, we found the average call duration to be less than eight seconds, hardly time to order a pizza. In this paper we present one of the methods developed under ASPECT that tackles the problem of skewed distributions and seasonal trends using a recurrent neural network technique that is based around unsupervised learning. 
We envisage this technique would form part of a larger fraud detection suite that also comprises a rule based fraud detection tool and a neural network fraud detection tool that uses supervised learning on a multi-layer perceptron. Each of the systems has its strengths and weaknesses but we anticipate that the hybrid system will combine their strengths. From: AAAI Technical Report WS-97-07. Compilation copyright © 1997, AAAI (www.aaai.org). All rights reserved.",
"title": ""
}
] | scidocsrr |
b9ccbb7e14686ad54dda551935532135 | Energy Harvesting Using a Low-Cost Rectenna for Internet of Things (IoT) Applications | [
{
"docid": "3d9fbf84b4a9d6524a3f87d0b6869b99",
"text": "The idea of wireless power transfer (WPT) has been around since the inception of electricity. In the late 19th century, Nikola Tesla described the freedom to transfer energy between two points without the need for a physical connection to a power source as an \"all-surpassing importance to man\". A truly wireless device, capable of being remotely powered, not only allows the obvious freedom of movement but also enables devices to be more compact by removing the necessity of a large battery. Applications could leverage this reduction in size and weight to increase the feasibility of concepts such as paper-thin, flexible displays, contact-lens-based augmented reality, and smart dust, among traditional point-to-point power transfer applications. While several methods of wireless power have been introduced since Tesla's work, including near-field magnetic resonance and inductive coupling, laser-based optical power transmission, and far-field RF/microwave energy transmission, only RF/microwave and laser-based systems are truly long-range methods. While optical power transmission certainly has merit, its mechanisms are outside of the scope of this article and will not be discussed.",
"title": ""
},
{
"docid": "c41efa28806b3ac3d2b23d9e52b85193",
"text": "The Internet of Things (IoT) shall be able to incorporate transparently and seamlessly a large number of different and heterogeneous end systems, while providing open access to selected subsets of data for the development of a plethora of digital services. Building a general architecture for the IoT is hence a very complex task, mainly because of the extremely large variety of devices, link layer technologies, and services that may be involved in such a system. In this paper, we focus specifically to an urban IoT system that, while still being quite a broad category, are characterized by their specific application domain. Urban IoTs, in fact, are designed to support the Smart City vision, which aims at exploiting the most advanced communication technologies to support added-value services for the administration of the city and for the citizens. This paper hence provides a comprehensive survey of the enabling technologies, protocols, and architecture for an urban IoT. Furthermore, the paper will present and discuss the technical solutions and best-practice guidelines adopted in the Padova Smart City project, a proof-of-concept deployment of an IoT island in the city of Padova, Italy, performed in collaboration with the city municipality.",
"title": ""
}
] | [
{
"docid": "d71faafdcf1b97951e979f13dbe91cb2",
"text": "We explore six challenges for neural machine translation: domain mismatch, amount of training data, rare words, long sentences, word alignment, and beam search. We show both deficiencies and improvements over the quality of phrasebased statistical machine translation.",
"title": ""
},
{
"docid": "7146615b79dd39e358dd148e57a01fdb",
"text": "Graphs are one of the key data structures for many real-world computing applications and the importance of graph analytics is ever-growing. While existing software graph processing frameworks improve programmability of graph analytics, underlying general purpose processors still limit the performance and energy efficiency of graph analytics. We architect a domain-specific accelerator, Graphicionado, for high-performance, energy-efficient processing of graph analytics workloads. For efficient graph analytics processing, Graphicionado exploits not only data structure-centric datapath specialization, but also memory subsystem specialization, all the while taking advantage of the parallelism inherent in this domain. Graphicionado augments the vertex programming paradigm, allowing different graph analytics applications to be mapped to the same accelerator framework, while maintaining flexibility through a small set of reconfigurable blocks. This paper describes Graphicionado pipeline design choices in detail and gives insights on how Graphicionado combats application execution inefficiencies on general-purpose CPUs. Our results show that Graphicionado achieves a 1.76-6.54x speedup while consuming 50-100x less energy compared to a state-of-the-art software graph analytics processing framework executing 32 threads on a 16-core Haswell Xeon processor.",
"title": ""
},
{
"docid": "863e71cf1c1eddf3c6ceac400670e6f7",
"text": "This paper provides a brief overview to four major types of causal models for health-sciences research: Graphical models (causal diagrams), potential-outcome (counterfactual) models, sufficient-component cause models, and structural-equations models. The paper focuses on the logical connections among the different types of models and on the different strengths of each approach. Graphical models can illustrate qualitative population assumptions and sources of bias not easily seen with other approaches; sufficient-component cause models can illustrate specific hypotheses about mechanisms of action; and potential-outcome and structural-equations models provide a basis for quantitative analysis of effects. The different approaches provide complementary perspectives, and can be employed together to improve causal interpretations of conventional statistical results.",
"title": ""
},
{
"docid": "afe4c8e46449bfa37a04e67595d4537b",
"text": "Gamification is the use of game design elements in non-game settings to engage participants and encourage desired behaviors. It has been identified as a promising technique to improve students' engagement which could have a positive impact on learning. This study evaluated the learning effectiveness and engagement appeal of a gamified learning activity targeted at the learning of C-programming language. Furthermore, the study inquired into which gamified learning activities were more appealing to students. The study was conducted using the mixed-method sequential explanatory protocol. The data collected and analysed included logs, questionnaires, and pre- and post-tests. The results of the evaluation show positive effects on the engagement of students toward the gamified learning activities and a moderate improvement in learning outcomes. Students reported different motivations for continuing and stopping activities once they completed the mandatory assignment. The preferences for different gamified activities were also conditioned by academic milestones.",
"title": ""
},
{
"docid": "6c4b9b5383269ed47d2077068652f0b7",
"text": "Security issues in computer networks have focused on attacks on end systems and the control plane. An entirely new class of emerging network attacks aims at the data plane of the network. Data plane forwarding in network routers has traditionally been implemented with custom-logic hardware, but recent router designs increasingly use software-programmable network processors for packet forwarding. These general-purpose processing devices exhibit software vulnerabilities and are susceptible to attacks. We demonstrate-to our knowledge the first-practical attack that exploits a vulnerability in packet processing software to launch a devastating denial-of-service attack from within the network infrastructure. This attack uses only a single attack packet to consume the full link bandwidth of the router's outgoing link. We also present a hardware-based defense mechanism that can detect situations where malicious packets try to change the operation of the network processor. Using a hardware monitor, our NetFPGA-based prototype system checks every instruction executed by the network processor and can detect deviations from correct processing within four clock cycles. A recovery system can restore the network processor to a safe state within six cycles. This high-speed detection and recovery system can ensure that network processors can be protected effectively and efficiently from this new class of attacks.",
"title": ""
},
{
"docid": "26a599c22c173f061b5d9579f90fd888",
"text": "markov logic an interface layer for artificial markov logic an interface layer for artificial shinichi tsukada in size 22 syyjdjbook.buncivy yumina ooba in size 24 ajfy7sbook.ztoroy okimi in size 15 edemembookkey.16mb markov logic an interface layer for artificial intelligent systems (ai-2) ubc computer science interface layer for artificial intelligence daniel lowd essential principles for autonomous robotics markov logic: an interface layer for artificial intelligence official encyclopaedia of sheffield united football club hot car hot car firext answers || 2007 acura tsx hitch manual course syllabus university of texas at dallas jump frog jump cafebr 1994 chevy silverado 1500 engine ekpbs readings in earth science alongs johnson owners manual pdf firext thomas rescues the diesels cafebr dead sea scrolls and the jewish origins of christianity install gimp help manual by iitsuka asao vox diccionario abreviado english spanis mdmtv nobutaka in size 26 bc13xqbookog.xxuz mechanisms in b cell neoplasia 1992 workshop at the spocks world diane duane nabbit treasury of saints fiores reasoning with probabilistic university of texas at austin gp1300r yamaha waverunner service manua by takisawa tomohide repair manual haier hpr10xc6 air conditioner birdz mexico icons mexico icons oobags asus z53 manual by hatsutori yoshino industrial level measurement by haruyuki morimoto",
"title": ""
},
{
"docid": "cb1fc7a4769141429dc7b41a8d8b7cb8",
"text": "Today, by integrating Near Field Communication (NFC) technology in smartphones, bank cards and payment terminals, a purchase transaction can be executed immediately without any physical contact, without entering a PIN code or a signature. Europay Mastercard Visa (EMV) is the standard dedicated to securing contactless-NFC payment transactions. However, it does not ensure two main security properties: (1) the authentication of the payment terminal to the client's payment device, (2) the confidentiality of personal banking data. In this paper, we first of all detail the EMV standard and its security vulnerabilities. Then, we propose a solution that enhances the EMV protocol by adding a new security layer aiming to solve EMV weaknesses. We formally check the correctness of the proposal using a security verification tool called Scyther.",
"title": ""
},
{
"docid": "ef4e7445ec9bbbfc8d25d92a16042f88",
"text": "CONCRETE",
"title": ""
},
{
"docid": "121a8470fcbf121e5f4c42594c6d24fe",
"text": "Research has consistently found that school students who do not identify as self-declared completely heterosexual are at increased risk of victimization by bullying from peers. This study examined heterosexual and nonheterosexual university students' involvement in both traditional and cyber forms of bullying, as either bullies or victims. Five hundred twenty-eight first-year university students (M=19.52 years old) were surveyed about their sexual orientation and their bullying experiences over the previous 12 months. The results showed that nonheterosexual young people reported higher levels of involvement in traditional bullying, both as victims and perpetrators, in comparison to heterosexual students. In contrast, cyberbullying trends were generally found to be similar for heterosexual and nonheterosexual young people. Gender differences were also found. The implications of these results are discussed in terms of intervention and prevention of the victimization of nonheterosexual university students.",
"title": ""
},
{
"docid": "4a6c2d388bb114751b2ce9c6df55beab",
"text": "To support people trying to lose weight and stay healthy, more and more fitness apps have sprung up including the ability to track both calories intake and expenditure. Users of such apps are part of a wider \"quantified self\" movement and many opt-in to publicly share their logged data. In this paper, we use public food diaries of more than 4,000 long-term active MyFitnessPal users to study the characteristics of a (un-)successful diet. Concretely, we train a machine learning model to predict repeatedly being over or under self-set daily calories goals and then look at which features contribute to the model's prediction. Our findings include both expected results, such as the token \"mcdonalds\" or the category \"dessert\" being indicative for being over the calories goal, but also less obvious ones such as the difference between pork and poultry concerning dieting success, or the use of the \"quick added calories\" functionality being indicative of over-shooting calorie-wise. This study also hints at the feasibility of using such data for more in-depth data mining, e.g., looking at the interaction between consumed foods such as mixing protein- and carbohydrate-rich foods. To the best of our knowledge, this is the first systematic study of public food diaries.",
"title": ""
},
{
"docid": "77d2255e0a2d77ea8b2682937b73cc7d",
"text": "Recommendation plays an increasingly important role in our daily lives. Recommender systems automatically suggest to a user items that might be of interest to her. Recent studies demonstrate that information from social networks can be exploited to improve accuracy of recommendations. In this paper, we present a survey of collaborative filtering (CF) based social recommender systems. We provide a brief overview over the task of recommender systems and traditional approaches that do not use social network information. We then present how social network information can be adopted by recommender systems as additional input for improved accuracy. We classify CF-based social recommender systems into two categories: matrix factorization based social recommendation approaches and neighborhood based social recommendation approaches. For each category, we survey and compare several represen-",
"title": ""
},
{
"docid": "b6df4868ee1496e581e8b76ca8fb165f",
"text": "Through AspectJ, aspect-oriented programming (AOP) is becoming of increasing interest and availability to Java programmers as it matures as a methodology for improved software modularity via the separation of cross-cutting concerns. AOP proponents often advocate a development strategy where Java programmers write the main application, ignoring cross-cutting concerns, and then AspectJ programmers, domain experts in their specific concerns, weave in the logic for these more specialized cross-cutting concerns. However, several authors have recently debated the merits of this strategy by empirically showing certain drawbacks. The proposed solutions paint a different development strategy where base code and aspect programmers are aware of each other (to varying degrees) and interactions between cross-cutting concerns are planned for early on.\n Herein we explore new possibilities in the language design space that open up when the base code is aware of cross-cutting aspects. Using our insights from this exploration we concretize these new possibilities by extending AspectJ with concise yet powerful constructs, while maintaining full backwards compatibility. These new constructs allow base code and aspects to cooperate in ways that were previously not possible: arbitrary blocks of code can be advised, advice can be explicitly parameterized, base code can guide aspects in where to apply advice, and aspects can statically enforce new constraints upon the base code that they advise. These new techniques allow aspect modularity and program safety to increase. We illustrate the value of our extensions through an example based on transactions.",
"title": ""
},
{
"docid": "8c232cd0cea7714dde71669024d3d811",
"text": "This paper addresses the problem of finding the K closest pairs between two spatial data sets, where each set is stored in a structure belonging in the R-tree family. Five different algorithms (four recursive and one iterative) are presented for solving this problem. The case of 1 closest pair is treated as a special case. An extensive study, based on experiments performed with synthetic as well as with real point data sets, is presented. A wide range of values for the basic parameters affecting the performance of the algorithms, especially the effect of overlap between the two data sets, is explored. Moreover, an algorithmic as well as an experimental comparison with existing incremental algorithms addressing the same problem is presented. In most settings, the new algorithms proposed clearly outperform the existing ones.",
"title": ""
},
{
"docid": "b31235bf87cc8ebd243fd8c52c63f8d4",
"text": "The dual-polarized corporate-feed waveguide slot array antenna is designed for the 60 GHz band. Using the multi-layer structure, we have realized dual-polarization operation. Even though the gain is approximately 1 dB lower than the antenna for the single polarization due to the -15dB cross-polarization level in 8=58°, this antenna still shows very high gain over 32 dBi over the broad bandwidth. This antenna will be fabricated and measured in future.",
"title": ""
},
{
"docid": "c05a32fdc2344cb4a6831f5cc033820f",
"text": "We have constructed a wave-front sensor to measure the irregular as well as the classical aberrations of the eye, providing a more complete description of the eye's aberrations than has previously been possible. We show that the wave-front sensor provides repeatable and accurate measurements of the eye's wave aberration. The modulation transfer function of the eye computed from the wave-front sensor is in fair, though not complete, agreement with that obtained under similar conditions on the same observers by use of the double-pass and the interferometric techniques. Irregular aberrations, i.e., those beyond defocus, astigmatism, coma, and spherical aberration, do not have a large effect on retinal image quality in normal eyes when the pupil is small (3 mm). However, they play a substantial role when the pupil is large (7.3-mm), reducing visual performance and the resolution of images of the living retina. Although the pattern of aberrations varies from subject to subject, aberrations, including irregular ones, are correlated in left and right eyes of the same subject, indicating that they are not random defects.",
"title": ""
},
{
"docid": "11c4f0610d701c08516899ebf14f14c4",
"text": "Histone post-translational modifications impact many aspects of chromatin and nuclear function. Histone H4 Lys 20 methylation (H4K20me) has been implicated in regulating diverse processes ranging from the DNA damage response, mitotic condensation, and DNA replication to gene regulation. PR-Set7/Set8/KMT5a is the sole enzyme that catalyzes monomethylation of H4K20 (H4K20me1). It is required for maintenance of all levels of H4K20me, and, importantly, loss of PR-Set7 is catastrophic for the earliest stages of mouse embryonic development. These findings have placed PR-Set7, H4K20me, and proteins that recognize this modification as central nodes of many important pathways. In this review, we discuss the mechanisms required for regulation of PR-Set7 and H4K20me1 levels and attempt to unravel the many functions attributed to these proteins.",
"title": ""
},
{
"docid": "e9c4877bca5f1bfe51f97818cc4714fa",
"text": "INTRODUCTION Gamification refers to the application of game dynamics, mechanics, and frameworks into non-game settings. Many educators have attempted, with varying degrees of success, to effectively utilize game dynamics to increase student motivation and achievement in the classroom. In an effort to better understand how gamification can effectively be utilized to this end, presented here is a review of existing literature on the subject as well as a case study on three different applications of gamification in the post-secondary setting. This analysis reveals that the underlying dynamics that make games engaging are largely already recognized and utilized in modern pedagogical practices, although under different designations. This provides some legitimacy to a practice that is sometimes dismissed as superficial, and also provides a way of formulating useful guidelines for those wishing to utilize the power of games to motivate student achievement. RELATED WORK The first step of this study was to review literature related to the use of gamification in education. This was undertaken in order to inform the subsequent case studies. Several works were reviewed with the intention of finding specific game dynamics that were met with a certain degree of success across a number of circumstances. To begin, Jill Laster [10] provides a brief summary of the early findings of Lee Sheldon, an assistant professor at Indiana University at Bloomington and the author of The Multiplayer Classroom: Designing Coursework as a Game [16]. Here, Sheldon reports that the gamification of his class on multiplayer game design at Indiana University at Bloomington in 2010 was a success, with the average grade jumping a full letter grade from the previous year [10]. Sheldon gamified his class by renaming the performance of presentations as 'completing quests', taking tests as 'fighting monsters', writing papers as 'crafting', and receiving letter grades as 'gaining experience points'. 
In particular, he notes that changing the language around grades celebrates getting things right rather than punishing getting things wrong [10]. Although this is plausible, this example is included here first because it points to the common conception of what gamifying a classroom means: implementing game components by simply trading out the parlance of pedagogy for that of gaming culture. Although its intentions are good, it is this reduction of game design to its surface characteristics that Elizabeth Lawley warns is detrimental to the successful gamification of a classroom [5]. Lawley, a professor of interactive games and media at the Rochester Institute of Technology (RIT), notes that when implemented properly, \"gamification can help enrich educational experiences in a way that students will recognize and respond to\" [5]. However, she warns that reducing the complexity of well designed games to their surface elements (i.e. badges and experience points) falls short of engaging students. She continues further, suggesting that beyond failing to engage, limiting the implementation of game dynamics to just the surface characteristics can actually damage existing interest and engagement [5]. Lawley is not suggesting that game elements should be avoided, but rather she is stressing the importance of allowing them to surface as part of a deeper implementation that includes the underlying foundations of good game design. Upon reviewing the available literature, certain underlying dynamics and concepts found in game design are shown to be more consistently successful than others when applied to learning environments, these are: o Freedom to Fail o Rapid Feedback o Progression o Storytelling Freedom to Fail Game design often encourages players to experiment without fear of causing irreversible damage by giving them multiple lives, or allowing them to start again at the most recent 'checkpoint'. 
Incorporating this 'freedom to fail' into classroom design is noted to be an effective dynamic in increasing student engagement [7,9,11,15]. If students are encouraged to take risks and experiment, the focus is taken away from final results and re-centered on the process of learning instead. The effectiveness of this change in focus is recognized in modern pedagogy as shown in the increased use of formative assessment. Like the game dynamic of having the 'freedom to fail', formative assessment focuses on the process of learning rather than the end result by using assessment to inform subsequent lessons and separating assessment from grades whenever possible [17]. This can mean that the student is using ongoing self assessment, or that the teacher is using",
"title": ""
},
{
"docid": "4f287c788c7e95bf350a998650ff6221",
"text": "Wireless sensor networks have become an emerging technology due to their wide range of applications in object tracking and monitoring, military commands, smart homes, forest fire control, surveillance, etc. A wireless sensor network consists of thousands of miniature devices called sensors, but as it uses wireless media for communication, security is the major issue. There are a number of attacks on wireless sensor networks, of which the selective forwarding attack is one of the most harmful. This paper describes the selective forwarding attack and the detection techniques against selective forwarding attacks which have been proposed by different researchers. In selective forwarding attacks, malicious nodes act like normal nodes and selectively drop packets. The selective forwarding attack is a serious threat in WSN. Identifying such attacks is very difficult and sometimes impossible. This paper also presents a qualitative analysis of detection techniques in tabular form. Keywords: wireless sensor network, attacks, selective forwarding attacks, malicious nodes.",
"title": ""
},
{
"docid": "1195635049c88da8b37a66ca1e85090b",
"text": "Temporal-difference (TD) learning can be used not just to predict rewards, as is commonly done in reinforcement learning, but also to predict states, i.e., to learn a model of the world's dynamics. We present theory and algorithms for intermixing TD models of the world at different levels of temporal abstraction within a single structure. Such multi-scale TD models can be used in model-based reinforcement-learning architectures and dynamic programming methods in place of conventional Markov models. This enables planning at higher and varied levels of abstraction, and, as such, may prove useful in formulating methods for hierarchical or multi-level planning and reinforcement learning. In this paper we treat only the prediction problem: that of learning a model and value function for the case of fixed agent behavior. Within this context, we establish the theoretical foundations of multi-scale models and derive TD algorithms for learning them. Two small computational experiments are presented to test and illustrate the theory. This work is an extension and generalization of the work of Singh (1992), Dayan (1993), and Sutton & Pinette (1985). 1 Multi-Scale Planning and Modeling Model-based reinforcement learning offers a potentially elegant solution to the problem of integrating planning into a real-time learning and decision-making agent (Sutton, 1990; Barto et al., 1995; Peng & Williams, 1993, Moore & Atkeson, 1994; Dean et al., in prep). However, most current reinforcement-learning systems assume a single, fixed time step: actions take one step to complete, and their immediate consequences become available after one step. This makes it difficult to learn and plan at different time scales. For example, commuting to work involves planning at a high level about which route to drive (or whether to take the train) and at a low level about how to steer, when to brake, etc. 
Planning is necessary at both levels in order to optimize precise low-level movements without becoming lost in a sea of detail when making decisions at a high level. Moreover, these levels cannot be kept totally distinct and separate. They must interrelate at least in the sense that the actions and plans at a high levels must be turned into actual, moment-by-moment decisions at the lowest level. The need for hierarchical and abstract planning is a fundamental problem in AI whether or not one uses the reinforcement-learning framework (e.g., Fikes et al., 1972; Sacerdoti, 1977; Kuipers, 1979; Laird et al., 1986; Korf, 1985; Minton, 1988; Watkins, 1989; Drescher, 1991; Ring, 1991; Wixson, 1991; Schmidhuber, 1991; Tenenberg et al., 1992; Kaelbling, 1993; Lin, 1993; Dayan & Hinton, 1993; Dejong, 1994; Chrisman, 1994; Hansen, 1994; Dean & Lin, in prep). We do not propose to fully solve it in this paper. Rather, we develop an approach to multiple-time-scale modeling of the world that may eventually be useful in such a solution. Our approach is to extend temporal-difference (TD) methods, which are commonly used in reinforcement learning systems to learn value functions, such that they can be used to learn world models. When TD methods are used, the predictions of the models can naturally extend beyond a single time step. As we will show, they can even make predictions that are not specific to a single time scale, but intermix many such scales, with no loss of performance when the models are used. This approach is an extension of the ideas of Singh (1992), Dayan (1993), and Sutton & Pinette",
"title": ""
},
{
"docid": "be3204a5a4430cc3150bf0368a972e38",
"text": "Deep learning has exploded in the public consciousness, primarily as predictive and analytical products suffuse our world, in the form of numerous human-centered smart-world systems, including targeted advertisements, natural language assistants and interpreters, and prototype self-driving vehicle systems. Yet to most, the underlying mechanisms that enable such human-centered smart products remain obscure. In contrast, researchers across disciplines have been incorporating deep learning into their research to solve problems that could not have been approached before. In this paper, we seek to provide a thorough investigation of deep learning in its applications and mechanisms. Specifically, as a categorical collection of state of the art in deep learning research, we hope to provide a broad reference for those seeking a primer on deep learning and its various implementations, platforms, algorithms, and uses in a variety of smart-world systems. Furthermore, we hope to outline recent key advancements in the technology, and provide insight into areas, in which deep learning can improve investigation, as well as highlight new areas of research that have yet to see the application of deep learning, but could nonetheless benefit immensely. We hope this survey provides a valuable reference for new deep learning practitioners, as well as those seeking to innovate in the application of deep learning.",
"title": ""
}
] | scidocsrr |
0f1ae26827d07ebe752c0a88308a6659 | A Measure for Objective Evaluation of Image Segmentation Algorithms | [
{
"docid": "db8325925cb9fd1ebdcf7480735f5448",
"text": "A general nonparametric technique is proposed for the analysis of a complex multimodal feature space and to delineate arbitrarily shaped clusters in it. The basic computational module of the technique is an old pattern recognition procedure, the mean shift. We prove for discrete data the convergence of a recursive mean shift procedure to the nearest stationary point of the underlying density function and thus its utility in detecting the modes of the density. The equivalence of the mean shift procedure to the Nadaraya–Watson estimator from kernel regression and the robust M-estimators of location is also established. Algorithms for two low-level vision tasks, discontinuity preserving smoothing and image segmentation are described as applications. In these algorithms the only user set parameter is the resolution of the analysis, and either gray level or color images are accepted as input. Extensive experimental results illustrate their excellent performance.",
"title": ""
}
] | [
{
"docid": "ccc3c2ee7a08eb239443d5773707d782",
"text": "We introduce an iterative normalization and clustering method for single-cell gene expression data. The emerging technology of single-cell RNA-seq gives access to gene expression measurements for thousands of cells, allowing discovery and characterization of cell types. However, the data is confounded by technical variation emanating from experimental errors and cell type-specific biases. Current approaches perform a global normalization prior to analyzing biological signals, which does not resolve missing data or variation dependent on latent cell types. Our model is formulated as a hierarchical Bayesian mixture model with cell-specific scalings that aid the iterative normalization and clustering of cells, teasing apart technical variation from biological signals. We demonstrate that this approach is superior to global normalization followed by clustering. We show identifiability and weak convergence guarantees of our method and present a scalable Gibbs inference algorithm. This method improves cluster inference in both synthetic and real single-cell data compared with previous methods, and allows easy interpretation and recovery of the underlying structure and cell types.",
"title": ""
},
{
"docid": "e5874c373f9bc4565249f335560023ff",
"text": "We propose a multi-wing harmonium model for mining multimedia data that extends and improves on earlier models based on two-layer random fields, which capture bidirectional dependencies between hidden topic aspects and observed inputs. This model can be viewed as an undirected counterpart of the two-layer directed models such as LDA for similar tasks, but bears significant difference in inference/learning cost tradeoffs, latent topic representations, and topic mixing mechanisms. In particular, our model facilitates efficient inference and robust topic mixing, and potentially provides high flexibilities in modeling the latent topic spaces. A contrastive divergence and a variational algorithm are derived for learning. We specialized our model to a dual-wing harmonium for captioned images, incorporating a multivariate Poisson for word-counts and a multivariate Gaussian for color histogram. We present empirical results on the applications of this model to classification, retrieval and image annotation on news video collections, and we report an extensive comparison with various extant models.",
"title": ""
},
{
"docid": "dd545adf1fba52e794af4ee8de34fc60",
"text": "We propose solving continuous parametric simulation optimizations using a deterministic nonlinear optimization algorithm and sample-path simulations. The optimization problem is written in a modeling language with a simulation module accessed with an external function call. Since we allow no changes to the simulation code at all, we propose using a quadratic approximation of the simulation function to obtain derivatives. Results on three different queueing models are presented that show our method to be effective on a variety of practical problems.",
"title": ""
},
{
"docid": "ad9536f85fd5996bd6457b8ed40e11d7",
"text": "Unlike standard cameras that send intensity images at a constant frame rate, event-driven cameras asynchronously report pixel-level brightness changes, offering low latency and high temporal resolution (both in the order of micro-seconds). As such, they have great potential for fast and low power vision algorithms for robots. Visual tracking, for example, is easily achieved even for very fast stimuli, as only moving objects cause brightness changes. However, cameras mounted on a moving robot are typically non-stationary and the same tracking problem becomes confounded by background clutter events due to the robot ego-motion. In this paper, we propose a method for segmenting the motion of an independently moving object for event-driven cameras. Our method detects and tracks corners in the event stream and learns the statistics of their motion as a function of the robot's joint velocities when no independently moving objects are present. During robot operation, independently moving objects are identified by discrepancies between the predicted corner velocities from ego-motion and the measured corner velocities. We validate the algorithm on data collected from the neuromorphic iCub robot. We achieve a precision of ∼ 90% and show that the method is robust to changes in speed of both the head and the target.",
"title": ""
},
{
"docid": "d319a17ad2fa46e0278e0b0f51832f4b",
"text": "Automatic Essay Assessor (AEA) is a system that utilizes information retrieval techniques such as Latent Semantic Analysis (LSA), Probabilistic Latent Semantic Analysis (PLSA), and Latent Dirichlet Allocation (LDA) for automatic essay grading. The system uses learning materials and relatively few teacher-graded essays for calibrating the scoring mechanism before grading. We performed a series of experiments using LSA, PLSA and LDA for document comparisons in AEA. In addition to comparing the methods on a theoretical level, we compared the applicability of LSA, PLSA, and LDA to essay grading with empirical data. The results show that the use of learning materials as training data for the grading model outperforms the k-NN-based grading methods. In addition to this, we found that using LSA yielded slightly more accurate grading than PLSA and LDA. We also found that the division of the learning materials in the training data is crucial. It is better to divide learning materials into sentences than paragraphs.",
"title": ""
},
{
"docid": "06c839f10b3d561c3a327bb67aa8ec10",
"text": "A great deal of research exists on the neural basis of theory-of-mind (ToM) or mentalizing. Qualitative reviews on this topic have identified a mentalizing network composed of the medial prefrontal cortex, posterior cingulate/precuneus, and bilateral temporal parietal junction. These conclusions, however, are not based on a quantitative and systematic approach. The current review presents a quantitative meta-analysis of neuroimaging studies pertaining to ToM, using the activation-likelihood estimation (ALE) approach. Separate ALE meta-analyses are presented for story-based and nonstory-based studies of ToM. The conjunction of these two meta-analyses reveals a core mentalizing network that includes areas not typically noted by previous reviews. A third ALE meta-analysis was conducted with respect to story comprehension in order to examine the relation between ToM and stories. Story processing overlapped with many regions of the core mentalizing network, and these shared regions bear some resemblance to a network implicated by a number of other processes.",
"title": ""
},
{
"docid": "cbf5c00229e9ac591183f4877006cf2b",
"text": "OBJECTIVE\nTo statistically analyze the long-term results of alar base reduction after rhinoplasty.\n\n\nMETHODS\nAmong a consecutive series of 100 rhinoplasty cases, 19 patients required alar base reduction. The mean (SD) follow-up time was 11 (9) months (range, 2 months to 3 years). Using preoperative and postoperative photographs, comparisons were made of the change in the base width (width of base between left and right alar-facial junctions), flare width (width on base view between points of widest alar flare), base height (distance from base to nasal tip on base view), nostril height (distance from base to anterior edge of nostril), and vertical flare (vertical distance from base to the widest alar flare). Notching at the nasal sill was recorded as none, minimal, mild, moderate, and severe.\n\n\nRESULTS\nChanges in vertical flare (P<.05) and nostril height (P<.05) were the only significant differences seen in the patients who required alar reduction. No significant change was seen in base width (P=.92), flare width (P=.41), or base height (P=.22). No notching was noted.\n\n\nCONCLUSIONS\nIt would have been preferable to study patients undergoing alar reduction without concomitant rhinoplasty procedures, but this approach is not practical. To our knowledge, the present study represents the most extensive attempt in the literature to characterize and quantify the postoperative effects of alar base reduction.",
"title": ""
},
{
"docid": "2e3c1fc6daa33ee3a4dc3fe1e11a3c21",
"text": "Cloud computing technologies have matured enough that the service providers are compelled to migrate their services to virtualized infrastructure in cloud data centers. However, moving the computation and network to shared physical infrastructure poses a multitude of questions, both for service providers and for data center owners. In this work, we propose HyViDE - a framework for optimal placement of multiple virtual data center networks on a physical data center network. HyViDE preselects a subset of virtual data center network requests and uses a hybrid strategy for embedding them on the physical data center. Coordinated static and dynamic embedding algorithms are used in this hybrid framework to minimize the rejection of requests and fulfill QoS demands of the embedded networks. HyViDE can employ suitable static and dynamic strategies to meet the objectives of data center owners and customers. Experimental evaluation of our algorithms on HyViDE shows that, the acceptance rate is high with faster servicing of requests.",
"title": ""
},
{
"docid": "0cd5813a069c8955871784cd3e63aa83",
"text": "Fundamental observations and principles derived from traditional physiological studies of multisensory integration have been difficult to reconcile with computational and psychophysical studies that share the foundation of probabilistic (Bayesian) inference. We review recent work on multisensory integration, focusing on experiments that bridge single-cell electrophysiology, psychophysics, and computational principles. These studies show that multisensory (visual-vestibular) neurons can account for near-optimal cue integration during the perception of self-motion. Unlike the nonlinear (superadditive) interactions emphasized in some previous studies, visual-vestibular neurons accomplish near-optimal cue integration through subadditive linear summation of their inputs, consistent with recent computational theories. Important issues remain to be resolved, including the observation that variations in cue reliability appear to change the weights that neurons apply to their different sensory inputs.",
"title": ""
},
{
"docid": "08fee0a21076c8a1d65eb7fc0f88610f",
"text": "We propose Smells Phishy?, a board game that contributes to raising users' awareness of online phishing scams. We designed and developed the board game and conducted user testing with 21 participants. The results showed that after playing the game, participants had better understanding of phishing scams and learnt how to better protect themselves. Participants enjoyed playing the game and said that it was a fun and exciting experience. The game increased knowledge and awareness, and encouraged discussion.",
"title": ""
},
{
"docid": "8b1276b7d74230748bdb60930dbc45a5",
"text": "The debate continues around transconjunctival versus transcutaneous approaches. Despite the perceived safety of the former, many experienced surgeons continue to advocate the latter. This review aims to present a balanced view of each approach. It will first address the anatomic basis of lower lid aging and then organize recent literature and associated discussion into the transconjunctival and transcutaneous approaches. The integrated algorithm employed by the senior author will be presented. Finally this review will describe less mainstream suture techniques for lower lid rejuvenation and lower lid blepharoplasty complications with a focus upon lower lid malposition.",
"title": ""
},
{
"docid": "49ef68eabca989e07f420a3a88386c77",
"text": "Identifying the language used will typically be the first step in most natural language processing tasks. Among the wide variety of language identification methods discussed in the literature, the ones employing the Cavnar and Trenkle (1994) approach to text categorization based on character n-gram frequencies have been particularly successful. This paper presents the R extension package textcat for n-gram based text categorization which implements both the Cavnar and Trenkle approach as well as a reduced n-gram approach designed to remove redundancies of the original approach. A multi-lingual corpus obtained from the Wikipedia pages available on a selection of topics is used to illustrate the functionality of the package and the performance of the provided language identification methods.",
"title": ""
},
{
"docid": "e75df6ff31c9840712cf1a4d7f6582cd",
"text": "Endotoxin, a constituent of Gram-negative bacteria, stimulates macrophages to release large quantities of tumor necrosis factor (TNF) and interleukin-1 (IL-1), which can precipitate tissue injury and lethal shock (endotoxemia). Antagonists of TNF and IL-1 have shown limited efficacy in clinical trials, possibly because these cytokines are early mediators in pathogenesis. Here a potential late mediator of lethality is identified and characterized in a mouse model. High mobility group-1 (HMG-1) protein was found to be released by cultured macrophages more than 8 hours after stimulation with endotoxin, TNF, or IL-1. Mice showed increased serum levels of HMG-1 from 8 to 32 hours after endotoxin exposure. Delayed administration of antibodies to HMG-1 attenuated endotoxin lethality in mice, and administration of HMG-1 itself was lethal. Septic patients who succumbed to infection had increased serum HMG-1 levels, suggesting that this protein warrants investigation as a therapeutic target.",
"title": ""
},
{
"docid": "3f68334f7f315921390d385ad45d8aaf",
"text": "UNLABELLED\nAcarbose is an α-glucosidase inhibitor produced by Actinoplanes sp. SE50/110 that is medically important due to its application in the treatment of type2 diabetes. In this work, a comprehensive proteome analysis of Actinoplanes sp. SE50/110 was carried out to determine the location of proteins of the acarbose (acb) and the putative pyochelin (pch) biosynthesis gene cluster. Therefore, a comprehensive state-of-the-art proteomics approach combining subcellular fractionation, shotgun proteomics and spectral counting to assess the relative abundance of proteins within fractions was applied. The analysis of four different proteome fractions (cytosolic, enriched membrane, membrane shaving and extracellular fraction) resulted in the identification of 1582 of the 8270 predicted proteins. All 22 Acb-proteins and 21 of the 23 Pch-proteins were detected. Predicted membrane-associated, integral membrane or extracellular proteins of the pch and the acb gene cluster were found among the most abundant proteins in corresponding fractions. Intracellular biosynthetic proteins of both gene clusters were not only detected in the cytosolic, but also in the enriched membrane fraction, indicating that the biosynthesis of acarbose and putative pyochelin metabolites takes place at the inner membrane.\n\n\nBIOLOGICAL SIGNIFICANCE\nActinoplanes sp. SE50/110 is a natural producer of the α-glucosidase inhibitor acarbose, a bacterial secondary metabolite that is used as a drug for the treatment of type 2 diabetes, a disease which is a global pandemic that currently affects 387 million people and accounts for 11% of worldwide healthcare expenditures (www.idf.org). The work presented here is the first comprehensive investigation of protein localization and abundance in Actinoplanes sp. SE50/110 and provides an extensive source of information for the selection of genes for future mutational analysis and other hypothesis driven experiments. The conclusion that acarbose or pyochelin family siderophores are synthesized at the inner side of the cytoplasmic membrane determined from this work, indicates that studying corresponding intermediates will be challenging. In addition to previous studies on the genome and transcriptome, the work presented here demonstrates that the next omic level, the proteome, is now accessible for detailed physiological analysis of Actinoplanes sp. SE50/110, as well as mutants derived from this and related species.",
"title": ""
},
{
"docid": "86af81e39bce547a3f29b4851d033356",
"text": "Empirical studies largely support the continuity hypothesis of dreaming. Despite of previous research efforts, the exact formulation of the continuity hypothesis remains vague. The present paper focuses on two aspects: (1) the differential incorporation rate of different waking-life activities and (2) the magnitude of which interindividual differences in waking-life activities are reflected in corresponding differences in dream content. Using a correlational design, a positive, non-zero correlation coefficient will support the continuity hypothesis. Although many researchers stress the importance of emotional involvement on the incorporation rate of waking-life experiences into dreams, formulated the hypothesis that highly focused cognitive processes such as reading, writing, etc. are rarely found in dreams due to the cholinergic activation of the brain during dreaming. The present findings based on dream diaries and the exact measurement of waking activities replicated two recent questionnaire studies. These findings indicate that it will be necessary to specify the continuity hypothesis more fully and include factors (e.g., type of waking-life experience, emotional involvement) which modulate the incorporation rate of waking-life experiences into dreams. Whether the cholinergic state of the brain during REM sleep or other alterations of brain physiology (e.g., down-regulation of the dorsolateral prefrontal cortex) are the underlying factors of the rare occurrence of highly focused cognitive processes in dreaming remains an open question. Although continuity between waking life and dreaming has been demonstrated, i.e., interindividual differences in the amount of time spent with specific waking-life activities are reflected in dream content, methodological issues (averaging over a two-week period, small number of dreams) have limited the capacity for detecting substantial relationships in all areas. Nevertheless, it might be concluded that the continuity hypothesis in its present general form is not valid and should be elaborated and tested in a more specific way.",
"title": ""
},
{
"docid": "1e320f6c5ce9240f580aeb32a47619a1",
"text": "The human gut is populated with as many as 100 trillion cells, whose collective genome, the microbiome, is a reflection of evolutionary selection pressures acting at the level of the host and at the level of the microbial cell. The ecological rules that govern the shape of microbial diversity in the gut apply to mutualists and pathogens alike.",
"title": ""
},
{
"docid": "cb641fc639b86abadec4f85efc226c14",
"text": "The modernization of the US electric power infrastructure, especially in light of its aging, overstressed networks; shifts in social, energy and environmental policies; and also new vulnerabilities, is a national concern. Our systems are required to be more adaptive and secure than ever before. Consumers are also demanding increased power quality and reliability of supply and delivery. As such, power industries, government and national laboratories and consortia have developed increased interest in what is now called the Smart Grid of the future. The paper outlines Smart Grid intelligent functions that advance interactions of agents such as telecommunication, control, and optimization to achieve adaptability, self-healing, efficiency and reliability of power systems. The author also presents a special case for the development of Dynamic Stochastic Optimal Power Flow (DSOPF) technology as a tool needed in Smart Grid design. The integration of DSOPF to achieve the design goals with advanced DMS capabilities is discussed herein. This reference paper also outlines a research focus for developing the next generation of advanced tools for efficient and flexible power systems operation and control.",
"title": ""
},
{
"docid": "15d3618efa3413456c6aebf474b18c92",
"text": "The aim of this paper is to elucidate the implications of quantum computing in present cryptography and to introduce the reader to basic post-quantum algorithms. In particular the reader can delve into the following subjects: present cryptographic schemes (symmetric and asymmetric), differences between quantum and classical computing, challenges in quantum computing, quantum algorithms (Shor’s and Grover’s), public key encryption schemes affected, symmetric schemes affected, the impact on hash functions, and post quantum cryptography. Specifically, the section of Post-Quantum Cryptography deals with different quantum key distribution methods and mathematical-based solutions, such as the BB84 protocol, lattice-based cryptography, multivariate-based cryptography, hash-based signatures and code-based cryptography. Keywords—quantum computers; post-quantum cryptography; Shor’s algorithm; Grover’s algorithm; asymmetric cryptography; symmetric cryptography",
"title": ""
},
{
"docid": "57d6a2056453baf04aae577e4a2c048a",
"text": "Community detection is an important issue in social network analysis. Most existing methods detect communities through analyzing the linkage of the network. The drawback is that each community identified by those methods can only reflect the strength of connections, but it cannot reflect the semantics such as the interesting topics shared by people. To address this problem, we propose a topic oriented community detection approach which combines both social objects clustering and link analysis. We first use a subspace clustering algorithm to group all the social objects into topics. Then we divide the members that are involved in those social objects into topical clusters, each corresponding to a distinct topic. In order to differentiate the strength of connections, we perform a link analysis on each topical cluster to detect the topical communities. Experiments on real data sets have shown that our approach was able to identify more meaningful communities. The quantitative evaluation indicated that our approach can achieve a better performance when the topics are at least as important as the links to the analysis.",
"title": ""
},
{
"docid": "2b2c30fa2dc19ef7c16cf951a3805242",
"text": "A standard approach to estimating online click-based metrics of a ranking function is to run it in a controlled experiment on live users. While reliable and popular in practice, configuring and running an online experiment is cumbersome and time-intensive. In this work, inspired by recent successes of offline evaluation techniques for recommender systems, we study an alternative that uses historical search log to reliably predict online click-based metrics of a \\emph{new} ranking function, without actually running it on live users. To tackle novel challenges encountered in Web search, variations of the basic techniques are proposed. The first is to take advantage of diversified behavior of a search engine over a long period of time to simulate randomized data collection, so that our approach can be used at very low cost. The second is to replace exact matching (of recommended items in previous work) by \\emph{fuzzy} matching (of search result pages) to increase data efficiency, via a better trade-off of bias and variance. Extensive experimental results based on large-scale real search data from a major commercial search engine in the US market demonstrate our approach is promising and has potential for wide use in Web search.",
"title": ""
}
] | scidocsrr |
dc7825dc7a3d9da17b5958af4df5afda | Achieving Flexible and Self-Contained Data Protection in Cloud Computing | [
{
"docid": "347c3929efc37dee3230189e576f14ab",
"text": "Attribute-based encryption (ABE) is a vision of public key encryption that allows users to encrypt and decrypt messages based on user attributes. This functionality comes at a cost. In a typical implementation, the size of the ciphertext is proportional to the number of attributes associated with it and the decryption time is proportional to the number of attributes used during decryption. Specifically, many practical ABE implementations require one pairing operation per attribute used during decryption. This work focuses on designing ABE schemes with fast decryption algorithms. We restrict our attention to expressive systems without systemwide bounds or limitations, such as placing a limit on the number of attributes used in a ciphertext or a private key. In this setting, we present the first key-policy ABE system where ciphertexts can be decrypted with a constant number of pairings. We show that GPSW ciphertexts can be decrypted with only 2 pairings by increasing the private key size by a factor of |Γ |, where Γ is the set of distinct attributes that appear in the private key. We then present a generalized construction that allows each system user to independently tune various efficiency tradeoffs to their liking on a spectrum where the extremes are GPSW on one end and our very fast scheme on the other. This tuning requires no changes to the public parameters or the encryption algorithm. Strategies for choosing an individualized user optimization plan are discussed. Finally, we discuss how these ideas can be translated into the ciphertext-policy ABE setting at a higher cost.",
"title": ""
}
] | [
{
"docid": "288bf12e3949a568b1f7f0aad1f2d365",
"text": "Process mining can be seen as the “missing link” between data mining and business process management. The lion's share of process mining research has been devoted to the discovery of procedural process models from event logs. However, often there are predefined constraints that (partially) describe the normative or expected process, e.g., “activity A should be followed by B” or “activities A and B should never be both executed”. A collection of such constraints is called a declarative process model. Although it is possible to discover such models based on event data, this paper focuses on aligning event logs and predefined declarative process models. Discrepancies between log and model are mediated such that observed log traces are related to paths in the model. The resulting alignments provide sophisticated diagnostics that pinpoint where deviations occur and how severe they are. Moreover, selected parts of the declarative process model can be used to clean and repair the event log before applying other process mining techniques. Our alignment-based approach for preprocessing and conformance checking using declarative process models has been implemented in ProM and has been evaluated using both synthetic logs and real-life logs from a Dutch hospital. & 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "c5b050d6fff4e5ce4d4d79c10625e33f",
"text": "Quadratic differentials naturally define analytic orientation fields on planar surfaces. We propose to model orientation fields of fingerprints by specifying quadratic differentials. Models for all fingerprint classes such as arches, loops and whorls are laid out. These models are parametrised by few, geometrically interpretable parameters which are invariant under Euclidean motions. We demonstrate their ability in adapting to given, observed orientation fields, and we compare them to existing models using the fingerprint images of the NIST Special Database 4. We also illustrate that these models allow for extrapolation into unobserved regions. This goes beyond the scope of earlier models for the orientation field as those are restricted to the observed planar fingerprint region. Within the framework of quadratic differentials we are able to verify analytically Penrose's formula for the singularities on a palm [L. S. Penrose, \"Dermatoglyphics,\" Scientific American, vol. 221, no. 6, pp. 73--84, 1969]. Potential applications of these models are the use of their parameters as indices of large fingerprint databases, as well as the definition of intrinsic coordinates for single fingerprint images.",
"title": ""
},
{
"docid": "f20391d5eb79b32f06d31d27ad51bb6c",
"text": "Fanconi anemia (FA) is a recessively inherited disease characterized by multiple symptoms including growth retardation, skeletal abnormalities, and bone marrow failure. The FA diagnosis is complicated due to the fact that the clinical manifestations are both diverse and variable. A chromosomal breakage test using a DNA cross-linking agent, in which cells from an FA patient typically exhibit an extraordinarily sensitive response, has been considered the gold standard for the ultimate diagnosis of FA. In the majority of FA patients the test results are unambiguous, although in some cases the presence of hematopoietic mosaicism may complicate interpretation of the data. However, some diagnostic overlap with other syndromes has previously been noted in cases with Nijmegen breakage syndrome. Here we present results showing that misdiagnosis may also occur with patients suffering from two of the three currently known cohesinopathies, that is, Roberts syndrome (RBS) and Warsaw breakage syndrome (WABS). This complication may be avoided by scoring metaphase chromosomes-in addition to chromosomal breakage-for spontaneously occurring premature centromere division, which is characteristic for RBS and WABS, but not for FA.",
"title": ""
},
{
"docid": "893c7a1694596d0c8d58b819500ff9f9",
"text": "A recently introduced deep neural network (DNN) has achieved some unprecedented gains in many challenging automatic speech recognition (ASR) tasks. In this paper deep neural network hidden Markov model (DNN-HMM) acoustic models is introduced to phonotactic language recognition and outperforms artificial neural network hidden Markov model (ANN-HMM) and Gaussian mixture model hidden Markov model (GMM-HMM) acoustic model. Experimental results have confirmed that phonotactic language recognition system using DNN-HMM acoustic model yields relative equal error rate reduction of 28.42%, 14.06%, 18.70% and 12.55%, 7.20%, 2.47% for 30s, 10s, 3s comparing with the ANN-HMM and GMM-HMM approaches respectively on National Institute of Standards and Technology language recognition evaluation (NIST LRE) 2009 tasks.",
"title": ""
},
{
"docid": "a441f01dae68134b419aa33f1f9588a6",
"text": "In this work we present a technique for using natural language to help reinforcement learning generalize to unseen environments using neural machine translation techniques. These techniques are then integrated into policy shaping to make it more effective at learning in unseen environments. We evaluate this technique using the popular arcade game, Frogger, and show that our modified policy shaping algorithm improves over a Q-learning agent as well as a baseline version of policy shaping.",
"title": ""
},
{
"docid": "12915285ce8f1dd1f902562fd8c7500d",
"text": "Expanding view of minimal invasive surgery horizon reveals new practice areas for surgeons and patients. Laparoscopic inguinal hernia repair is an example in progress wondered by many patients and surgeons. Advantages in laparoscopic repair motivate surgeons to discover this popular field. In addition, patients search the most convenient surgical method for themselves today. Laparoscopic approaches to inguinal hernia surgery have become popular as a result of the development of experience about different laparoscopic interventions, and these techniques are increasingly used these days. As other laparoscopic surgical methods, experience is the most important point in order to obtain good results. This chapter aims to show technical details, pitfalls and the literature results about two methods that are commonly used in laparoscopic inguinal hernia repair.",
"title": ""
},
{
"docid": "0a34ed8b01c6c700e7bb8bb15644590f",
"text": "Almost all automatic semantic role labeling (SRL) systems rely on a preliminary parsing step that derives a syntactic structure from the sentence being analyzed. This makes the choice of syntactic representation an essential design decision. In this paper, we study the influence of syntactic representation on the performance of SRL systems. Specifically, we compare constituent-based and dependencybased representations for SRL of English in the FrameNet paradigm. Contrary to previous claims, our results demonstrate that the systems based on dependencies perform roughly as well as those based on constituents: For the argument classification task, dependencybased systems perform slightly higher on average, while the opposite holds for the argument identification task. This is remarkable because dependency parsers are still in their infancy while constituent parsing is more mature. Furthermore, the results show that dependency-based semantic role classifiers rely less on lexicalized features, which makes them more robust to domain changes and makes them learn more efficiently with respect to the amount of training data.",
"title": ""
},
{
"docid": "352c61af854ffc6dab438e7a1be56fcb",
"text": "Question-answering (QA) on video contents is a significant challenge for achieving human-level intelligence as it involves both vision and language in real-world settings. Here we demonstrate the possibility of an AI agent performing video story QA by learning from a large amount of cartoon videos. We develop a video-story learning model, i.e. Deep Embedded Memory Networks (DEMN), to reconstruct stories from a joint scene-dialogue video stream using a latent embedding space of observed data. The video stories are stored in a long-term memory component. For a given question, an LSTM-based attention model uses the long-term memory to recall the best question-story-answer triplet by focusing on specific words containing key information. We trained the DEMN on a novel QA dataset of children’s cartoon video series, Pororo. The dataset contains 16,066 scene-dialogue pairs of 20.5-hour videos, 27,328 fine-grained sentences for scene description, and 8,913 story-related QA pairs. Our experimental results show that the DEMN outperforms other QA models. This is mainly due to 1) the reconstruction of video stories in a scene-dialogue combined form that utilize the latent embedding and 2) attention. DEMN also achieved state-of-the-art results on the MovieQA benchmark.",
"title": ""
},
{
"docid": "18b32aa0ffd8a3a7b84f9768d57b5cde",
"text": "In this paper we propose a recognition system of medical concepts from free text clinical reports. Our approach tries to recognize also concepts which are named with local terminology, with medical writing scripts, short words, abbreviations and even spelling mistakes. We consider a clinical terminology ontology (Snomed-CT), as a dictionary of concepts. In a first step we obtain an embedding model using word2vec methodology from a big corpus database of clinical reports. Word vectors are positioned in the vector space such that words that share common contexts in the corpus are located in close proximity to one another in the space, and so the geometrical similarity can be considered a measure of semantic relation. We have considered 615513 emergency clinical reports from the Hospital \"Rafael Méndez\" in Lorca, Murcia. In these reports there are a lot of local language of the emergency domain, medical writing scripts, short words, abbreviations and even spelling mistakes. With the model obtained we represent the words and sentences as vectors, and by applying cosine similarity we identify which concepts of the ontology are named in the text. Finally, we represent the clinical reports (EHR) like a bag of concepts, and use this representation to search similar documents. The paper illustrates 1) how we build the word2vec model from the free text clinical reports, 2) How we extend the embedding from words to sentences, and 3) how we use the cosine similarity to identify concepts. The experimentation, and expert human validation, shows that: a) the concepts named in the text with the ontology terminology are well recognized, and b) others concepts that are not named with the ontology terminology are also recognized, obtaining a high precision and recall measures.",
"title": ""
},
{
"docid": "4080a61019e992a89b9120de611ee844",
"text": "An emotional version of Sapir-Whorf hypothesis suggests that differences in language emotionalities influence differences among cultures no less than conceptual differences. Conceptual contents of languages and cultures to significant extent are determined by words and their semantic differences; these could be borrowed among languages and exchanged among cultures. Emotional differences, as suggested in the paper, are related to grammar and mostly cannot be borrowed. Conceptual and emotional mechanisms of languages are considered here along with their functions in the mind and cultural evolution. A fundamental contradiction in human mind is considered: language evolution requires reduced emotionality, but “too low” emotionality makes language “irrelevant to life,” disconnected from sensory-motor experience. Neural mechanisms of these processes are suggested as well as their mathematical models: the knowledge instinct, the language instinct, the dual model connecting language and cognition, dynamic logic, neural modeling fields. Mathematical results are related to cognitive science, linguistics, and psychology. Experimental evidence and theoretical arguments are discussed. Approximate equations for evolution of human minds and cultures are obtained. Their solutions identify three types of cultures: \"conceptual\"-pragmatic cultures, in which emotionality of language is reduced and differentiation overtakes synthesis resulting in fast evolution at the price of uncertainty of values, self doubts, and internal crises; “traditional-emotional” cultures where differentiation lags behind synthesis, resulting in cultural stability at the price of stagnation; and “multi-cultural” societies combining fast cultural evolution and stability. Unsolved problems and future theoretical and experimental directions are discussed.",
"title": ""
},
{
"docid": "a62dc7e25b050addad1c27d92deee8b7",
"text": "Potentially dangerous cryptography errors are well-documented in many applications. Conventional wisdom suggests that many of these errors are caused by cryptographic Application Programming Interfaces (APIs) that are too complicated, have insecure defaults, or are poorly documented. To address this problem, researchers have created several cryptographic libraries that they claim are more usable, however, none of these libraries have been empirically evaluated for their ability to promote more secure development. This paper is the first to examine both how and why the design and resulting usability of different cryptographic libraries affects the security of code written with them, with the goal of understanding how to build effective future libraries. We conducted a controlled experiment in which 256 Python developers recruited from GitHub attempt common tasks involving symmetric and asymmetric cryptography using one of five different APIs. We examine their resulting code for functional correctness and security, and compare their results to their self-reported sentiment about their assigned library. Our results suggest that while APIs designed for simplicity can provide security benefits – reducing the decision space, as expected, prevents choice of insecure parameters – simplicity is not enough. Poor documentation, missing code examples, and a lack of auxiliary features such as secure key storage, caused even participants assigned to simplified libraries to struggle with both basic functional correctness and security. Surprisingly, the availability of comprehensive documentation and easy-to-use code examples seems to compensate for more complicated APIs in terms of functionally correct results and participant reactions, however, this did not extend to security results. We find it particularly concerning that for about 20% of functionally correct tasks, across libraries, participants believed their code was secure when it was not. 
Our results suggest that while new cryptographic libraries that want to promote effective security should offer a simple, convenient interface, this is not enough: they should also, and perhaps more importantly, ensure support for a broad range of common tasks and provide accessible documentation with secure, easy-to-use code examples.",
"title": ""
},
{
"docid": "2b8d90c11568bb8b172eca20a48fd712",
"text": "INTRODUCTION\nCancer incidence and mortality estimates for 25 cancers are presented for the 40 countries in the four United Nations-defined areas of Europe and for the European Union (EU-27) for 2012.\n\n\nMETHODS\nWe used statistical models to estimate national incidence and mortality rates in 2012 from recently-published data, predicting incidence and mortality rates for the year 2012 from recent trends, wherever possible. The estimated rates in 2012 were applied to the corresponding population estimates to obtain the estimated numbers of new cancer cases and deaths in Europe in 2012.\n\n\nRESULTS\nThere were an estimated 3.45 million new cases of cancer (excluding non-melanoma skin cancer) and 1.75 million deaths from cancer in Europe in 2012. The most common cancer sites were cancers of the female breast (464,000 cases), followed by colorectal (447,000), prostate (417,000) and lung (410,000). These four cancers represent half of the overall burden of cancer in Europe. The most common causes of death from cancer were cancers of the lung (353,000 deaths), colorectal (215,000), breast (131,000) and stomach (107,000). In the European Union, the estimated numbers of new cases of cancer were approximately 1.4 million in males and 1.2 million in females, and around 707,000 men and 555,000 women died from cancer in the same year.\n\n\nCONCLUSION\nThese up-to-date estimates of the cancer burden in Europe alongside the description of the varying distribution of common cancers at both the regional and country level provide a basis for establishing priorities to cancer control actions in Europe. The important role of cancer registries in disease surveillance and in planning and evaluating national cancer plans is becoming increasingly recognised, but needs to be further advocated. The estimates and software tools for further analysis (EUCAN 2012) are available online as part of the European Cancer Observatory (ECO) (http://eco.iarc.fr).",
"title": ""
},
{
"docid": "fcd98a7540dd59e74ea71b589c255adb",
"text": "Current Domain Adaptation (DA) methods based on deep architectures assume that the source samples arise from a single distribution. However, in practice most datasets can be regarded as mixtures of multiple domains. In these cases exploiting single-source DA methods for learning target classifiers may lead to sub-optimal, if not poor, results. In addition, in many applications it is difficult to manually provide the domain labels for all source data points, i.e. latent domains should be automatically discovered. This paper introduces a novel Convolutional Neural Network (CNN) architecture which (i) automatically discovers latent domains in visual datasets and (ii) exploits this information to learn robust target classifiers. Our approach is based on the introduction of two main components, which can be embedded into any existing CNN architecture: (i) a side branch that automatically computes the assignment of a source sample to a latent domain and (ii) novel layers that exploit domain membership information to appropriately align the distribution of the CNN internal feature representations to a reference distribution. We test our approach on publicly-available datasets, showing that it outperforms state-of-the-art multi-source DA methods by a large margin.",
"title": ""
},
{
"docid": "4a75586965854ba2cba2fed18528e72b",
"text": "Although there have been some promising results in computer lipreading, there has been a paucity of data on which to train automatic systems. However the recent emergence of the TCDTIMIT corpus, with around 6000 words, 59 speakers and seven hours of recorded audio-visual speech, allows the deployment of more recent techniques in audio-speech such as Deep Neural Networks (DNNs) and sequence discriminative training. In this paper we combine the DNN with a Hidden Markov Model (HMM) to the, so called, hybrid DNN-HMM configuration which we train using a variety of sequence discriminative training methods. This is then followed with a weighted finite state transducer. The conclusion is that the DNN offers very substantial improvement over a conventional classifier which uses a Gaussian Mixture Model (GMM) to model the densities even when optimised with Speaker Adaptive Training. Sequence adaptive training offers further improvements depending on the precise variety employed but those improvements are of the order of 10% improvement in word accuracy. Putting these two results together implies that lipreading is moving from something of rather esoteric interest to becoming a practical reality in the foreseeable future.",
"title": ""
},
{
"docid": "7438ff346fa26661822a3a96c13c6d6e",
"text": "As in any new technology adoption in organizations, big data solutions (BDS) also presents some security threat and challenges, especially due to the characteristics of big data itself the volume, velocity and variety of data. Even though many security considerations associated to the adoption of BDS have been publicized, it remains unclear whether these publicized facts have any actual impact on the adoption of the solutions. Hence, it is the intent of this research-in-progress to examine the security determinants by focusing on the influence that various technological factors in security, organizational security view and security related environmental factors have on BDS adoption. One technology adoption framework, the TOE (technological-organizational-environmental) framework is adopted as the main conceptual research framework. This research will be conducted using a Sequential Explanatory Mixed Method approach. Quantitative method will be used for the first part of the research, specifically using an online questionnaire survey. The result of this first quantitative process will then be further explored and complemented with a case study. Results generated from both quantitative and qualitative phases will then be triangulated and a cross-study synthesis will be conducted to form the final result and discussion.",
"title": ""
},
{
"docid": "c675a2f1fed4ccb5708be895190b02cd",
"text": "Decompilation is important for many security applications; it facilitates the tedious task of manual malware reverse engineering and enables the use of source-based security tools on binary code. This includes tools to find vulnerabilities, discover bugs, and perform taint tracking. Recovering high-level control constructs is essential for decompilation in order to produce structured code that is suitable for human analysts and sourcebased program analysis techniques. State-of-the-art decompilers rely on structural analysis, a pattern-matching approach over the control flow graph, to recover control constructs from binary code. Whenever no match is found, they generate goto statements and thus produce unstructured decompiled output. Those statements are problematic because they make decompiled code harder to understand and less suitable for program analysis. In this paper, we present DREAM, the first decompiler to offer a goto-free output. DREAM uses a novel patternindependent control-flow structuring algorithm that can recover all control constructs in binary programs and produce structured decompiled code without any goto statement. We also present semantics-preserving transformations that can transform unstructured control flow graphs into structured graphs. We demonstrate the correctness of our algorithms and show that we outperform both the leading industry and academic decompilers: Hex-Rays and Phoenix. We use the GNU coreutils suite of utilities as a benchmark. Apart from reducing the number of goto statements to zero, DREAM also produced more compact code (less lines of code) for 72.7% of decompiled functions compared to Hex-Rays and 98.8% compared to Phoenix. We also present a comparison of Hex-Rays and DREAM when decompiling three samples from Cridex, ZeusP2P, and SpyEye malware families.",
"title": ""
},
{
"docid": "e0f7c82754694084c6d05a2d37be3048",
"text": "Introducing variability while maintaining coherence is a core task in learning to generate utterances in conversation. Standard neural encoder-decoder models and their extensions using conditional variational autoencoder often result in either trivial or digressive responses. To overcome this, we explore a novel approach that injects variability into neural encoder-decoder via the use of external memory as a mixture model, namely Variational Memory Encoder-Decoder (VMED). By associating each memory read with a mode in the latent mixture distribution at each timestep, our model can capture the variability observed in sequential data such as natural conversations. We empirically compare the proposed model against other recent approaches on various conversational datasets. The results show that VMED consistently achieves significant improvement over others in both metricbased and qualitative evaluations.",
"title": ""
},
{
"docid": "1a732de3138d5771bea1590bb36f4db6",
"text": "Implanted sensors and actuators in the human body promise in-situ health monitoring and rapid advancements in personalized medicine. We propose a new paradigm where such implants may communicate wirelessly through a technique called as galvanic coupling, which uses weak electrical signals and the conduction properties of body tissues. While galvanic coupling overcomes the problem of massive absorption of RF waves in the body, the unique intra-body channel raises several questions on the topology of the implants and the external (i.e., on skin) data collection nodes. This paper makes the first contributions towards (i) building an energy-efficient topology through optimal placement of data collection points/relays using measurement-driven tissue channel models, and (ii) balancing the energy consumption over the entire implant network so that the application needs are met. We achieve this via a two-phase iterative clustering algorithm for the implants and formulate an optimization problem that decides the position of external data-gathering points. Our theoretical results are validated via simulations and experimental studies on real tissues, with demonstrated increase in the network lifetime.",
"title": ""
},
{
"docid": "d2d84d12216464e361f417c397212e63",
"text": "Academic search engines and digital libraries provide convenient online search and access facilities for scientific publications. However, most existing systems do not include books in their collections although several books are freely available online. Academic books are different from papers in terms of their length, contents and structure. We argue that accounting for academic books is important in understanding and assessing scientific impact. We introduce an open-book search engine that extracts and indexes metadata, contents, and bibliography from online PDF book documents. To the best of our knowledge, no previous work gives a systematical study on building a search engine for books.\n We propose a hybrid approach for extracting title and authors from a book that combines results from CiteSeer, a rule based extractor, and a SVM based extractor, leveraging web knowledge. For \"table of contents\" recognition, we propose rules based on multiple regularities based on numbering and ordering. In addition, we study bibliography extraction and citation parsing for a large dataset of books. Finally, we use the multiple fields available in books to rank books in response to search queries. Our system can effectively extract metadata and contents from large collections of online books and provides efficient book search and retrieval facilities.",
"title": ""
}
] | scidocsrr |
d7cc6a11815526daa38bb207ae0bc575 | Emotional disorders: cluster 4 of the proposed meta-structure for DSM-V and ICD-11. | [
{
"docid": "32fbccbe3b8795c0d2e2934acbdfcc06",
"text": "Epidemiologic studies indicate that children exposed to early adverse experiences are at increased risk for the development of depression, anxiety disorders, or both. Persistent sensitization of central nervous system (CNS) circuits as a consequence of early life stress, which are integrally involved in the regulation of stress and emotion, may represent the underlying biological substrate of an increased vulnerability to subsequent stress as well as to the development of depression and anxiety. A number of preclinical studies suggest that early life stress induces long-lived hyper(re)activity of corticotropin-releasing factor (CRF) systems as well as alterations in other neurotransmitter systems, resulting in increased stress responsiveness. Many of the findings from these preclinical studies are comparable to findings in adult patients with mood and anxiety disorders. Emerging evidence from clinical studies suggests that exposure to early life stress is associated with neurobiological changes in children and adults, which may underlie the increased risk of psychopathology. Current research is focused on strategies to prevent or reverse the detrimental effects of early life stress on the CNS. The identification of the neurobiological substrates of early adverse experience is of paramount importance for the development of novel treatments for children, adolescents, and adults.",
"title": ""
}
] | [
{
"docid": "83cfa05fc29b4eb4eb7b954ba53498f5",
"text": "Smartphones, the devices we carry everywhere with us, are being heavily tracked and have undoubtedly become a major threat to our privacy. As “Tracking the trackers” has become a necessity, various static and dynamic analysis tools have been developed in the past. However, today, we still lack suitable tools to detect, measure and compare the ongoing tracking across mobile OSs. To this end, we propose MobileAppScrutinator, based on a simple yet efficient dynamic analysis approach, that works on both Android and iOS (the two most popular OSs today). To demonstrate the current trend in tracking, we select 140 most representative Apps available on both Android and iOS AppStores and test them with MobileAppScrutinator. In fact, choosing the same set of apps on both Android and iOS also enables us to compare the ongoing tracking on these two OSs. Finally, we also discuss the effectiveness of privacy safeguards available on Android and iOS. We show that neither Android nor iOS privacy safeguards in their present state are completely satisfying.",
"title": ""
},
{
"docid": "2477e41b180e29112e9d10cecd021034",
"text": "OBJECTIVE\nResearch in both animals and humans indicates that cannabidiol (CBD) has antipsychotic properties. The authors assessed the safety and effectiveness of CBD in patients with schizophrenia.\n\n\nMETHOD\nIn an exploratory double-blind parallel-group trial, patients with schizophrenia were randomized in a 1:1 ratio to receive CBD (1000 mg/day; N=43) or placebo (N=45) alongside their existing antipsychotic medication. Participants were assessed before and after treatment using the Positive and Negative Syndrome Scale (PANSS), the Brief Assessment of Cognition in Schizophrenia (BACS), the Global Assessment of Functioning scale (GAF), and the improvement and severity scales of the Clinical Global Impressions Scale (CGI-I and CGI-S).\n\n\nRESULTS\nAfter 6 weeks of treatment, compared with the placebo group, the CBD group had lower levels of positive psychotic symptoms (PANSS: treatment difference=-1.4, 95% CI=-2.5, -0.2) and were more likely to have been rated as improved (CGI-I: treatment difference=-0.5, 95% CI=-0.8, -0.1) and as not severely unwell (CGI-S: treatment difference=-0.3, 95% CI=-0.5, 0.0) by the treating clinician. Patients who received CBD also showed greater improvements that fell short of statistical significance in cognitive performance (BACS: treatment difference=1.31, 95% CI=-0.10, 2.72) and in overall functioning (GAF: treatment difference=3.0, 95% CI=-0.4, 6.4). CBD was well tolerated, and rates of adverse events were similar between the CBD and placebo groups.\n\n\nCONCLUSIONS\nThese findings suggest that CBD has beneficial effects in patients with schizophrenia. As CBD's effects do not appear to depend on dopamine receptor antagonism, this agent may represent a new class of treatment for the disorder.",
"title": ""
},
{
"docid": "55928e118303b080d49a399da1f9dba3",
"text": "This paper describes a customized database and a comprehensive set of queries that can be used for systematic benchmarking of relational database systems. Designing this database and a set of carefully tuned benchmarks represents a first attempt in developing a scientific methodology for performance evaluation of database management systems. We have used this database to perform a comparative evaluation of the database machine DIRECT, the \"university\" and \"commercial\" versions of the INGRES database system, the relational database system ORACLE, and the IDM 500 database machine. We present a subset of our measurements (for the single user case only), that constitute a preliminary performance evaluation of these systems.",
"title": ""
},
{
"docid": "63d26f3336960c1d92afbd3a61a9168c",
"text": "The location-based social networks have been becoming flourishing in recent years. In this paper, we aim to estimate the similarity between users according to their physical location histories (represented by GPS trajectories). This similarity can be regarded as a potential social tie between users, thereby enabling friend and location recommendations. Different from previous work using social structures or directly matching users’ physical locations, this approach model a user’s GPS trajectories with a semantic location history (SLH), e.g., shopping malls ? restaurants ? cinemas. Then, we measure the similarity between different users’ SLHs by using our maximal travel match (MTM) algorithm. The advantage of our approach lies in two aspects. First, SLH carries more semantic meanings of a user’s interests beyond low-level geographic positions. Second, our approach can estimate the similarity between two users without overlaps in the geographic spaces, e.g., people living in different cities. When matching SLHs, we consider the sequential property, the granularity and the popularity of semantic locations. We evaluate our method based on a realworld GPS dataset collected by 109 users in a period of 1 year. The results show that SLH outperforms a physicallocation-based approach and MTM is more effective than several widely used sequence matching approaches given this application scenario.",
"title": ""
},
{
"docid": "9c17dad32d130072b1d26b21b8c97896",
"text": "A novel planar inverted-F antenna (PIFA) is designed in this paper. Compared to the previous PIFA, the proposed PIFA can enhance bandwidths and achieve multi-band which is loaded with a T-shaped ground plane and etched slots on ground plane and a rectangular patch. It covered 4 service bands, including GSM900, DCS1800, PCS1900 and ISM2450 under the criteria -7 dB return loss for the first band and -10 dB for the last bands. Process of designing and calculation of parameters are presented in detail. The simulation results showed that each band has good characteristics and the bandwidth has been greatly expanded.",
"title": ""
},
{
"docid": "01f741144e6304915a6d086165bfe17d",
"text": "The standardization and performance testing of analysis tools is a prerequisite to widespread adoption of genome-wide sequencing, particularly in the clinic. However, performance testing is currently complicated by the paucity of standards and comparison metrics, as well as by the heterogeneity in sequencing platforms, applications and protocols. Here we present the genome comparison and analytic testing (GCAT) platform to facilitate development of performance metrics and comparisons of analysis tools across these metrics. Performance is reported through interactive visualizations of benchmark and performance testing data, with support for data slicing and filtering. The platform is freely accessible at http://www.bioplanet.com/gcat.",
"title": ""
},
{
"docid": "0dd4f05f9bd3d582b9fb9c64f00ed697",
"text": "Today, among other challenges, teaching students how to write computer programs for the first time can be an important criterion for whether students in computing will remain in their program of study, i.e. Computer Science or Information Technology. Not learning to program a computer as a computer scientist or information technologist can be compared to a mathematician not learning algebra. For a mathematician this would be an extremely limiting situation. For a computer scientist, not learning to program imposes a similar severe limitation on the budding computer scientist. Therefore it is not a question as to whether programming should be taught rather it is a question of how to maximize aspects of teaching programming so that students are less likely to be discouraged when learning to program. Different criteria have been used to select first programming languages. Computer scientists have attempted to establish criteria for selecting the first programming language to teach a student. This paper examines the criteria used to select first programming languages and the issues that novices face when learning to program in an effort to create a more comprehensive model for selecting first programming languages.",
"title": ""
},
{
"docid": "ade9860157680b2ca6820042f0cda302",
"text": "This chapter has two main objectives: to review influential ideas and findings in the literature and to outline the organization and content of the volume. The first part of the chapter lays a conceptual and empirical foundation for other chapters in the volume. Specifically, the chapter defines and distinguishes the key concepts of prejudice, stereotypes, and discrimination, highlighting how bias can occur at individual, institutional, and cultural levels. We also review different theoretical perspectives on these phenomena, including individual differences, social cognition, functional relations between groups, and identity concerns. We offer a broad overview of the field, charting how this area has developed over previous decades and identify emerging trends and future directions. The second part of the chapter focuses specifically on the coverage of the area in the present volume. It explains the organization of the book and presents a brief synopsis of the chapters in the volume. Throughout psychology’s history, researchers have evinced strong interest in understanding prejudice, stereotyping, and discrimination (Brewer & Brown, 1998; Dovidio, 2001; Duckitt, 1992; Fiske, 1998), as well as the phenomenon of intergroup bias more generally (Hewstone, Rubin, & Willis, 2002). Intergroup bias generally refers to the systematic tendency to evaluate one’s own membership group (the ingroup) or its members more favorably than a non-membership group (the outgroup) or its members. These topics have a long history in the disciplines of anthropology and sociology (e.g., Sumner, 1906). However, social psychologists, building on the solid foundations of Gordon Allport’s (1954) masterly volume, The Nature of Prejudice, have developed a systematic and more nuanced analysis of bias and its associated phenomena. 
Interest in prejudice, stereotyping, and discrimination is currently shared by allied disciplines such as sociology and political science, and emerging disciplines such as neuroscience. The practical implications of this 4 OVERVIEW OF THE TOPIC large body of research are widely recognized in the law (Baldus, Woodworth, & Pulaski, 1990; Vidmar, 2003), medicine (Institute of Medicine, 2003), business (e.g., Brief, Dietz, Cohen, et al., 2000), the media, and education (e.g., Ben-Ari & Rich, 1997; Hagendoorn &",
"title": ""
},
{
"docid": "57d0e046517cc669746d4ecda352dc3f",
"text": "This paper is about understanding the nature of bug fixing by analyzing thousands of bug fix transactions of software repositories. It then places this learned knowledge in the context of automated program repair. We give extensive empirical results on the nature of human bug fixes at a large scale and a fine granularity with abstract syntax tree differencing. We set up mathematical reasoning on the search space of automated repair and the time to navigate through it. By applying our method on 14 repositories of Java software and 89,993 versioning transactions, we show that not all probabilistic repair models are equivalent.",
"title": ""
},
{
"docid": "829b910e2c73ee15866fc59de4884200",
"text": "Shared-memory multiprocessors are frequently used as compute servers with multiple parallel applications executing at the same time. In such environments, the efficiency of a parallel application can be significantly affected by the operating system scheduling policy. In this paper, we use detailed simulation studies to evaluate the performance of several different scheduling strategies, These include regular priority scheduling, coscheduling or gang scheduling, process control with processor partitioning, handoff scheduling, and affinity-based scheduling. We also explore tradeoffs between the use of busy-waiting and blocking synchronization primitives and their interactions with the scheduling strategies. Since effective use of caches is essential to achieving high performance, a key focus is on the impact of the scheduling strategies on the caching behavior of the applications.Our results show that in situations where the number of processes exceeds the number of processors, regular priority-based scheduling in conjunction with busy-waiting synchronization primitives results in extremely poor processor utilization. In such situations, use of blocking synchronization primitives can significantly improve performance. Process control and gang scheduling strategies are shown to offer the highest performance, and their performance is relatively independent of the synchronization method used. However, for applications that have sizable working sets that fit into the cache, process control performs better than gang scheduling. For the applications considered, the performance gains due to handoff scheduling and processor affinity are shown to be small.",
"title": ""
},
{
"docid": "dfa51004b99bce29e644fbcca4b833a5",
"text": "This paper presents Latent Sampling-based Motion Planning (L-SBMP), a methodology towards computing motion plans for complex robotic systems by learning a plannable latent representation. Recent works in control of robotic systems have effectively leveraged local, low-dimensional embeddings of high-dimensional dynamics. In this paper we combine these recent advances with techniques from samplingbased motion planning (SBMP) in order to design a methodology capable of planning for high-dimensional robotic systems beyond the reach of traditional approaches (e.g., humanoids, or even systems where planning occurs in the visual space). Specifically, the learned latent space is constructed through an autoencoding network, a dynamics network, and a collision checking network, which mirror the three main algorithmic primitives of SBMP, namely state sampling, local steering, and collision checking. Notably, these networks can be trained through only raw data of the system’s states and actions along with a supervising collision checker. Building upon these networks, an RRT-based algorithm is used to plan motions directly in the latent space – we refer to this exploration algorithm as Learned Latent RRT (L2RRT). This algorithm globally explores the latent space and is capable of generalizing to new environments. The overall methodology is demonstrated on two planning problems, namely a visual planning problem, whereby planning happens in the visual (pixel) space, and a humanoid robot planning problem.",
"title": ""
},
{
"docid": "e742aa091dae6227994cffcdb5165769",
"text": "In this paper, a new adaptive multi-batch experience replay scheme is proposed for proximal policy optimization (PPO) for continuous action control. On the contrary to original PPO, the proposed scheme uses the batch samples of past policies as well as the current policy for the update for the next policy, where the number of the used past batches is adaptively determined based on the oldness of the past batches measured by the average importance sampling (IS) weight. The new algorithm constructed by combining PPO with the proposed multi-batch experience replay scheme maintains the advantages of original PPO such as random minibatch sampling and small bias due to low IS weights by storing the pre-computed advantages and values and adaptively determining the mini-batch size. Numerical results show that the proposed method significantly increases the speed and stability of convergence on various continuous control tasks compared to original PPO.",
"title": ""
},
{
"docid": "27381c67ea64e84846fb3ed156304288",
"text": "The mapping of lab tests to the Laboratory Test Code controlled terminology in CDISC-SDTM § can be a challenge. One has to find candidates in the extensive controlled terminology list. Then there can be multiple lab tests that map to a single SDTM controlled term. This means additional variables must be used in order to produce a unique test definition (e.g. LBCAT, LBSPEC, LBMETHOD and/or LBELTM). Finally, it can occur that a controlled term is not available and a code needs to be defined in agreement with the rules for Lab tests. This paper describes my experience with the implementation of SDTM controlled terminology for lab tests during an SDTM conversion activity. In six clinical studies 124 lab tests were mapped to 101 SDTM controlled terms. The lab tests included routine lab parameters, coagulation parameters, hormones, glucose tolerance test and pregnancy test. INTRODUCTION This paper aims to give detailed examples of SDTM LB datasets that were created for six studies included in an FDA submission. Background information on the conversion project that formed the context of this work can be found in an earlier PhUSE contribution [1]. With the exception of part of the hormone data all laboratory data of these studies had been extracted from the Oracle Clinical TM NORMLAB2 system, which delivered complete and standardized lab data, i.e. standardized parameter (lab test) names, values, units and ranges. Subsequently, these NORMLAB2 extracts had been enriched with derived variables and records, following internal data standards and conventions, to form standardized analysis-ready datasets. These were the basis for conversion to SDTM LB datasets. The combined source datasets of the six studies held 124 distinct lab tests, which were mapped to 101 distinct lab controlled terms. Controlled terminology for lab tests is part of the SDTM terminology, which is published on the NCI EVS website [2]. 
New lab test terms have been released for public review through a series of packages [3], starting in 2007. Since version 3.1.2. of the SDTM Implementation Guide [4], the use of SDTM controlled terminology for lab tests is assumed for LBTESTCD and LBTEST (codelists C65047 and C67154). Table 1 provides an overview of the number of lab tests per study in the source data vs. the SDTM datasets (i.e. the number of LBTEST/LBTESTCD codes) and shows how these codes were distributed across different lab test categories. A set of 22 ‘routine safety parameters’ occurred in all four phase III studies (001-004), with 16 tests occurring in all six studies. § Clinical Data Interchange Standards Consortium Study Data Tabulation Model δ National Cancer Institute Enterprise Vocabulary Services",
"title": ""
},
{
"docid": "a7c9d58c49f1802b94395c6f12c2d6dd",
"text": "Signature-based network intrusion detection systems (NIDSs) have been widely deployed in current network security infrastructure. However, these detection systems suffer from some limitations such as network packet overload, expensive signature matching and massive false alarms in a large-scale network environment. In this paper, we aim to develop an enhanced filter mechanism (named EFM) to comprehensively mitigate these issues, which consists of three major components: a context-aware blacklist-based packet filter, an exclusive signature matching component and a KNN-based false alarm filter. The experiments, which were conducted with two data sets and in a network environment, demonstrate that our proposed EFM can overall enhance the performance of a signaturebased NIDS such as Snort in the aspects of packet filtration, signature matching improvement and false alarm reduction without affecting network security. a 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "2d04a311815c8fef8728e4a992d3efac",
"text": "The amidase activities of two Aminobacter sp. strains (DSM24754 and DSM24755) towards the aryl-substituted substrates phenylhydantoin, indolylmethyl hydantoin, D,L-6-phenyl-5,6-dihydrouracil (PheDU) and para-chloro-D,L-6-phenyl-5,6-dihydrouracil were compared. Both strains showed hydantoinase and dihydropyrimidinase activity by hydrolyzing all substrates to the corresponding N-carbamoyl-α- or N-carbamoyl-β-amino acids. However, carbamoylase activity and thus a further degradation of these products to α- and β-amino acids was not detected. Additionally, the genes coding for a dihydropyrimidinase and a carbamoylase of Aminobacter sp. DSM24754 were elucidated. For Aminobacter sp. DSM24755 a dihydropyrimidinase gene flanked by two genes coding for putative ABC transporter proteins was detected. The deduced amino acid sequences of both dihydropyrimidinases are highly similar to the well-studied dihydropyrimidinase of Sinorhizobium meliloti CECT4114. The latter enzyme is reported to accept substituted hydantoins and dihydropyrimidines as substrates. The deduced amino acid sequence of the carbamoylase gene shows a high similarity to the very thermostable enzyme of Pseudomonas sp. KNK003A.",
"title": ""
},
{
"docid": "062f6ecc9d26310de82572f500cb5f05",
"text": "The processes underlying environmental, economic, and social unsustainability derive in part from the food system. Building sustainable food systems has become a predominating endeavor aiming to redirect our food systems and policies towards better-adjusted goals and improved societal welfare. Food systems are complex social-ecological systems involving multiple interactions between human and natural components. Policy needs to encourage public perception of humanity and nature as interdependent and interacting. The systemic nature of these interdependencies and interactions calls for systems approaches and integrated assessment tools. Identifying and modeling the intrinsic properties of the food system that will ensure its essential outcomes are maintained or enhanced over time and across generations, will help organizations and governmental institutions to track progress towards sustainability, and set policies that encourage positive transformations. This paper proposes a conceptual model that articulates crucial vulnerability and resilience factors to global environmental and socio-economic changes, postulating specific food and nutrition security issues as priority outcomes of food systems. By acknowledging the systemic nature of sustainability, this approach allows consideration of causal factor dynamics. In a stepwise approach, a logical application is schematized for three Mediterranean countries, namely Spain, France, and Italy.",
"title": ""
},
{
"docid": "05049ac85552c32f2c98d7249a038522",
"text": "Remote sensing tools are increasingly being used to survey forest structure. Most current methods rely on GPS signals, which are available in above-canopy surveys or in below-canopy surveys of open forests, but may be absent in below-canopy environments of dense forests. We trialled a technology that facilitates mobile surveys in GPS-denied below-canopy forest environments. The platform consists of a battery-powered UAV mounted with a LiDAR. It lacks a GPS or any other localisation device. The vehicle is capable of an 8 min flight duration and autonomous operation but was remotely piloted in the present study. We flew the UAV around a 20 m × 20 m patch of roadside trees and developed postprocessing software to estimate the diameter-at-breast-height (DBH) of 12 trees that were detected by the LiDAR. The method detected 73% of trees greater than 200 mm DBH within 3 m of the flight path. Smaller and more distant trees could not be detected reliably. The UAV-based DBH estimates of detected trees were positively correlated with the humanbased estimates (R = 0.45, p = 0.017) with a median absolute error of 18.1%, a root-meansquare error of 25.1% and a bias of −1.2%. We summarise the main current limitations of this technology and outline potential solutions. The greatest gains in precision could be achieved through use of a localisation device. The long-term factor limiting the deployment of below-canopy UAV surveys is likely to be battery technology.",
"title": ""
},
{
"docid": "a6f2cee851d2c22d471f473caf1710a1",
"text": "One of the main reasons why Byzantine fault-tolerant (BFT) systems are currently not widely used lies in their high resource consumption: <inline-formula><tex-math notation=\"LaTeX\">$3f+1$</tex-math><alternatives> <inline-graphic xlink:type=\"simple\" xlink:href=\"distler-ieq1-2495213.gif\"/></alternatives></inline-formula> replicas are required to tolerate only <inline-formula><tex-math notation=\"LaTeX\">$f$</tex-math><alternatives> <inline-graphic xlink:type=\"simple\" xlink:href=\"distler-ieq2-2495213.gif\"/></alternatives></inline-formula> faults. Recent works have been able to reduce the minimum number of replicas to <inline-formula><tex-math notation=\"LaTeX\">$2f+1$</tex-math> <alternatives><inline-graphic xlink:type=\"simple\" xlink:href=\"distler-ieq3-2495213.gif\"/></alternatives></inline-formula> by relying on trusted subsystems that prevent a faulty replica from making conflicting statements to other replicas without being detected. Nevertheless, having been designed with the focus on fault handling, during normal-case operation these systems still use more resources than actually necessary to make progress in the absence of faults. This paper presents <italic>Resource-efficient Byzantine Fault Tolerance</italic> (<sc>ReBFT</sc>), an approach that minimizes the resource usage of a BFT system during normal-case operation by keeping <inline-formula> <tex-math notation=\"LaTeX\">$f$</tex-math><alternatives><inline-graphic xlink:type=\"simple\" xlink:href=\"distler-ieq4-2495213.gif\"/> </alternatives></inline-formula> replicas in a passive mode. In contrast to active replicas, passive replicas neither participate in the agreement protocol nor execute client requests; instead, they are brought up to speed by verified state updates provided by active replicas. In case of suspected or detected faults, passive replicas are activated in a consistent manner. 
To underline the flexibility of our approach, we apply <sc>ReBFT</sc> to two existing BFT systems: PBFT and MinBFT.",
"title": ""
},
{
"docid": "40dc2dc28dca47137b973757cdf3bf34",
"text": "In this paper we propose a new word-order based graph representation for text. In our graph representation vertices represent words or phrases and edges represent relations between contiguous words or phrases. The graph representation also includes dependency information. Our text representation is suitable for applications involving the identification of relevance or paraphrases across texts, where word-order information would be useful. We show that this word-order based graph representation performs better than a dependency tree representation while identifying the relevance of one piece of text to another.",
"title": ""
},
{
"docid": "58d7e76a4b960e33fc7b541d04825dc9",
"text": "The Internet of Things (IoT) is intended for ubiquitous connectivity among different entities or “things”. While its purpose is to provide effective and efficient solutions, security of the devices and network is a challenging issue. The number of devices connected along with the ad-hoc nature of the system further exacerbates the situation. Therefore, security and privacy has emerged as a significant challenge for the IoT. In this paper, we aim to provide a thorough survey related to the privacy and security challenges of the IoT. This document addresses these challenges from the perspective of technologies and architecture used. This work focuses also in IoT intrinsic vulnerabilities as well as the security challenges of various layers based on the security principles of data confidentiality, integrity and availability. This survey analyzes articles published for the IoT at the time and relates it to the security conjuncture of the field and its projection to the future.",
"title": ""
}
] | scidocsrr |
885084d8bfceb6c2ec9ab84e86f3b502 | Online Controlled Experiments and A / B Tests | [
{
"docid": "c2c056ae22c22e2a87b9eca39d125cc2",
"text": "The web provides an unprecedented opportunity to evaluate ideas quickly using controlled experiments, also called randomized experiments, A/B tests (and their generalizations), split tests, Control/Treatment tests, MultiVariable Tests (MVT) and parallel flights. Controlled experiments embody the best scientific design for establishing a causal relationship between changes and their influence on user-observable behavior. We provide a practical guide to conducting online experiments, where end-users can help guide the development of features. Our experience indicates that significant learning and return-on-investment (ROI) are seen when development teams listen to their customers, not to the Highest Paid Person’s Opinion (HiPPO). We provide several examples of controlled experiments with surprising results. We review the important ingredients of running controlled experiments, and discuss their limitations (both technical and organizational). We focus on several areas that are critical to experimentation, including statistical power, sample size, and techniques for variance reduction. We describe common architectures for experimentation systems and analyze their advantages and disadvantages. We evaluate randomization and hashing techniques, which we show are not as simple in practice as is often assumed. Controlled experiments typically generate large amounts of data, which can be analyzed using data mining techniques to gain deeper understanding of the factors influencing the outcome of interest, leading to new hypotheses and creating a virtuous cycle of improvements. Organizations that embrace controlled experiments with clear evaluation criteria can evolve their systems with automated optimizations and real-time analyses. Based on our extensive practical experience with multiple systems and organizations, we share key lessons that will help practitioners in running trustworthy controlled experiments.",
"title": ""
}
] | [
{
"docid": "8da8ecae2ae9f49135dd3480992069f0",
"text": "In this paper, we investigate the use of decentralized blockchain mechanisms for delivering transparent, secure, reliable, and timely energy flexibility, under the form of adaptation of energy demand profiles of Distributed Energy Prosumers, to all the stakeholders involved in the flexibility markets (Distribution System Operators primarily, retailers, aggregators, etc.). In our approach, a blockchain based distributed ledger stores in a tamper proof manner the energy prosumption information collected from Internet of Things smart metering devices, while self-enforcing smart contracts programmatically define the expected energy flexibility at the level of each prosumer, the associated rewards or penalties, and the rules for balancing the energy demand with the energy production at grid level. Consensus based validation will be used for demand response programs validation and to activate the appropriate financial settlement for the flexibility providers. The approach was validated using a prototype implemented in an Ethereum platform using energy consumption and production traces of several buildings from literature data sets. The results show that our blockchain based distributed demand side management can be used for matching energy demand and production at smart grid level, the demand response signal being followed with high accuracy, while the amount of energy flexibility needed for convergence is reduced.",
"title": ""
},
{
"docid": "528e16d5e3c4f5e7edc77d8e5960ba4f",
"text": "Nowadays, a large amount of documents is generated daily. These documents may contain some spelling errors which should be detected and corrected by using a proofreading tool. Therefore, the existence of automatic writing assistance tools such as spell-checkers/correctors could help to improve their quality. Spelling errors could be categorized into five categories. One of them is real-word errors, which are misspelled words that have been wrongly converted into another word in the language. Detection of such errors requires discourse analysis rather than just checking the word in a dictionary. We propose a discourse-aware discriminative model to improve the results of context-sensitive spell-checkers by reranking their resulted n-best list. We augment the proposed reranker into two existing context-sensitive spell-checker systems; one of them is based on statistical machine translation and the other one is based on language model. We choose the keywords of the whole document as contextual features of the model and improve the results of both systems by employing the features in a log-linear reranker system. We evaluated the system on two different languages: English and Persian. The results of the experiments in English language on the Wall street journal test set show improvements of 4.5% and 5.2% in detection and correction recall, respectively, in comparison to the baseline method. The mentioned improvement on recall metric was achieved with comparable precision. We also achieve state-of-the-art performance on the Persian language. .................................................................................................................................................................................",
"title": ""
},
{
"docid": "94784bc9f04dbe5b83c2a9f02e005825",
"text": "The optical code division multiple access (OCDMA), the most advanced multiple access technology in optical communication has become significant and gaining popularity because of its asynchronous access capability, faster speed, efficiency, security and unlimited bandwidth. Many codes are developed in spectral amplitude coding optical code division multiple access (SAC-OCDMA) with zero or minimum cross-correlation properties to reduce the multiple access interference (MAI) and Phase Induced Intensity Noise (PIIN). This paper compares two novel SAC-OCDMA codes in terms of their performances such as bit error rate (BER), number of active users that is accommodated with minimum cross-correlation property, high data rate that is achievable and the minimum power that the OCDMA system supports to achieve a minimum BER value. One of the proposed novel codes referred in this work as modified random diagonal code (MRDC) possesses cross-correlation between zero to one and the second novel code referred in this work as modified new zero cross-correlation code (MNZCC) possesses cross-correlation zero to further minimize the multiple access interference, which are found to be more scalable compared to the other existing SAC-OCDMA codes. In this work, the proposed MRDC and MNZCC codes are implemented in an optical system using the optisystem version-12 software for the SAC-OCDMA scheme. Simulation results depict that the OCDMA system based on the proposed novel MNZCC code exhibits better performance compared to the MRDC code and former existing SAC-OCDMA codes. The proposed MNZCC code accommodates maximum number of simultaneous users with higher data rate transmission, lower BER and longer traveling distance without any signal quality degradation as compared to the former existing SAC-OCDMA codes.",
"title": ""
},
{
"docid": "b414ed7d896bff259dc975bf16777fa7",
"text": "We propose in this work a general procedure to efficient EM-based design of single-layer SIW interconnects, including their transitions to microstrip lines. Our starting point is developed by exploiting available empirical knowledge for SIW. We propose an efficient SIW surrogate model for direct EM design optimization in two stages: first optimizing the SIW width to achieve the specified low cutoff frequency, followed by the transition optimization to reduce reflections and extend the dominant mode bandwidth. Our procedure is illustrated by designing a SIW interconnect on a standard FR4-based substrate.",
"title": ""
},
{
"docid": "fe70c7614c0414347ff3c8bce7da47e7",
"text": "We explore a model of stress prediction in Russian using a combination of local contextual features and linguisticallymotivated features associated with the word’s stem and suffix. We frame this as a ranking problem, where the objective is to rank the pronunciation with the correct stress above those with incorrect stress. We train our models using a simple Maximum Entropy ranking framework allowing for efficient prediction. An empirical evaluation shows that a model combining the local contextual features and the linguistically-motivated non-local features performs best in identifying both primary and secondary stress.",
"title": ""
},
{
"docid": "cd0f0c4e323a70596320cfa40178d469",
"text": "In this paper we propose a novel, passive approach for detecting and tracking malicious flux service networks. Our detection system is based on passive analysis of recursive DNS (RDNS) traffic traces collected from multiple large networks. Contrary to previous work, our approach is not limited to the analysis of suspicious domain names extracted from spam emails or precompiled domain blacklists. Instead, our approach is able to detect malicious flux service networks in-the-wild, i.e., as they are accessed by users who fall victims of malicious content advertised through blog spam, instant messaging spam, social website spam, etc., beside email spam. We experiment with the RDNS traffic passively collected at two large ISP networks. Overall, our sensors monitored more than 2.5 billion DNS queries per day from millions of distinct source IPs for a period of 45 days. Our experimental results show that the proposed approach is able to accurately detect malicious flux service networks. Furthermore, we show how our passive detection and tracking of malicious flux service networks may benefit spam filtering applications.",
"title": ""
},
{
"docid": "629b63889e43ee1fce3c6c850342428e",
"text": "Purpose – This paper aims to survey the web sites of the academic libraries of the Association of Research Libraries (USA) regarding the adoption of Web 2.0 technologies. Design/methodology/approach – The websites of 100 member academic libraries of the Association of Research Libraries (USA) were surveyed. Findings – All libraries were found to be using various tools of Web 2.0. Blogs, microblogs, RSS, instant messaging, social networking sites, mashups, podcasts, and vodcasts were widely adopted, while wikis, photo sharing, presentation sharing, virtual worlds, customized webpage and vertical search engines were used less. Libraries were using these tools for sharing news, marketing their services, providing information literacy instruction, providing information about print and digital resources, and soliciting feedback of users. Originality/value – The paper is useful for future planning of Web 2.0 use in academic libraries.",
"title": ""
},
{
"docid": "3d93c45e2374a7545c6dff7de0714352",
"text": "Building an interest model is the key to realize personalized text recommendation. Previous interest models neglect the fact that a user may have multiple angles of interest. Different angles of interest provide different requests and criteria for text recommendation. This paper proposes an interest model that consists of two kinds of angles: persistence and pattern, which can be combined to form complex angles. The model uses a new method to represent the long-term interest and the short-term interest, and distinguishes the interest in object and the interest in the link structure of objects. Experiments with news-scale text data show that the interest in object and the interest in link structure have real requirements, and it is effective to recommend texts according to the angles. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "51adc790a11769186958d08179f81ed6",
"text": "Background: Breast cancer is a major public health problem globally. The ongoing epidemiological, socio-cultural\nand demographic transition by accentuating the associated risk factors has disproportionately increased the incidence\nof breast cancer cases and resulting mortality in developing countries like India. Early diagnosis with rapid initiation\nof treatment reduces breast cancer mortality. Therefore awareness of breast cancer risk and a willingness to undergo\nscreening are essential. The objective of the present study was to assess the knowledge and practices relating to screening\nfor breast cancer among women in Delhi. Methods: Data were obtained from 222 adult women using a pretested selfadministered\nquestionnaire. Results: Rates for knowledge of known risk factors of breast cancer were: family history\nof breast cancer, 59.5%; smoking, 57.7%; old age, 56.3%; lack of physical exercise, 51.9%; lack of breastfeeding,\n48.2%; late menopause, 37.4%; and early menarche, 34.7%. Women who were aged < 30 and those who were unmarried\nregistered significantly higher knowledge scores (p ≤ 0.01). Breast self-examination (BSE) was regularly practiced\nat-least once a month by 41.4% of the participants. Some 48% knew mammography has a role in the early detection\nof breast cancer. Since almost three-fourths of the participants believed BSE could help in early diagnosis of breast\ncancer, which is not supported by evidence, future studies should explore the consequences of promoting BSE at the\npotential expense of screening mammography. Conclusion: Our findings highlight the need for awareness generation\namong adult women regarding risk factors and methods for early detection of breast cancer.",
"title": ""
},
{
"docid": "93c24024349853033a60ce06aa2b700e",
"text": "Mines deployed in post-war countries pose severe threats to civilians and hamper the reconstruction effort in war hit societies. In the scope of the EU FP7 TIRAMISU Project, a toolbox for humanitarian demining missions is being developed by the consortium members. In this article we present the FSR Husky, an affordable, lightweight and autonomous all terrain robotic system, developed to assist human demining operation teams. Intended to be easily deployable on the field, our robotic solution has the ultimate goal of keeping humans away from the threat, safeguarding their lives. A detailed description of the modular robotic system architecture is presented, and several real world experiments are carried out to validate the robot’s functionalities and illustrate continuous work in progress on minefield coverage, mine detection, outdoor localization, navigation, and environment perception. © 2015 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "4aee0c91e48b9a34be4591d36103c622",
"text": "We construct a polyhedron that is topologically convex (i.e., has the graph of a convex polyhedron) yet has no vertex unfolding: no matter how we cut along the edges and keep faces attached at vertices to form a connected (hinged) surface, the surface necessarily unfolds with overlap.",
"title": ""
},
{
"docid": "c56d09b3c08f2cb9cc94ace3733b1c54",
"text": "In this paper, we describe our microblog realtime filtering system developed and submitted for the Text Retrieval Conference (TREC 2015) microblog track. We submitted six runs for two tasks related to real-time filtering by using various Information Retrieval (IR), and Machine Learning (ML) techniques to analyze the Twitter sample live stream and match relevant tweets corresponding to specific user interest profiles. Evaluation results demonstrate the effectiveness of our approach as we achieved 3 of the top 7 best scores among automatic submissions across all participants and obtained the best (or close to best) scores in more than 25% of the evaluated topics for the real-time mobile push notification task.",
"title": ""
},
{
"docid": "396f0c39b5afbf6bee2f7168f23ecccb",
"text": "This work describes a method for real-time motion detection using an active camera mounted on a pan-tilt platform. Image mapping is used to align images of different viewpoints so that static camera motion detection can be applied. In the presence of camera position noise, the image mapping is inexact and compensation techniques fail. The use of morphological filtering of motion images is explored to desensitize the detection algorithm to inaccuracies in background compensation. Two motion detection techniques are examined, and experiments to verify the methods are presented. The system successfully extracts moving edges from dynamic images even when the pan-tilt angles between successive frames are as large as 3°.",
"title": ""
},
{
"docid": "e3739a934ecd7b99f2d35a19f2aed5cf",
"text": "We consider distributed algorithms for solving dynamic programming problems whereby several processors participate simultaneously in the computation while maintaining coordination by information exchange via communication links. A model of asynchronous distributed computation is developed which requires very weak assumptions on the ordering of computations, the timing of information exchange, the amount of local information needed at each computation node, and the initial conditions for the algorithm. The class of problems considered is very broad and includes shortest path problems, and finite and infinite horizon stochastic optimal control problems. When specialized to a shortest path problem the algorithm reduces to the algorithm originally implemented for routing of messages in the ARPANET.",
"title": ""
},
{
"docid": "4f3177b303b559f341b7917683114257",
"text": "We investigate the integration of a planning mechanism into sequence-to-sequence models using attention. We develop a model which can plan ahead in the future when it computes its alignments between input and output sequences, constructing a matrix of proposed future alignments and a commitment vector that governs whether to follow or recompute the plan. This mechanism is inspired by the recently proposed strategic attentive reader and writer (STRAW) model for Reinforcement Learning. Our proposed model is end-to-end trainable using primarily differentiable operations. We show that it outperforms a strong baseline on character-level translation tasks from WMT’15, the algorithmic task of finding Eulerian circuits of graphs, and question generation from the text. Our analysis demonstrates that the model computes qualitatively intuitive alignments, converges faster than the baselines, and achieves superior performance with fewer parameters.",
"title": ""
},
{
"docid": "cb8ffb03187583308eb8409d75a54172",
"text": "Active Traffic Management (ATM) systems have been introduced by transportation agencies to manage recurrent and non-recurrent congestion. ATM systems rely on the interconnectivity of components made possible by wired and/or wireless networks. Unfortunately, this connectivity that supports ATM systems also provides potential system access points that result in vulnerability to cyberattacks. This is becoming more pronounced as ATM systems begin to integrate internet of things (IoT) devices. Hence, there is a need to rigorously evaluate ATM systems for cyberattack vulnerabilities, and explore design concepts that provide stability and graceful degradation in the face of cyberattacks. In this research, a prototype ATM system along with a real-time cyberattack monitoring system were developed for a 1.5-mile section of I-66 in Northern Virginia. The monitoring system detects deviation from expected operation of an ATM system by comparing lane control states generated by the ATM system with lane control states deemed most likely by the monitoring system. This comparison provides the functionality to continuously monitor the system for abnormalities that would result from a cyberattack. In case of any deviation between two sets of states, the monitoring system displays the lane control states generated by the back-up data source. In a simulation experiment, the prototype ATM system and cyberattack monitoring system were subject to emulated cyberattacks. The evaluation results showed that the ATM system, when operating properly in the absence of attacks, improved average vehicle speed in the system to 60 mph (a 13% increase compared to the baseline case without ATM). However, when subject to cyberattack, the mean speed reduced by 15% compared to the case with the ATM system and was similar to the baseline case. This illustrates that the effectiveness of the ATM system was negated by cyberattacks. The monitoring system, however, allowed the ATM system to revert to an expected state with a mean speed of 59 mph and reduced the negative impact of cyberattacks. These results illustrate the need to revisit ATM system design concepts as a means to protect against cyberattacks in addition to traditional system intrusion prevention approaches.",
"title": ""
},
{
"docid": "9c507a2b1f57750d1b4ffeed6979a06f",
"text": "Once considered provocative, the notion that the wisdom of the crowd is superior to any individual has become itself a piece of crowd wisdom, leading to speculation that online voting may soon put credentialed experts out of business. Recent applications include political and economic forecasting, evaluating nuclear safety, public policy, the quality of chemical probes, and possible responses to a restless volcano. Algorithms for extracting wisdom from the crowd are typically based on a democratic voting procedure. They are simple to apply and preserve the independence of personal judgment. However, democratic methods have serious limitations. They are biased for shallow, lowest common denominator information, at the expense of novel or specialized knowledge that is not widely shared. Adjustments based on measuring confidence do not solve this problem reliably. Here we propose the following alternative to a democratic vote: select the answer that is more popular than people predict. We show that this principle yields the best answer under reasonable assumptions about voter behaviour, while the standard ‘most popular’ or ‘most confident’ principles fail under exactly those same assumptions. Like traditional voting, the principle accepts unique problems, such as panel decisions about scientific or artistic merit, and legal or historical disputes. The potential application domain is thus broader than that covered by machine learning and psychometric methods, which require data across multiple questions.",
"title": ""
},
{
"docid": "640ba15172b56373b3a6bdfe9f5f6cd4",
"text": "This work considers the problem of learning cooperative policies in complex, partially observable domains without explicit communication. We extend three classes of single-agent deep reinforcement learning algorithms based on policy gradient, temporal-difference error, and actor-critic methods to cooperative multi-agent systems. To effectively scale these algorithms beyond a trivial number of agents, we combine them with a multi-agent variant of curriculum learning. The algorithms are benchmarked on a suite of cooperative control tasks, including tasks with discrete and continuous actions, as well as tasks with dozens of cooperating agents. We report the performance of the algorithms using different neural architectures, training procedures, and reward structures. We show that policy gradient methods tend to outperform both temporal-difference and actor-critic methods and that curriculum learning is vital to scaling reinforcement learning algorithms in complex multiagent domains.",
"title": ""
},
{
"docid": "04cdcf2234ffaafbd24eb20fb584cf5d",
"text": "Grice (1957) drew a famous distinction between natural (N) and non-natural (NN) meaning, where what is meant (NN) is broadly equivalent to what is intentionally communicated. This paper argues that Grice’s dichotomy overlooks the fact that spontaneously occurring natural signs may be intentionally shown, and hence used in intentional communication. It also argues that some naturally occurring behaviours have a signalling function, and that the existence of such natural codes provides further evidence that Grice’s original distinction was not exhaustive. The question of what kind of information, in cognitive terms, these signals encode is also examined.",
"title": ""
},
{
"docid": "e7bf372840efea55c632afd96840212d",
"text": "The purpose of this systematic analysis of nursing simulation literature between 2000 and 2007 was to determine how learning theory was used to design and assess learning that occurs in simulations. Out of the 120 articles in which designing nursing simulations was reported, 16 referenced learning or developmental theory as the basis of how and why they set up the simulation. Of the 16 articles that used a learning type of foundation, only two considered learning as a cognitive task. More research is needed that investigates the efficacy of simulation for improving student learning. The study concludes that most nursing faculty approach simulation from a teaching paradigm rather than a learning paradigm. For simulation to foster student learning there must be a fundamental shift from a teaching paradigm to a learning paradigm, and a foundational learning theory should be used to design and evaluate simulation. Examples of how to match simulation with learning theory are included.",
"title": ""
}
] | scidocsrr |
ad11557e120de6ea0d14b61f7169719b | Learning-by-Synthesis for Appearance-Based 3D Gaze Estimation | [
{
"docid": "6298ab25b566616b0f3c1f6ee8889d19",
"text": "This paper addresses the problem of free gaze estimation under unrestricted head motion. More precisely, unlike previous approaches that mainly focus on estimating gaze towards a small planar screen, we propose a method to estimate the gaze direction in the 3D space. In this context the paper makes the following contributions: (i) leveraging on the Kinect device, we propose a multimodal method that relies on depth sensing to obtain robust and accurate head pose tracking even under large head pose, and on the visual data to obtain the remaining eye-in-head gaze directional information from the eye image; (ii) a rectification scheme of the image that exploits the 3D mesh tracking, allowing to conduct a head pose free eye-in-head gaze directional estimation; (iii) a simple way of collecting ground truth data thanks to the Kinect device. Results on three users demonstrate the great potential of our approach.",
"title": ""
}
] | [
{
"docid": "1f355bd6b46e16c025ba72aa9250c61d",
"text": "Whole-cell biosensors have several advantages for the detection of biological substances and have proven to be useful analytical tools. However, several hurdles have limited whole-cell biosensor application in the clinic, primarily their unreliable operation in complex media and low signal-to-noise ratio. We report that bacterial biosensors with genetically encoded digital amplifying genetic switches can detect clinically relevant biomarkers in human urine and serum. These bactosensors perform signal digitization and amplification, multiplexed signal processing with the use of Boolean logic gates, and data storage. In addition, we provide a framework with which to quantify whole-cell biosensor robustness in clinical samples together with a method for easily reprogramming the sensor module for distinct medical detection agendas. Last, we demonstrate that bactosensors can be used to detect pathological glycosuria in urine from diabetic patients. These next-generation whole-cell biosensors with improved computing and amplification capacity could meet clinical requirements and should enable new approaches for medical diagnosis.",
"title": ""
},
{
"docid": "36da2b6102762c80b3ae8068d764e220",
"text": "Video games have become an essential part of the way people play and learn. While an increasing number of people are using games to learn in informal environments, their acceptance in the classroom as an instructional activity has been mixed. Successes in informal learning have caused supporters to falsely believe that implementing them into the classroom would be a relatively easy transition and have the potential to revolutionise the entire educational system. In spite of all the hype, many are puzzled as to why more teachers have not yet incorporated them into their teaching. The literature is littered with reports that point to a variety of reasons. One of the reasons, we believe, is that very little has been done to convince teachers that the effort to change their curriculum to integrate video games and other forms of technology is worthy of the effort. Not until policy makers realise the importance of professional development and training as an important use of funds will positive changes in thinking and perceptions come about, which will allow these various forms of technology to reach their potential. The authors have hypothesised that the major impediments to useful technology integration include the general lack of institutional infrastructure, poor teacher training, and overly-complicated technologies. Overcoming these obstacles requires both a top-down and a bottom-up approach. This paper presents the results of a pilot study with a group of preservice teachers to determine whether our hypotheses regarding potential negativity surrounding video games were valid and whether a wider scale study is warranted. 
The results of this study are discussed along with suggestions for further research and potential changes in teacher training programmes. Introduction Over the past 40 years, video games have become an increasingly popular way to play and learn. Those who play regularly often note that the major attraction is their ability to become quickly engaged and immersed in gameplay (Lenhart & Kayne, 2008). Many have taken notice of video games’ apparent effectiveness in teaching social interaction and critical thinking in informal learning environments. Beliefs about the effectiveness of video games in informal learning situations have been hyped to the extent that they are often described as the ‘holy grail’ that will revolutionise our entire educational system (Gee, 2003; Kirkley & Kirkley, 2004; Prensky, 2001; Sawyer, 2002). In spite of all the hype and promotion, many educators express puzzlement and disappointment that only a modest number of teachers have incorporated video games into their teaching (Egenfeldt-Nielsen, 2004; Pivec & Pivec, 2008). These results seem to mirror those reported on a general lack of successful integration on the part of teachers and educators of new technologies and media in general. The reasons reported in that research point to a varied and complex issue that involves dispelling preconceived notions, prejudices, and concerns (Kati, 2008; Kim & Baylor, 2008). It is our position that very little has been done to date to overcome these objections. We agree with Magliaro and Ezeife (2007) who posited that teachers can and do greatly influence the successes or failures of classroom interventions. Expenditures on media and technology alone do not guarantee their successful or productive use in the classroom. Policy makers need to realise that professional development and training is the most significant use of funds that will positively affect teaching styles and that will allow technology to reach its potential to change education. 
But as Cuban, Kirkpatrick and Peck (2001) noted, the practices of policy makers and administrators to increase the effective use of technologies in the classroom more often than not conflict with implementation. In their qualitative study of two Silicon Valley high schools, the authors found that despite ready access to computer technologies, only a handful of teachers actually changed their teaching practices (ie, moved from teacher-centered to student-centered pedagogies). Furthermore, the authors identified several barriers to technological innovation in the classroom, including most notably: a lack of preparation time, poor technical support, outdated technologies, and the inability to sustain interest in the particular lessons and a lack of opportunities for collaboration due to the rigid structure and short time periods allocated to instruction. The authors concluded by suggesting that the path for integrating technology would eventually flourish, but that it initially would be riddled with problems caused by impediments placed upon its success by a lack of institutional infrastructure, poor training, and overly-complicated technologies. We agree with those who suggest that any proposed classroom intervention correlates directly to the expectations and perceived value/benefit on the part of the integrating teachers, who largely control what and how their students learn (Hanusheck, Kain & Rivkin, 1998). Faced with these significant obstacles, it should not be surprising that video games, like other technologies, have been less than successful in transforming the classroom. We further suggest that overcoming these obstacles requires both a top-down and a bottom-up approach. 
Policy makers carry the burden of correcting the infrastructural issues both for practical reasons as well as for creating optimism on the part of teachers to believe that their administrators actually support their decisions. On the other hand, anyone associated with educational systems for any length of time will agree that a top-down only approach is destined for failure. The successful adoption of any new classroom intervention is based, in larger part, on teachers’ investing in the belief that the experience is worth the effort. If a teacher sees little or no value in an intervention, or is unfamiliar with its use, then the chances that it will be properly implemented are minimised. In other words, a teacher’s adoption of any instructional strategy is directly correlated with his or her views, ideas, and expectations about what is possible, feasible, and useful. In their studies into the game playing habits of various college students, Shaffer, Squire and Gee (2005) alluded to the fact that of those that they interviewed, future teachers indicated that they did not play video games as often as those enrolled in other majors. Our review of these comments generated several additional research questions that we believe deserve further investigation. We began to hypothesise that if it were true that teachers, as a group, do not in fact play video games on a regular basis, it should not be surprising that they would have difficulty integrating games into their curriculum. They would not have sufficient basis to integrate the rules of gameplay with their instructional strategies, nor would they be able to make proper assessments as to which games might be the most effective. We understand that one does not have to actually like something or be good at something to appreciate its value. For example, one does not necessarily have to be a fan of rap music or have a knack for performing it to understand that it could be a useful teaching tool. 
But, on the other hand, we wondered whether the attitudes towards video games on the part of teachers were not merely neutral, but in fact actually negative, which would further undermine any attempts at successfully introducing games into their classrooms. This paper presents the results of a pilot study we conducted that utilised a group of preservice teachers to determine whether our hypothesis regarding potential negativity surrounding video games was valid and whether a wider scale study is warranted. In this examination, we utilised a preference survey to ask participants to reveal their impressions and expectancies about video games in general, their playing habits, and their personal assessments as to the potential role games might play in their future teaching strategies. We believe that the results we found are useful in determining ramifications for some potential changes in teacher preparation and professional development programmes. They provide more background on the kinds of learning that can take place, as described by Prensky (2001), Gee (2003) and others, they consider how to evaluate supposed educational games that exist in the market, and they suggest successful integration strategies. Just as no one can assume that digital kids already have expertise in participatory learning simply because they are exposed to these experiences in their informal, outside of school activities, those responsible for teacher training cannot assume that just because up-and-coming teachers have been brought up in the digital age, they are automatically familiar with, disposed to using, and have positive ideas about how games can be integrated into their curriculum. As a case in point, we found that there exists a significant disconnect between teachers and their students regarding the value of gameplay, and whether one can efficiently and effectively learn from games. 
In this study, we also attempted to determine if there might be an interaction effect based on the type of console being used. We wanted to confirm Pearson and Bailey’s (2008) assertions that the Nintendo Wii (Nintendo Company, Ltd. 11-1 KamitobaHokodate-cho, Minami-ku, Kyoto 601-8501, Japan) consoles would not only promote improvements in physical move",
"title": ""
},
{
"docid": "8e65001ed1e4a3994a95df2626ff4d89",
"text": "The most popular metric distance used in iris code matching is Hamming distance. In this paper, we improve the performance of iris code matching stage by applying adaptive Hamming distance. Proposed method works with Hamming subsets with adaptive length. Based on density of masked bits in the Hamming subset, each subset is able to expand and adjoin to the right or left neighbouring bits. The adaptive behaviour of Hamming subsets increases the accuracy of Hamming distance computation and improves the performance of iris code matching. Results of applying proposed method on Chinese Academy of Science Institute of Automation, CASIA V3.3 shows performance of 99.96% and false rejection rate 0.06.",
"title": ""
},
{
"docid": "868fe4091a136f16f6844e8739b65902",
"text": "This paper uses an ant colony meta-heuristic optimization method to solve the redundancy allocation problem (RAP). The RAP is a well known NP-hard problem which has been the subject of much prior work, generally in a restricted form where each subsystem must consist of identical components in parallel to make computations tractable. Meta-heuristic methods overcome this limitation, and offer a practical way to solve large instances of the relaxed RAP where different components can be placed in parallel. The ant colony method has not yet been used in reliability design, yet it is a method that is expressly designed for combinatorial problems with a neighborhood structure, as in the case of the RAP. An ant colony optimization algorithm for the RAP is devised & tested on a well-known suite of problems from the literature. It is shown that the ant colony method performs with little variability over problem instance or random number seed. It is competitive with the best-known heuristics for redundancy allocation.",
"title": ""
},
{
"docid": "ef3ac22e7d791113d08fd778a79008c3",
"text": "Great efforts have been dedicated to harvesting knowledge bases from online encyclopedias. These knowledge bases play important roles in enabling machines to understand texts. However, most current knowledge bases are in English and non-English knowledge bases, especially Chinese ones, are still very rare. Many previous systems that extract knowledge from online encyclopedias, although are applicable for building a Chinese knowledge base, still suffer from two challenges. The first is that it requires great human efforts to construct an ontology and build a supervised knowledge extraction model. The second is that the update frequency of knowledge bases is very slow. To solve these challenges, we propose a never-ending Chinese Knowledge extraction system, CN-DBpedia, which can automatically generate a knowledge base that is of ever-increasing in size and constantly updated. Specially, we reduce the human costs by reusing the ontology of existing knowledge bases and building an end-to-end facts extraction model. We further propose a smart active update strategy to keep the freshness of our knowledge base with little human costs. The 164 million API calls of the published services justify the success of our system.",
"title": ""
},
{
"docid": "bc4a72d96daf03f861b187fa73f57ff6",
"text": "BACKGROUND\nShort-term preoperative radiotherapy and total mesorectal excision have each been shown to improve local control of disease in patients with resectable rectal cancer. We conducted a multicenter, randomized trial to determine whether the addition of preoperative radiotherapy increases the benefit of total mesorectal excision.\n\n\nMETHODS\nWe randomly assigned 1861 patients with resectable rectal cancer either to preoperative radiotherapy (5 Gy on each of five days) followed by total mesorectal excision (924 patients) or to total mesorectal excision alone (937 patients). The trial was conducted with the use of standardization and quality-control measures to ensure the consistency of the radiotherapy, surgery, and pathological techniques.\n\n\nRESULTS\nOf the 1861 patients randomly assigned to one of the two treatment groups, 1805 were eligible to participate. The overall rate of survival at two years among the eligible patients was 82.0 percent in the group assigned to both radiotherapy and surgery and 81.8 percent in the group assigned to surgery alone (P=0.84). Among the 1748 patients who underwent a macroscopically complete local resection, the rate of local recurrence at two years was 5.3 percent. The rate of local recurrence at two years was 2.4 percent in the radiotherapy-plus-surgery group and 8.2 percent in the surgery-only group (P<0.001).\n\n\nCONCLUSIONS\nShort-term preoperative radiotherapy reduces the risk of local recurrence in patients with rectal cancer who undergo a standardized total mesorectal excision.",
"title": ""
},
{
"docid": "ad80f2e78e80397bd26dac5c0500266c",
"text": "The group Lasso is an extension of the Lasso for feature selection on (predefined) nonoverlapping groups of features. The nonoverlapping group structure limits its applicability in practice. There have been several recent attempts to study a more general formulation where groups of features are given, potentially with overlaps between the groups. The resulting optimization is, however, much more challenging to solve due to the group overlaps. In this paper, we consider the efficient optimization of the overlapping group Lasso penalized problem. We reveal several key properties of the proximal operator associated with the overlapping group Lasso, and compute the proximal operator by solving the smooth and convex dual problem, which allows the use of the gradient descent type of algorithms for the optimization. Our methods and theoretical results are then generalized to tackle the general overlapping group Lasso formulation based on the ℓq norm. We further extend our algorithm to solve a nonconvex overlapping group Lasso formulation based on the capped norm regularization, which reduces the estimation bias introduced by the convex penalty. We have performed empirical evaluations using both a synthetic and the breast cancer gene expression dataset, which consists of 8,141 genes organized into (overlapping) gene sets. Experimental results show that the proposed algorithm is more efficient than existing state-of-the-art algorithms. Results also demonstrate the effectiveness of the nonconvex formulation for overlapping group Lasso.",
"title": ""
},
{
"docid": "65a4197d7f12c320a34fdd7fcac556af",
"text": "The article presents an overview of current specialized ontology engineering tools, as well as texts’ annotation tools based on ontologies. The main functions and features of these tools, their advantages and disadvantages are discussed. A systematic comparative analysis of means for engineering ontologies is presented. ACM Classification",
"title": ""
},
{
"docid": "43a7e786704b5347f3b67c08ac9c4f70",
"text": "Before beginning any robot task, users must position the robot's base, a task that now depends entirely on user intuition. While slight perturbation is tolerable for robots with moveable bases, correcting the problem is imperative for fixed- base robots if some essential task sections are out of reach. For mobile manipulation robots, it is necessary to decide on a specific base position before beginning manipulation tasks. This paper presents Reuleaux, an open source library for robot reachability analyses and base placement. It reduces the amount of extra repositioning and removes the manual work of identifying potential base locations. Based on the reachability map, base placement locations of a whole robot or only the arm can be efficiently determined. This can be applied to both statically mounted robots, where the position of the robot and workpiece ensure the maximum amount of work performed, and to mobile robots, where the maximum amount of workable area can be reached. The methods were tested on different robots of different specifications and evaluated for tasks in simulation and real world environment. Evaluation results indicate that Reuleaux had significantly improved performance than prior existing methods in terms of time-efficiency and range of applicability.",
"title": ""
},
{
"docid": "0d25072b941ee3e8690d9bd274623055",
"text": "The task of tracking multiple targets is often addressed with the so-called tracking-by-detection paradigm, where the first step is to obtain a set of target hypotheses for each frame independently. Tracking can then be regarded as solving two separate, but tightly coupled problems. The first is to carry out data association, i.e., to determine the origin of each of the available observations. The second problem is to reconstruct the actual trajectories that describe the spatio-temporal motion pattern of each individual target. The former is inherently a discrete problem, while the latter should intuitively be modeled in continuous space. Having to deal with an unknown number of targets, complex dependencies, and physical constraints, both are challenging tasks on their own and thus most previous work focuses on one of these subproblems. Here, we present a multi-target tracking approach that explicitly models both tasks as minimization of a unified discrete-continuous energy function. Trajectory properties are captured through global label costs, a recent concept from multi-model fitting, which we introduce to tracking. Specifically, label costs describe physical properties of individual tracks, e.g., linear and angular dynamics, or entry and exit points. We further introduce pairwise label costs to describe mutual interactions between targets in order to avoid collisions. By choosing appropriate forms for the individual energy components, powerful discrete optimization techniques can be leveraged to address data association, while the shapes of individual trajectories are updated by gradient-based continuous energy minimization. The proposed method achieves state-of-the-art results on diverse benchmark sequences.",
"title": ""
},
{
"docid": "bdd1c64962bfb921762259cca4a23aff",
"text": "Ever since the emergence of social networking sites (SNSs), it has remained a question without a conclusive answer whether SNSs make people more or less lonely. To achieve a better understanding, researchers need to move beyond studying overall SNS usage. In addition, it is necessary to attend to personal attributes as potential moderators. Given that SNSs provide rich opportunities for social comparison, one highly relevant personality trait would be social comparison orientation (SCO), and yet this personal attribute has been understudied in social media research. Drawing on literature of psychosocial implications of social media use and SCO, this study explored associations between loneliness and various Instagram activities and the role of SCO in this context. A total of 208 undergraduate students attending a U.S. mid-southern university completed a self-report survey (Mage = 19.43, SD = 1.35; 78 percent female; 57 percent White). Findings showed that Instagram interaction and Instagram browsing were both related to lower loneliness, whereas Instagram broadcasting was associated with higher loneliness. SCO moderated the relationship between Instagram use and loneliness such that Instagram interaction was related to lower loneliness only for low SCO users. The results revealed implications for healthy SNS use and the importance of including personality traits and specific SNS use patterns to disentangle the role of SNS use in psychological well-being.",
"title": ""
},
{
"docid": "3072b7d80b0e9afffe6489996eca19aa",
"text": "Brain tumors can appear anywhere in the brain and have vastly different sizes and morphology. Additionally, these tumors are often diffused and poorly contrasted. Consequently, the segmentation of brain tumor and intratumor subregions using magnetic resonance imaging (MRI) data with minimal human interventions remains a challenging task. In this paper, we present a novel fully automatic segmentation method from MRI data containing in vivo brain gliomas. This approach can not only localize the entire tumor region but can also accurately segment the intratumor structure. The proposed work was based on a cascaded deep learning convolutional neural network consisting of two subnetworks: (1) a tumor localization network (TLN) and (2) an intratumor classification network (ITCN). The TLN, a fully convolutional network (FCN) in conjunction with the transfer learning technology, was used to first process MRI data. The goal of the first subnetwork was to define the tumor region from an MRI slice. Then, the ITCN was used to label the defined tumor region into multiple subregions. Particularly, ITCN exploited a convolutional neural network (CNN) with deeper architecture and smaller kernel. The proposed approach was validated on multimodal brain tumor segmentation (BRATS 2015) datasets, which contain 220 high-grade glioma (HGG) and 54 low-grade glioma (LGG) cases. Dice similarity coefficient (DSC), positive predictive value (PPV), and sensitivity were used as evaluation metrics. Our experimental results indicated that our method could obtain the promising segmentation results and had a faster segmentation speed. More specifically, the proposed method obtained comparable and overall better DSC values (0.89, 0.77, and 0.80) on the combined (HGG + LGG) testing set, as compared to other methods reported in the literature. Additionally, the proposed approach was able to complete a segmentation task at a rate of 1.54 seconds per slice.",
"title": ""
},
{
"docid": "8f1a5420deb75a2b664ceeaae8fc03f9",
"text": "A stretchable and multiple-force-sensitive electronic fabric based on stretchable coaxial sensor electrodes is fabricated for artificial-skin application. This electronic fabric, with only one kind of sensor unit, can simultaneously map and quantify the mechanical stresses induced by normal pressure, lateral strain, and flexion.",
"title": ""
},
{
"docid": "c2fc709aeb4c48a3bd2071b4693d4296",
"text": "Following the success of deep convolutional networks, state-of-the-art methods for 3d human pose estimation have focused on deep end-to-end systems that predict 3d joint locations given raw image pixels. Despite their excellent performance, it is often not easy to understand whether their remaining error stems from a limited 2d pose (visual) understanding, or from a failure to map 2d poses into 3- dimensional positions.,,With the goal of understanding these sources of error, we set out to build a system that given 2d joint locations predicts 3d positions. Much to our surprise, we have found that, with current technology, \"lifting\" ground truth 2d joint locations to 3d space is a task that can be solved with a remarkably low error rate: a relatively simple deep feedforward network outperforms the best reported result by about 30% on Human3.6M, the largest publicly available 3d pose estimation benchmark. Furthermore, training our system on the output of an off-the-shelf state-of-the-art 2d detector (i.e., using images as input) yields state of the art results – this includes an array of systems that have been trained end-to-end specifically for this task. Our results indicate that a large portion of the error of modern deep 3d pose estimation systems stems from their visual analysis, and suggests directions to further advance the state of the art in 3d human pose estimation.",
"title": ""
},
{
"docid": "a17818c54117d502c696abb823ba5a6b",
"text": "The next generation of multimedia services have to be optimized in a personalized way, taking user factors into account for the evaluation of individual experience. Previous works have investigated the influence of user factors mostly in a controlled laboratory environment which often includes a limited number of users and fails to reflect real-life environment. Social media, especially Facebook, provide an interesting alternative for Internet-based subjective evaluation. In this article, we develop (and open-source) a Facebook application, named YouQ1, as an experimental platform for studying individual experience for videos. Our results show that subjective experiments based on YouQ can produce reliable results as compared to a controlled laboratory experiment. Additionally, YouQ has the ability to collect user information automatically from Facebook, which can be used for modeling individual experience.",
"title": ""
},
{
"docid": "5d80fa7763fd815e4e9530bc1a99b5d0",
"text": "This paper introduces a new email dataset, consisting of both single and thread emails, manually annotated with summaries and keywords. A total of 349 emails and threads have been annotated. The dataset is our first step toward developing automatic methods for summarization and keyword extraction from emails. We describe the email corpus, along with the annotation interface, annotator guidelines, and agreement studies.",
"title": ""
},
{
"docid": "9a4dab93461185ea98ccea7733081f73",
"text": "This article discusses two standards operating on principles of cognitive radio in television white space (TV WS) frequencies 802.22and 802.11af. The comparative analysis of these systems will be presented and the similarities as well as the differences among these two perspective standards will be discussed from the point of view of physical (PHY), medium access control (MAC) and cognitive layers.",
"title": ""
},
{
"docid": "569fed958b7a471e06ce718102687a1e",
"text": "The introduction of convolutional layers greatly advanced the performance of neural networks on image tasks due to innately capturing a way of encoding and learning translation-invariant operations, matching one of the underlying symmetries of the image domain. In comparison, there are a number of problems in which there are a number of different inputs which are all ’of the same type’ — multiple particles, multiple agents, multiple stock prices, etc. The corresponding symmetry to this is permutation symmetry, in that the algorithm should not depend on the specific ordering of the input data. We discuss a permutation-invariant neural network layer in analogy to convolutional layers, and show the ability of this architecture to learn to predict the motion of a variable number of interacting hard discs in 2D. In the same way that convolutional layers can generalize to different image sizes, the permutation layer we describe generalizes to different numbers of objects.",
"title": ""
},
{
"docid": "48a0e75b97fdaa734f033c6b7791e81f",
"text": "OBJECTIVE\nTo examine the role of physical activity, inactivity, and dietary patterns on annual weight changes among preadolescents and adolescents, taking growth and development into account.\n\n\nSTUDY DESIGN\nWe studied a cohort of 6149 girls and 4620 boys from all over the United States who were 9 to 14 years old in 1996. All returned questionnaires in the fall of 1996 and a year later in 1997. Each child provided his or her current height and weight and a detailed assessment of typical past-year dietary intakes, physical activities, and recreational inactivities (TV, videos/VCR, and video/computer games).\n\n\nMETHODS\nOur hypotheses were that physical activity and dietary fiber intake are negatively correlated with annual changes in adiposity and that recreational inactivity (TV/videos/games), caloric intake, and dietary fat intake are positively correlated with annual changes in adiposity. Separately for boys and girls, we performed regression analysis of 1-year change in body mass index (BMI; kg/m(2)). All hypothesized factors were in the model simultaneously with several adjustment factors.\n\n\nRESULTS\nLarger increases in BMI from 1996 to 1997 were among girls who reported higher caloric intakes (.0061 +/-.0026 kg/m(2) per 100 kcal/day; beta +/- standard error), less physical activity (-.0284 +/-.0142 kg/m(2)/hour/day) and more time with TV/videos/games (.0372 +/-.0106 kg/m(2)/hour/day) during the year between the 2 BMI assessments. Larger BMI increases were among boys who reported more time with TV/videos/games (.0384 +/-.0101) during the year. For both boys and girls, a larger rise in caloric intake from 1996 to 1997 predicted larger BMI increases (girls:.0059 +/-.0027 kg/m(2) per increase of 100 kcal/day; boys:.0082 +/-.0030). No significant associations were noted for energy-adjusted dietary fat or fiber.\n\n\nCONCLUSIONS\nFor both boys and girls, a 1-year increase in BMI was larger in those who reported more time with TV/videos/games during the year between the 2 BMI measurements, and in those who reported that their caloric intakes increased more from 1 year to the next. Larger year-to-year increases in BMI were also seen among girls who reported higher caloric intakes and less physical activity during the year between the 2 BMI measurements. Although the magnitudes of these estimated effects were small, their cumulative effects, year after year during adolescence, would produce substantial gains in body weight. Strategies to prevent excessive caloric intakes, to decrease time with TV/videos/games, and to increase physical activity would be promising as a means to prevent obesity.",
"title": ""
},
{
"docid": "cf95d41dc5a2bcc31b691c04e3fb8b96",
"text": "Resection of pancreas, in particular pancreaticoduodenectomy, is a complex procedure, commonly performed in appropriately selected patients with benign and malignant disease of the pancreas and periampullary region. Despite significant improvements in the safety and efficacy of pancreatic surgery, pancreaticoenteric anastomosis continues to be the \"Achilles heel\" of pancreaticoduodenectomy, due to its association with a measurable risk of leakage or failure of healing, leading to pancreatic fistula. The morbidity rate after pancreaticoduodenectomy remains high in the range of 30% to 65%, although the mortality has significantly dropped to below 5%. Most of these complications are related to pancreatic fistula, with serious complications of intra-abdominal abscess, postoperative bleeding, and multiorgan failure. Several pharmacological and technical interventions have been suggested to decrease the pancreatic fistula rate, but the results have been controversial. This paper considers definition and classification of pancreatic fistula, risk factors, and preventive approach and offers management strategy when they do occur.",
"title": ""
}
] | scidocsrr |
ced2236fd03478cdab09c79e822799e3 | What recommenders recommend: an analysis of recommendation biases and possible countermeasures | [
{
"docid": "b41a8bbd52a0c6a25cb1a102eb5a2f8b",
"text": "Although the broad social and business success of recommender systems has been achieved across several domains, there is still a long way to go in terms of user satisfaction. One of the key dimensions for significant improvement is the concept of unexpectedness. In this article, we propose a method to improve user satisfaction by generating unexpected recommendations based on the utility theory of economics. In particular, we propose a new concept of unexpectedness as recommending to users those items that depart from what they would expect from the system - the consideration set of each user. We define and formalize the concept of unexpectedness and discuss how it differs from the related notions of novelty, serendipity, and diversity. In addition, we suggest several mechanisms for specifying the users’ expectations and propose specific performance metrics to measure the unexpectedness of recommendation lists. We also take into consideration the quality of recommendations using certain utility functions and present an algorithm for providing users with unexpected recommendations of high quality that are hard to discover but fairly match their interests. Finally, we conduct several experiments on “real-world” datasets and compare our recommendation results with other methods. The proposed approach outperforms these baseline methods in terms of unexpectedness and other important metrics, such as coverage, aggregate diversity and dispersion, while avoiding any accuracy loss.",
"title": ""
},
{
"docid": "e88ad42145c63dd2aeff6c1f64f4b4c7",
"text": "Recommender systems are in the center of network science, and they are becoming increasingly important in individual businesses for providing efficient, personalized services and products to users. Previous research in the field of recommendation systems focused on improving the precision of the system through designing more accurate recommendation lists. Recently, the community has been paying attention to diversity and novelty of recommendation lists as key characteristics of modern recommender systems. In many cases, novelty and precision do not go hand in hand, and the accuracy--novelty dilemma is one of the challenging problems in recommender systems, which needs efforts in making a trade-off between them.\n In this work, we propose an algorithm for providing novel and accurate recommendation to users. We consider the standard definition of accuracy and an effective self-information--based measure to assess novelty of the recommendation list. The proposed algorithm is based on item popularity, which is defined as the number of votes received in a certain time interval. Wavelet transform is used for analyzing popularity time series and forecasting their trend in future timesteps. We introduce two filtering algorithms based on the information extracted from analyzing popularity time series of the items. The popularity-based filtering algorithm gives a higher chance to items that are predicted to be popular in future timesteps. The other algorithm, denoted as a novelty and population-based filtering algorithm, is to move toward items with low popularity in past timesteps that are predicted to become popular in the future. The introduced filters can be applied as adds-on to any recommendation algorithm. In this article, we use the proposed algorithms to improve the performance of classic recommenders, including item-based collaborative filtering and Markov-based recommender systems. The experiments show that the algorithms could significantly improve both the accuracy and effective novelty of the classic recommenders.",
"title": ""
}
] | [
{
"docid": "8ae1ef032c0a949aa31b3ca8bc024cb5",
"text": "Measuring intellectual capital is on the agenda of most 21st century organisations. This paper takes a knowledge-based view of the firm and discusses the importance of measuring organizational knowledge assets. Knowledge assets underpin capabilities and core competencies of any organisation. Therefore, they play a key strategic role and need to be measured. This paper reviews the existing approaches for measuring knowledge based assets and then introduces the knowledge asset map which integrates existing approaches in order to achieve comprehensiveness. The paper then introduces the knowledge asset dashboard to clarify the important actor/infrastructure relationship, which elucidates the dynamic nature of these assets. Finally, the paper suggests to visualise the value pathways of knowledge assets before designing strategic key performance indicators which can then be used to test the assumed causal relationships. This will enable organisations to manage and report these key value drivers in today’s economy. Introduction In the last decade management literature has paid significant attention to the role of knowledge for global competitiveness in the 21st century. It is recognised as a durable and more sustainable strategic resource to acquire and maintain competitive advantages (Barney, 1991a; Drucker, 1988; Grant, 1991a). Today’s business world is characterised by phenomena such as e-business, globalisation, higher degrees of competitiveness, fast evolution of new technology, rapidly changing client demands, as well as changing economic and political structures. In this new context companies need to develop clearly defined strategies that will give them a competitive advantage (Porter, 2001; Barney, 1991a). For this, organisations have to understand which capabilities they need in order to gain and maintain this competitive advantage (Barney, 1991a; Prahalad and Hamel, 1990). Organizational capabilities are based on knowledge. Thus, knowledge is a resource that forms the foundation of the company’s capabilities. Capabilities combine to The Emerald Research Register for this journal is available at The current issue and full text archive of this journal is available at www.emeraldinsight.com/researchregister www.emeraldinsight.com/1463-7154.htm The authors would like to thank, Göran Roos, Steven Pike, Oliver Gupta, as well as the two anonymous reviewers for their valuable comments which helped us to improve this paper. Intellectual capital",
"title": ""
},
{
"docid": "6eed03674521ecf9a558ab0059fc167f",
"text": "University professors traditionally struggle to incorporate software testing into their course curriculum. Worries include double-grading for correctness of both source and test code and finding time to teach testing as a topic. Test-driven development (TDD) has been suggested as a possible solution to improve student software testing skills and to realize the benefits of testing. According to most existing studies, TDD improves software quality and student productivity. This paper surveys the current state of TDD experiments conducted exclusively at universities. Similar surveys compare experiments in both the classroom and industry, but none have focused strictly on academia.",
"title": ""
},
{
"docid": "23305a36194ad3c9b6b3f667c79bd273",
"text": "Evidence used to reconstruct the morphology and function of the brain (and the rest of the central nervous system) in fossil hominin species comes from the fossil and archeological records. Although the details provided about human brain evolution are scarce, they benefit from interpretations informed by interspecific comparative studies and, in particular, human pathology studies. In recent years, new information has come to light about fossil DNA and ontogenetic trajectories, for which pathology research has significant implications. We briefly describe and summarize data from the paleoarcheological and paleoneurological records about the evolution of fossil hominin brains, including behavioral data most relevant to brain research. These findings are brought together to characterize fossil hominin taxa in terms of brain structure and function and to summarize brain evolution in the human lineage.",
"title": ""
},
{
"docid": "e9796b98d8f0bc81e1720be5431d2024",
"text": "Flexible structures may fall victim to excessive levels of vibration under the action of wind, adversely affecting serviceability and occupant comfort. To ensure the functional performance of flexible structures, various design modifications are possible, ranging from alternative structural systems to the utilization of passive and active control devices. This paper presents an overview of state-of-the-art measures to reduce structural response of buildings, including a summary of recent work in aerodynamic tailoring and a discussion of auxiliary damping devices for mitigating the wind-induced motion of structures. In addition, some discussion of the application of such devices to improve structural resistance to seismic events is also presented, concluding with detailed examples of the application of auxiliary damping devices in Australia, Canada, China, Japan, and the United States.",
"title": ""
},
{
"docid": "70b1e0badf7505e480af00014572140c",
"text": "Title of Dissertation: Simulation-Based Algorithms for Markov Decision Processes Ying He, Doctor of Philosophy, 2002 Dissertation directed by: Professor Steven I. Marcus Department of Electrical & Computer Engineering Professor Michael C. Fu Department of Decision & Information Technologies Problems of sequential decision making under uncertainty are common in manufacturing, computer and communication systems, and many such problems can be formulated as Markov Decision Processes (MDPs). Motivated by a capacity expansion and allocation problem in semiconductor manufacturing, we formulate a fab-level decision making problem using a finite-horizon transient MDP model that can integrate life cycle dynamics of the fab and provide a trade-off between immediate and future benefits and costs. However, for large and complicated systems formulated as MDPs, the classical methodology to compute optimal policies, dynamic programming, suffers from the so-called “curse of dimensionality” (computational requirement increases exponentially with number of states /controls) and “curse of modeling” (an explicit model for the cost structure and/or the transition probabilities is not available). In problem settings to which our approaches apply, instead of the explicit transition probabilities, outputs are available from either a simulation model or from the actual system. Our methodology is first to find the structure of optimal policies for some special cases, and then to use the structure to construct parameterized heuristic policies for more general cases and implement simulationbased algorithms to determine parameters of the heuristic policies. For the fab-level decision-making problem, we analyze the structure of the optimal policy for a special “one-machine, two-product” case, and discuss the applicability of simulation-based algorithms. We develop several simulation-based algorithms for MDPs to overcome the difficulties of “curse of dimensionality” and “curse of modeling”, considering both theoretical and practical issues. First, we develop a simulation-based policy iteration algorithm for average cost problems under a unichain assumption, relaxing the common recurrent state assumption. Second, for weighted cost problems, we develop a new two-timescale simulation-based gradient algorithms based on perturbation analysis, provide a theoretical convergence proof, and compare it with two recently proposed simulation-based gradient algorithms. Third, we propose two new Simultaneous Perturbation Stochastic Approximation (SPSA) algorithms for weighted cost problems and verify their effectiveness via simulation; then, we consider a general SPSA algorithm for function minimization and show its convergence under a weaker assumption: the function does not have to be differentiable. To Yingjiu and my parents ...",
"title": ""
},
{
"docid": "7639c7333339605c677da0a766618c1b",
"text": "This paper presents a general theoretical framework for ensemble methods of constructing signiicantly improved regression estimates. Given a population of regression estimators, we construct a hybrid estimator which is as good or better in the MSE sense than any estimator in the population. We argue that the ensemble method presented has several properties: 1) It eeciently uses all the networks of a population-none of the networks need be discarded. 2) It eeciently uses all the available data for training without over-tting. 3) It inherently performs regularization by smoothing in functional space which helps to avoid over-tting. 4) It utilizes local minima to construct improved estimates whereas other neural network algorithms are hindered by local minima. 5) It is ideally suited for parallel computation. 6) It leads to a very useful and natural measure of the number of distinct estimators in a population. 7) The optimal parameters of the ensemble estimator are given in closed form. Experimental results are provided which show that the ensemble method dramatically improves neural network performance on diicult real-world optical character recognition tasks.",
"title": ""
},
{
"docid": "c0204869607a36bf85452fad89153b9c",
"text": "Weather factors such as temperature and rainfall in residential areas and tourist destinations affect traffic flow on the surrounding roads. In this study, we attempt to find new knowledge between traffic congestion and weather by using big data processing technology. Changes in traffic congestion due to the weather are evaluated by using multiple linear regression analysis to create a prediction model and forecast traffic congestion on a daily basis. For the regression analysis, we use 48 weather forecasting factors and six dummy variables to express the days of the week. The final multiple linear regression model is then proposed based on the three analytical steps of (i) the creation of the full regression model, (ii) the removal of the variables, and (iii) residual analysis. We find that the R-squared value of the proposed model has an explanatory power of 0.6555. To verify its predictability, the proposed model then evaluates traffic congestion in July and August 2014 by comparing predicted traffic congestion with actual traffic congestion. By using the mean absolute percentage error valuation method, we show that the final multiple linear regression model has a prediction accuracy of 84.8%.",
"title": ""
},
{
"docid": "e9474d646b9da5e611475f4cdfdfc30e",
"text": "Wearable medical sensors (WMSs) are garnering ever-increasing attention from both the scientific community and the industry. Driven by technological advances in sensing, wireless communication, and machine learning, WMS-based systems have begun transforming our daily lives. Although WMSs were initially developed to enable low-cost solutions for continuous health monitoring, the applications of WMS-based systems now range far beyond health care. Several research efforts have proposed the use of such systems in diverse application domains, e.g., education, human-computer interaction, and security. Even though the number of such research studies has grown drastically in the last few years, the potential challenges associated with their design, development, and implementation are neither well-studied nor well-recognized. This article discusses various services, applications, and systems that have been developed based on WMSs and sheds light on their design goals and challenges. We first provide a brief history of WMSs and discuss how their market is growing. We then discuss the scope of applications of WMS-based systems. Next, we describe the architecture of a typical WMS-based system and the components that constitute such a system, and their limitations. Thereafter, we suggest a list of desirable design goals that WMS-based systems should satisfy. Finally, we discuss various research directions related to WMSs and how previous research studies have attempted to address the limitations of the components used in WMS-based systems and satisfy the desirable design goals.",
"title": ""
},
{
"docid": "4fa73e04ccc8620c12aaea666ea366a6",
"text": "The popularity of the Web and Internet commerce provides many extremely large datasets from which information can be gleaned by data mining. This book focuses on practical algorithms that have been used to solve key problems in data mining and can be used on even the largest datasets. It begins with a discussion of the map-reduce framework, an important tool for parallelizing algorithms automatically. The tricks of locality-sensitive hashing are explained. This body of knowledge, which deserves to be more widely known, is essential when seeking similar objects in a very large collection without having to compare each pair of objects. Stream processing algorithms for mining data that arrives too fast for exhaustive processing are also explained. The PageRank idea and related tricks for organizing the Web are covered next. Other chapters cover the problems of finding frequent itemsets and clustering, each from the point of view that the data is too large to fit in main memory, and two applications: recommendation systems and Web advertising, each vital in e-commerce. This second edition includes new and extended coverage on social networks, machine learning and dimensionality reduction. Written by leading authorities in database and web technologies, it is essential reading for students and practitioners alike",
"title": ""
},
{
"docid": "342bcd2509b632480c4f4e8059cfa6a1",
"text": "This paper introduces the design and development of a novel axial-flux permanent magnet generator (PMG) using a printed circuit board (PCB) stator winding. This design has the mechanical rigidity, high efficiency and zero cogging torque required for a low speed water current turbine. The PCB stator has simplified the design and construction and avoids any slip rings. The flexible PCB winding represents an ultra thin electromagnetic exciting source where coils are wound in a wedge shape. The proposed multi-poles generator can be used for various low speed applications especially in small marine current energy conversion systems.",
"title": ""
},
{
"docid": "ae19bd4334434cfb8c5ac015dc8d3bd4",
"text": "Soldiers and front-line personnel operating in tactical environments increasingly make use of handheld devices to help with tasks such as face recognition, language translation, decision-making, and mission planning. These resource constrained edge environments are characterized by dynamic context, limited computing resources, high levels of stress, and intermittent network connectivity. Cyber-foraging is the leverage of external resource-rich surrogates to augment the capabilities of resource-limited devices. In cloudlet-based cyber-foraging, resource-intensive computation and data is offloaded to cloudlets. Forward-deployed, discoverable, virtual-machine-based tactical cloudlets can be hosted on vehicles or other platforms to provide infrastructure to offload computation, provide forward data staging for a mission, perform data filtering to remove unnecessary data from streams intended for dismounted users, and serve as collection points for data heading for enterprise repositories. This paper describes tactical cloudlets and presents experimentation results for five different cloudlet provisioning mechanisms. The goal is to demonstrate that cyber-foraging in tactical environments is possible by moving cloud computing concepts and technologies closer to the edge so that tactical cloudlets, even if disconnected from the enterprise, can provide capabilities that can lead to enhanced situational awareness and decision making at the edge.",
"title": ""
},
{
"docid": "934ca8aa2798afd6e7cd4acceeed839a",
"text": "This paper begins with an argument that most measure development in the social sciences, with its reliance on correlational techniques as a tool, falls short of the requirements for constructing meaningful, unidimensional measures of human attributes. By demonstrating how rating scales are ordinal-level data, we argue the necessity of converting these to equal-interval units to develop a measure that is both qualitatively and quantitatively defensible. This requires that the empirical results and theoretical explanation are questioned and adjusted at each step of the process. In our response to the reviewers, we describe how this approach was used to develop the Game Engagement Questionnaire (GEQ), including its emphasis on examining a continuum of involvement in violent video games. The GEQ is an empirically sound measure focused on one player characteristic that may be important in determining game influence.",
"title": ""
},
{
"docid": "9acc94dd0f1cb229f15b2f833965e197",
"text": "Loitering is a suspicious behavior that often leads to criminal actions, such as pickpocketing and illegal entry. Tracking methods can determine suspicious behavior based on trajectory, but require continuous appearance and are difficult to scale up to multi-camera systems. Using the duration of appearance of features works on multiple cameras, but does not consider major aspects of loitering behavior, such as repeated appearance and trajectory of candidates. We introduce an entropy model that maps the location of a person's features on a heatmap. It can be used as an abstraction of trajectory tracking across multiple surveillance cameras. We evaluate our method over several datasets and compare it to other loitering detection methods. The results show that our approach has similar results to state of the art, but can provide additional interesting candidates.",
"title": ""
},
{
"docid": "f700b168c98d235a7fb76581cc24717f",
"text": "It is becoming increasingly easy to automatically replace a face of one person in a video with the face of another person by using a pre-trained generative adversarial network (GAN). Recent public scandals, e.g., the faces of celebrities being swapped onto pornographic videos, call for automated ways to detect these Deepfake videos. To help developing such methods, in this paper, we present the first publicly available set of Deepfake videos generated from videos of VidTIMIT database. We used open source software based on GANs to create the Deepfakes, and we emphasize that training and blending parameters can significantly impact the quality of the resulted videos. To demonstrate this impact, we generated videos with low and high visual quality (320 videos each) using differently tuned parameter sets. We showed that the state of the art face recognition systems based on VGG and Facenet neural networks are vulnerable to Deepfake videos, with 85.62% and 95.00% false acceptance rates (on high quality versions) respectively, which means methods for detecting Deepfake videos are necessary. By considering several baseline approaches, we found that audio-visual approach based on lipsync inconsistency detection was not able to distinguish Deepfake videos. The best performing method, which is based on visual quality metrics and is often used in presentation attack detection domain, resulted in 8.97% equal error rate on high quality Deepfakes. Our experiments demonstrate that GAN-generated Deepfake videos are challenging for both face recognition systems and existing detection methods, and the further development of face swapping technology will make it even more so.",
"title": ""
},
{
"docid": "8a1b37cf4d0632270f83a0826535c38a",
"text": "Magnetic resonance imaging (MRI) examinations provide high-resolution information about the anatomic structure of the kidneys and are used to measure total kidney volume (TKV) in patients with Autosomal Dominant Polycystic Kidney Disease (ADPKD). Height-adjusted TKV (HtTKV) has become the gold-standard imaging biomarker for ADPKD progression at early stages of the disease when estimated glomerular filtration rate (eGFR) is still normal. However, HtTKV does not take advantage of the wealth of information provided by MRI. Here we tested whether image texture features provide additional insights into the ADPKD kidney that may be used as complementary information to existing biomarkers. A retrospective cohort of 122 patients from the Consortium for Radiologic Imaging Studies of Polycystic Kidney Disease (CRISP) study was identified who had T2-weighted MRIs and eGFR values over 70 mL/min/1.73m2 at the time of their baseline scan. We computed nine distinct image texture features for each patient. The ability of each feature to predict subsequent progression to CKD stage 3A, 3B, and 30% reduction in eGFR at eight-year follow-up was assessed. A multiple linear regression model was developed incorporating age, baseline eGFR, HtTKV, and three image texture features identified by stability feature selection (Entropy, Correlation, and Energy). Including texture in a multiple linear regression model (predicting percent change in eGFR) improved Pearson correlation coefficient from -0.51 (using age, eGFR, and HtTKV) to -0.70 (adding texture). Thus, texture analysis offers an approach to refine ADPKD prognosis and should be further explored for its utility in individualized clinical decision making and outcome prediction.",
"title": ""
},
{
"docid": "97adbe6b157cd5d411788d18520612a3",
"text": "MicroProteins (miPs) are short, usually single-domain proteins that, in analogy to miRNAs, heterodimerize with their targets and exert a dominant-negative effect. Recent bioinformatic attempts to identify miPs have resulted in a list of potential miPs, many of which lack the defining characteristics of a miP. In this opinion article, we clearly state the characteristics of a miP as evidenced by known proteins that fit the definition; we explain why modulatory proteins misrepresented as miPs do not qualify as true miPs. We also discuss the evolutionary history of miPs, and how the miP concept can extend beyond transcription factors (TFs) to encompass different non-TF proteins that require dimerization for full function.",
"title": ""
},
{
"docid": "c3a9ccc724f388399c25938a33123bd5",
"text": "Using a unique high-frequency futures dataset, we characterize the response of U.S., German and British stock, bond and foreign exchange markets to real-time U.S. macroeconomic news. We find that news produces conditional mean jumps; hence high-frequency stock, bond and exchange rate dynamics are linked to fundamentals. Equity markets, moreover, react differently to news depending on the stage of the business cycle, which explains the low correlation between stock and bond returns when averaged over the cycle. Hence our results qualify earlier work suggesting that bond markets react most strongly to macroeconomic news; in particular, when conditioning on the state of the economy, the equity and foreign Journal of International Economics 73 (2007) 251–277 www.elsevier.com/locate/econbase ☆ This work was supported by the National Science Foundation, the Guggenheim Foundation, the BSI Gamma Foundation, and CREATES. For useful comments we thank the Editor and referees, seminar participants at the Bank for International Settlements, the BSI Gamma Foundation, the Symposium of the European Central Bank/Center for Financial Studies Research Network, the NBER International Finance and Macroeconomics program, and the American Economic Association Annual Meetings, as well as Rui Albuquerque, Annika Alexius, Boragan Aruoba, Anirvan Banerji, Ben Bernanke, Robert Connolly, Jeffrey Frankel, Lingfeng Li, Richard Lyons, Marco Pagano, Paolo Pasquariello, and Neng Wang. ⁎ Corresponding author. Department of Economics, University of Pennsylvania, 3718 Locust Walk Philadelphia, PA 19104-6297, United States. Tel.: +1 215 898 1507; fax: +1 215 573 4217. E-mail addresses: [email protected] (T.G. Andersen), [email protected] (T. Bollerslev), [email protected] (F.X. Diebold), [email protected] (C. Vega). 0022-1996/$ see front matter © 2007 Elsevier B.V. All rights reserved. doi:10.1016/j.jinteco.2007.02.004 exchange markets appear equally responsive. 
Finally, we also document important contemporaneous links across all markets and countries, even after controlling for the effects of macroeconomic news. © 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "13476dc47793a50200c97ec896b92cf2",
"text": "Many promising therapeutic agents are limited by their inability to reach the systemic circulation, due to the excellent barrier properties of biological membranes, such as the stratum corneum (SC) of the skin or the sclera/cornea of the eye and others. The outermost layer of the skin, the SC, is the principal barrier to topically-applied medications. The intact SC thus provides the main barrier to exogenous substances, including drugs. Only drugs with very specific physicochemical properties (molecular weight < 500 Da, adequate lipophilicity, and low melting point) can be successfully administered transdermally. Transdermal delivery of hydrophilic drugs and macromolecular agents of interest, including peptides, DNA, and small interfering RNA is problematic. Therefore, facilitation of drug penetration through the SC may involve by-pass or reversible disruption of SC molecular architecture. Microneedles (MNs), when used to puncture skin, will by-pass the SC and create transient aqueous transport pathways of micron dimensions and enhance the transdermal permeability. These micropores are orders of magnitude larger than molecular dimensions, and, therefore, should readily permit the transport of hydrophilic macromolecules. Various strategies have been employed by many research groups and pharmaceutical companies worldwide, for the fabrication of MNs. This review details various types of MNs, fabrication methods and, importantly, investigations of clinical safety of MN.",
"title": ""
},
{
"docid": "7e647cac9417bf70acd8c0b4ee0faa9b",
"text": "Global Navigation Satellite Systems (GNSS) are applicable to deliver train locations in real time. This train localization function should comply with railway functional safety standards; thus, the GNSS performance needs to be evaluated in consistent with railway EN 50126 standard [Reliability, Availability, Maintainability, and Safety (RAMS)]. This paper demonstrates the performance of the GNSS receiver for train localization. First, the GNSS performance and railway RAMS properties are compared by definitions. Second, the GNSS receiver measurements are categorized into three states (i.e., up, degraded, and faulty states). The relations between the states are illustrated in a stochastic Petri net model. Finally, the performance properties are evaluated using real data collected on the railway track in High Tatra Mountains in Slovakia. The property evaluation is based on the definitions represented by the modeled states.",
"title": ""
},
{
"docid": "3ab8e2a7235d5100b8b65fbf9a088404",
"text": "In multi-label classification in the big data age, the number of classes can be in thousands, and obtaining sufficient training data for each class is infeasible. Zero-shot learning aims at predicting a large number of unseen classes using only labeled data from a small set of classes and external knowledge about class relations. However, previous zero-shot learning models passively accept labeled data collected beforehand, relinquishing the opportunity to select the proper set of classes to inquire labeled data and optimize the performance of unseen class prediction. To resolve this issue, we propose an active class selection strategy to intelligently query labeled data for a parsimonious set of informative classes. We demonstrate two desirable probabilistic properties of the proposed method that can facilitate unseen classes prediction. Experiments on 4 text datasets demonstrate that the active zero-shot learning algorithm is superior to a wide spectrum of baselines. We indicate promising future directions at the end of this paper.",
"title": ""
}
] | scidocsrr |
a54d1e9f745295cc76b789e03f97e8b6 | The Demographics of Mail Search and their Application to Query Suggestion | [
{
"docid": "99f93328d19ac240378c5cfe08cf9f9e",
"text": "Email classification is still a mostly manual task. Consequently, most Web mail users never define a single folder. Recently however, automatic classification offering the same categories to all users has started to appear in some Web mail clients, such as AOL or Gmail. We adopt this approach, rather than previous (unsuccessful) personalized approaches because of the change in the nature of consumer email traffic, which is now dominated by (non-spam) machine-generated email. We propose here a novel approach for (1) automatically distinguishing between personal and machine-generated email and (2) classifying messages into latent categories, without requiring users to have defined any folder. We report how we have discovered that a set of 6 \"latent\" categories (one for human- and the others for machine-generated messages) can explain a significant portion of email traffic. We describe in details the steps involved in building a Web-scale email categorization system, from the collection of ground-truth labels, the selection of features to the training of models. Experimental evaluation was performed on more than 500 billion messages received during a period of six months by users of Yahoo mail service, who elected to be part of such research studies. Our system achieved precision and recall rates close to 90% and the latent categories we discovered were shown to cover 70% of both email traffic and email search queries. We believe that these results pave the way for a change of approach in the Web mail industry, and could support the invention of new large-scale email discovery paradigms that had not been possible before.",
"title": ""
},
{
"docid": "57ba9e280303078261d4384dd9407f92",
"text": "People often repeat Web searches, both to find new information on topics they have previously explored and to re-find information they have seen in the past. The query associated with a repeat search may differ from the initial query but can nonetheless lead to clicks on the same results. This paper explores repeat search behavior through the analysis of a one-year Web query log of 114 anonymous users and a separate controlled survey of an additional 119 volunteers. Our study demonstrates that as many as 40% of all queries are re-finding queries. Re-finding appears to be an important behavior for search engines to explicitly support, and we explore how this can be done. We demonstrate that changes to search engine results can hinder re-finding, and provide a way to automatically detect repeat searches and predict repeat clicks.",
"title": ""
}
] | [
{
"docid": "cf8915016c6a6d6537fbd368238c81f3",
"text": "A 5-year-old boy was followed up with migratory spermatic cord and a perineal tumour at the paediatric department after birth. He was born by Caesarean section at 38 weeks in viviparity. Weight at birth was 3650 g. Although a meningocele in the sacral region was found by MRI, there were no symptoms in particular and no other deformity was found. When he was 4 years old, he presented to our department with the perinal tumour. On examination, a slender scrotum-like tumour covering the centre of the perineal lesion, along with inflammation and ulceration around the skin of the anus, was observed. Both testes and scrotums were observed in front of the tumour (Figure 1a). An excision of the tumour and Z-plasty of the perineal lesion were performed. The subcutaneous tissue consisted of adipose tissue-like lipoma and was resected along with the tumour (Figure 1b). A Z-plasty was carefully performed in order to maintain the lefteright symmetry of the",
"title": ""
},
{
"docid": "af9c94a8d4dcf1122f70f5d0b90a247f",
"text": "New cloud services are being developed to support a wide variety of real-life applications. In this paper, we introduce a new cloud service: industrial automation, which includes different functionalities from feedback control and telemetry to plant optimization and enterprise management. We focus our study on the feedback control layer as the most time-critical and demanding functionality. Today's large-scale industrial automation projects are expensive and time-consuming. Hence, we propose a new cloud-based automation architecture, and we analyze cost and time savings under the proposed architecture. We show that significant cost and time savings can be achieved, mainly due to the virtualization of controllers and the reduction of hardware cost and associated labor. However, the major difficulties in providing cloud-based industrial automation systems are timeliness and reliability. Offering automation functionalities from the cloud over the Internet puts the controlled processes at risk due to varying communication delays and potential failure of virtual machines and/or links. Thus, we design an adaptive delay compensator and a distributed fault tolerance algorithm to mitigate delays and failures, respectively. We theoretically analyze the performance of the proposed architecture when compared to the traditional systems and prove zero or negligible change in performance. To experimentally evaluate our approach, we implement our controllers on commercial clouds and use them to control: (i) a physical model of a solar power plant, where we show that the fault-tolerance algorithm effectively makes the system unaware of faults, and (ii) industry-standard emulation with large injected delays and disturbances, where we show that the proposed cloud-based controllers perform indistinguishably from the best-known counterparts: local controllers.",
"title": ""
},
{
"docid": "7d0ebf939deed43253d5360e325c3e8e",
"text": "Roughly speaking, clustering evolving networks aims at detecting structurally dense subgroups in networks that evolve over time. This implies that the subgroups we seek for also evolve, which results in many additional tasks compared to clustering static networks. We discuss these additional tasks and difficulties resulting thereof and present an overview on current approaches to solve these problems. We focus on clustering approaches in online scenarios, i.e., approaches that incrementally use structural information from previous time steps in order to incorporate temporal smoothness or to achieve low running time. Moreover, we describe a collection of real world networks and generators for synthetic data that are often used for evaluation.",
"title": ""
},
{
"docid": "53dc606897bd6388c729cc8138027b31",
"text": "Abstract|This paper presents transient stability and power ow models of Thyristor Controlled Reactor (TCR) and Voltage Sourced Inverter (VSI) based Flexible AC Transmission System (FACTS) Controllers. Models of the Static VAr Compensator (SVC), the Thyristor Controlled Series Compensator (TCSC), the Static VAr Compensator (STATCOM), the Static Synchronous Source Series Compensator (SSSC), and the Uni ed Power Flow Controller (UPFC) appropriate for voltage and angle stability studies are discussed in detail. Validation procedures obtained for a test system with a detailed as well as a simpli ed UPFC model are also presented and brie y discussed.",
"title": ""
},
{
"docid": "b1e4fb97e4b1d31e4064f174e50f17d3",
"text": "We propose an inverse reinforcement learning (IRL) approach using Deep QNetworks to extract the rewards in problems with large state spaces. We evaluate the performance of this approach in a simulation-based autonomous driving scenario. Our results resemble the intuitive relation between the reward function and readings of distance sensors mounted at different poses on the car. We also show that, after a few learning rounds, our simulated agent generates collision-free motions and performs human-like lane change behaviour.",
"title": ""
},
{
"docid": "58d19a5460ce1f830f7a5e2cb1c5ebca",
"text": "In multi-source sequence-to-sequence tasks, the attention mechanism can be modeled in several ways. This topic has been thoroughly studied on recurrent architectures. In this paper, we extend the previous work to the encoder-decoder attention in the Transformer architecture. We propose four different input combination strategies for the encoderdecoder attention: serial, parallel, flat, and hierarchical. We evaluate our methods on tasks of multimodal translation and translation with multiple source languages. The experiments show that the models are able to use multiple sources and improve over single source baselines.",
"title": ""
},
{
"docid": "a48a88e3e6e35779392f5dea132d49f2",
"text": "Community detection emerged as an important exploratory task in complex networks analysis across many scientific domains. Many methods have been proposed to solve this problem, each one with its own mechanism and sometimes with a different notion of community. In this article, we bring most common methods in the literature together in a comparative approach and reveal their performances in both real-world networks and synthetic networks. Surprisingly, many of those methods discovered better communities than the declared ground-truth communities in terms of some topological goodness features, even on benchmarking networks with built-in communities. We illustrate different structural characteristics that these methods could identify in order to support users to choose an appropriate method according to their specific requirements on different structural qualities.",
"title": ""
},
{
"docid": "d0ec144c5239b532987157a64d499f61",
"text": "(1) Disregard pseudo-queries that do not retrieve their pseudo-relevant document in the top nrank. (2) Select the top nneg retrieved documents are negative training examples. General Approach: Generate mock interaction embeddings and filter training examples down to those the most nearly match a set of template query-document pairs (given a distance function). Since interaction embeddings specific to what a model “sees,” interaction filters are model-specific.",
"title": ""
},
{
"docid": "37482eea1f087101011ba48ac8923ecb",
"text": "Routers classify packets to determine which flow they belong to, and to decide what service they should receive. Classification may, in general, be based on an arbitrary number of fields in the packet header. Performing classification quickly on an arbitrary number of fields is known to be difficult, and has poor worst-case performance. In this paper, we consider a number of classifiers taken from real networks. We find that the classifiers contain considerable structure and redundancy that can be exploited by the classification algorithm. In particular, we find that a simple multi-stage classification algorithm, called RFC (recursive flow classification), can classify 30 million packets per second in pipelined hardware, or one million packets per second in software.",
"title": ""
},
{
"docid": "f1f424a703eefaabe8c704bd07e21a21",
"text": "It is more convincing for users to have their own 3-D body shapes in the virtual fitting room when they shop clothes online. However, existing methods are limited for ordinary users to efficiently and conveniently access their 3-D bodies. We propose an efficient data-driven approach and develop an android application for 3-D body customization. Users stand naturally and their photos are taken from front and side views with a handy phone camera. They can wear casual clothes like a short-sleeved/long-sleeved shirt and short/long pants. First, we develop a user-friendly interface to semi-automatically segment the human body from photos. Then, the segmented human contours are scaled and translated to the ones under our virtual camera configurations. Through this way, we only need one camera to take photos of human in two views and do not need to calibrate the camera, which satisfy the convenience requirement. Finally, we learn body parameters that determine the 3-D body from dressed-human silhouettes with cascaded regressors. The regressors are trained using a database containing 3-D naked and dressed body pairs. Body parameters regression only costs 1.26 s on an android phone, which ensures the efficiency of our method. We invited 12 volunteers for tests, and the mean absolute estimation error for chest/waist/hip size is 2.89/1.93/2.22 centimeters. We additionally use 637 synthetic data to evaluate the main procedures of our approach.",
"title": ""
},
{
"docid": "b9dfc489ff1bf6907929a450ea614d0b",
"text": "Internet of things (IoT) is going to be ubiquitous in the next few years. In the smart city initiative, millions of sensors will be deployed for the implementation of IoT related services. Even in the normal cellular architecture, IoT will be deployed as a value added service for several new applications. Such massive deployment of IoT sensors and devices would certainly cost a large sum of money. In addition to the cost of deployment, the running costs or the operational expenditure of the IoT networks will incur huge power bills and spectrum license charges. As IoT is going to be a pervasive technology, its sustainability and environmental effects too are important. Energy efficiency and overall resource optimization would make it the long term technology of the future. Therefore, green IoT is essential for the operators and the long term sustainability of IoT itself. In this article we consider the green initiatives being worked out for IoT. We also show that narrowband IoT as the greener version right now.",
"title": ""
},
{
"docid": "3c5e3f2fe99cb8f5b26a880abfe388f8",
"text": "Facial point detection is an active area in computer vision due to its relevance to many applications. It is a nontrivial task, since facial shapes vary significantly with facial expressions, poses or occlusion. In this paper, we address this problem by proposing a discriminative deep face shape model that is constructed based on an augmented factorized three-way Restricted Boltzmann Machines model. Specifically, the discriminative deep model combines the top-down information from the embedded face shape patterns and the bottom up measurements from local point detectors in a unified framework. In addition, along with the model, effective algorithms are proposed to perform model learning and to infer the true facial point locations from their measurements. Based on the discriminative deep face shape model, 68 facial points are detected on facial images in both controlled and “in-the-wild” conditions. Experiments on benchmark data sets show the effectiveness of the proposed facial point detection algorithm against state-of-the-art methods.",
"title": ""
},
{
"docid": "0f2023682deaf2eb70c7becd8b3375dd",
"text": "Generating answer with natural language sentence is very important in real-world question answering systems, which needs to obtain a right answer as well as a coherent natural response. In this paper, we propose an end-to-end question answering system called COREQA in sequence-to-sequence learning, which incorporates copying and retrieving mechanisms to generate natural answers within an encoder-decoder framework. Specifically, in COREQA, the semantic units (words, phrases and entities) in a natural answer are dynamically predicted from the vocabulary, copied from the given question and/or retrieved from the corresponding knowledge base jointly. Our empirical study on both synthetic and realworld datasets demonstrates the efficiency of COREQA, which is able to generate correct, coherent and natural answers for knowledge inquired questions.",
"title": ""
},
{
"docid": "4653c085c5b91107b5eb637e45364943",
"text": "Legged locomotion excels when terrains become too rough for wheeled systems or open-loop walking pattern generators to succeed, i.e., when accurate foot placement is of primary importance in successfully reaching the task goal. In this paper we address the scenario where the rough terrain is traversed with a static walking gait, and where for every foot placement of a leg, the location of the foot placement was selected irregularly by a planning algorithm. Our goal is to adjust a smooth walking pattern generator with the selection of every foot placement such that the COG of the robot follows a stable trajectory characterized by a stability margin relative to the current support triangle. We propose a novel parameterization of the COG trajectory based on the current position, velocity, and acceleration of the four legs of the robot. This COG trajectory has guaranteed continuous velocity and acceleration profiles, which leads to continuous velocity and acceleration profiles of the leg movement, which is ideally suited for advanced model-based controllers. Pitch, yaw, and ground clearance of the robot are easily adjusted automatically under any terrain situation. We evaluate our gait generation technique on the Little-Dog quadruped robot when traversing complex rocky and sloped terrains.",
"title": ""
},
{
"docid": "8bda640f73c3941272739a57a5d02353",
"text": "Researchers strive to understand eating behavior as a means to develop diets and interventions that can help people achieve and maintain a healthy weight, recover from eating disorders, or manage their diet and nutrition for personal wellness. A major challenge for eating-behavior research is to understand when, where, what, and how people eat. In this paper, we evaluate sensors and algorithms designed to detect eating activities, more specifically, when people eat. We compare two popular methods for eating recognition (based on acoustic and electromyography (EMG) sensors) individually and combined. We built a data-acquisition system using two off-the-shelf sensors and conducted a study with 20 participants. Our preliminary results show that the system we implemented can detect eating with an accuracy exceeding 90.9% while the crunchiness level of food varies. We are developing a wearable system that can capture, process, and classify sensor data to detect eating in real-time.",
"title": ""
},
{
"docid": "23d26c14a9aa480b98bcaa633fc378e5",
"text": "In this paper we present novel sensory feedbacks named ”King-Kong Effects” to enhance the sensation of walking in virtual environments. King Kong Effects are inspired by special effects in movies in which the incoming of a gigantic creature is suggested by adding visual vibrations/pulses to the camera at each of its steps. In this paper, we propose to add artificial visual or tactile vibrations (King-Kong Effects or KKE) at each footstep detected (or simulated) during the virtual walk of the user. The user can be seated, and our system proposes to use vibrotactile tiles located under his/her feet for tactile rendering, in addition to the visual display. We have designed different kinds of KKE based on vertical or lateral oscillations, physical or metaphorical patterns, and one or two peaks for heal-toe contacts simulation. We have conducted different experiments to evaluate the preferences of users navigating with or without the various KKE. Taken together, our results identify the best choices for future uses of visual and tactile KKE, and they suggest a preference for multisensory combinations. Our King-Kong effects could be used in a variety of VR applications targeting the immersion of a user walking in a 3D virtual scene.",
"title": ""
},
{
"docid": "d0c8a1faccfa3f0469e6590cc26097c8",
"text": "This paper introduces an automatic method for editing a portrait photo so that the subject appears to be wearing makeup in the style of another person in a reference photo. Our unsupervised learning approach relies on a new framework of cycle-consistent generative adversarial networks. Different from the image domain transfer problem, our style transfer problem involves two asymmetric functions: a forward function encodes example-based style transfer, whereas a backward function removes the style. We construct two coupled networks to implement these functions - one that transfers makeup style and a second that can remove makeup - such that the output of their successive application to an input photo will match the input. The learned style network can then quickly apply an arbitrary makeup style to an arbitrary photo. We demonstrate the effectiveness on a broad range of portraits and styles.",
"title": ""
},
{
"docid": "2a0b81bbe867a5936dafc323d8563970",
"text": "Social network analysis has gained significant attention in recent years, largely due to the success of online social networking and media-sharing sites, and the consequent availability of a wealth of social network data. In spite of the growing interest, however, there is little understanding of the potential business applications of mining social networks. While there is a large body of research on different problems and methods for social network mining, there is a gap between the techniques developed by the research community and their deployment in real-world applications. Therefore the potential business impact of these techniques is still largely unexplored.\n In this article we use a business process classification framework to put the research topics in a business context and provide an overview of what we consider key problems and techniques in social network analysis and mining from the perspective of business applications. In particular, we discuss data acquisition and preparation, trust, expertise, community structure, network dynamics, and information propagation. In each case we present a brief overview of the problem, describe state-of-the art approaches, discuss business application examples, and map each of the topics to a business process classification framework. In addition, we provide insights on prospective business applications, challenges, and future research directions. The main contribution of this article is to provide a state-of-the-art overview of current techniques while providing a critical perspective on business applications of social network analysis and mining.",
"title": ""
},
{
"docid": "2faf7fedadfd8b24c4740f7100cf5fec",
"text": "Lacking standardized extrinsic evaluation methods for vector representations of words, the NLP community has relied heavily on word similarity tasks as a proxy for intrinsic evaluation of word vectors. Word similarity evaluation, which correlates the distance between vectors and human judgments of “semantic similarity”, is attractive because it is computationally inexpensive and fast. In this paper we present several problems associated with the evaluation of word vectors on word similarity datasets, and summarize existing solutions. Our study suggests that the use of word similarity tasks for evaluation of word vectors is not sustainable and calls for further research on evaluation methods.",
"title": ""
}
] | scidocsrr |
285fda4fd9e274640892dff2a13211cb | Derivation of GFDM based on OFDM principles | [
{
"docid": "d1f8ee3d6dbc7ddc76b84ad2b0bfdd16",
"text": "Cognitive radio technology addresses the limited availability of wireless spectrum and inefficiency of spectrum usage. Cognitive Radio (CR) devices sense their environment, detect spatially unused spectrum and opportunistically access available spectrum without creating harmful interference to the incumbents. In cellular systems with licensed spectrum, the efficient utilization of the spectrum as well as the protection of primary users is equally important, which imposes opportunities and challenges for the application of CR. This paper introduces an experimental framework for 5G cognitive radio access in current 4G LTE cellular systems. It can be used to study CR concepts in different scenarios, such as 4G to 5G system migrations, machine-type communications, device-to-device communications, and load balancing. Using our framework, selected measurement results are presented that compare Long Term Evolution (LTE) Orthogonal Frequency Division Multiplexing (OFDM) with a candidate 5G waveform called Generalized Frequency Division Multiplexing (GFDM) and quantify the benefits of GFDM in CR scenarios.",
"title": ""
},
{
"docid": "d23fc72c7fb3cbbc9120d2ab9fc14e75",
"text": "Generalized frequency division multiplexing (GFDM) is a new concept that can be seen as a generalization of traditional OFDM. The scheme is based on the filtered multi-carrier approach and can offer an increased flexibility, which will play a significant role in future cellular applications. In this paper we present the benefits of the pulse shaped carriers in GFDM. We show that based on the FFT/IFFT algorithm, the scheme can be implemented with reasonable computational effort. Further, to be able to relate the results to the recent LTE standard, we present a suitable set of parameters for GFDM.",
"title": ""
}
] | [
{
"docid": "2d17b30942ce0984dcbcf5ca5ba38bd2",
"text": "We review the literature on the relation between narcissism and consumer behavior. Consumer behavior is sometimes guided by self-related motives (e.g., self-enhancement) rather than by rational economic considerations. Narcissism is a case in point. This personality trait reflects a self-centered, self-aggrandizing, dominant, and manipulative orientation. Narcissists are characterized by exhibitionism and vanity, and they see themselves as superior and entitled. To validate their grandiose self-image, narcissists purchase high-prestige products (i.e., luxurious, exclusive, flashy), show greater interest in the symbolic than utilitarian value of products, and distinguish themselves positively from others via their materialistic possessions. Our review lays the foundation for a novel methodological approach in which we explore how narcissism influences eye movement behavior during consumer decision-making. We conclude with a description of our experimental paradigm and report preliminary results. Our findings will provide insight into the mechanisms underlying narcissists' conspicuous purchases. They will also likely have implications for theories of personality, consumer behavior, marketing, advertising, and visual cognition.",
"title": ""
},
{
"docid": "c3d06acdf8b74535fa22ed08420d5433",
"text": "Generative adversarial networks have been shown to generate very realistic images by learning through a min-max game. Furthermore, these models are known to model image spaces more easily when conditioned on class labels. In this work, we consider conditioning on fine-grained textual descriptions, thus also enabling us to produce realistic images that correspond to the input text description. Additionally, we consider the task of learning disentangled representations for images through special latent codes, such that we can move them as knobs to alter the generated image. These latent codes take on very interpretable roles and are learnt in a completely unsupervised manner, using ideas from InfoGAN. We show that the learnt latent codes that encode much more variance and semantic interpretability as compared to standard GANs by experimenting on two datasets.",
"title": ""
},
{
"docid": "b4c73776e6a1004f75991df0a26ad407",
"text": "Recurrent urinary tract infections (UTIs) are common, especially in women. Low-dose daily or postcoital antimicrobial prophylaxis is effective for prevention of recurrent UTIs and women can self-diagnose and self-treat a new UTI with antibiotics. The increasing resistance rates of Escherichia coli to antimicrobial agents has, however, stimulated interest in nonantibiotic methods for the prevention of UTIs. This article reviews the literature on efficacy of different forms of nonantibiotic prophylaxis. Future studies with lactobacilli strains (oral and vaginal) and the oral immunostimulant OM-89 are warranted.",
"title": ""
},
{
"docid": "c9b7832cd306fc022e4a376f10ee8fc8",
"text": "This paper describes a study to assess the influence of a variety of factors on reported level of presence in immersive virtual environments. It introduces the idea of stacking depth, that is, where a participant can simulate the process of entering the virtual environment while already in such an environment, which can be repeated to several levels of depth. An experimental study including 24 subjects was carried out. Half of the subjects were transported between environments by using virtual head-mounted displays, and the other half by going through doors. Three other binary factors were whether or not gravity operated, whether or not the subject experienced a virtual precipice, and whether or not the subject was followed around by a virtual actor. Visual, auditory, and kinesthetic representation systems and egocentric/exocentric perceptual positions were assessed by a preexperiment questionnaire. Presence was assessed by the subjects as their sense of being there, the extent to which they experienced the virtual environments as more the presenting reality than the real world in which the experiment was taking place, and the extent to which the subject experienced the virtual environments as places visited rather than images seen. A logistic regression analysis revealed that subjective reporting of presence was significantly positively associated with visual and kinesthetic representation systems, and negatively with the auditory system. This was not surprising since the virtual reality system used was primarily visual. The analysis also showed a significant and positive association with stacking level depth for those who were transported between environments by using the virtual HMD, and a negative association for those who were transported through doors. Finally, four of the subjects moved their real left arm to match movement of the left arm of the virtual body displayed by the system. These four scored significantly higher on the kinesthetic representation system than the remainder of the subjects.",
"title": ""
},
{
"docid": "4b6a4f9d91bc76c541f4879a1a684a3f",
"text": "Query auto-completion (QAC) is one of the most prominent features of modern search engines. The list of query candidates is generated according to the prefix entered by the user in the search box and is updated on each new key stroke. Query prefixes tend to be short and ambiguous, and existing models mostly rely on the past popularity of matching candidates for ranking. However, the popularity of certain queries may vary drastically across different demographics and users. For instance, while instagram and imdb have comparable popularities overall and are both legitimate candidates to show for prefix i, the former is noticeably more popular among young female users, and the latter is more likely to be issued by men.\n In this paper, we present a supervised framework for personalizing auto-completion ranking. We introduce a novel labelling strategy for generating offline training labels that can be used for learning personalized rankers. We compare the effectiveness of several user-specific and demographic-based features and show that among them, the user's long-term search history and location are the most effective for personalizing auto-completion rankers. We perform our experiments on the publicly available AOL query logs, and also on the larger-scale logs of Bing. The results suggest that supervised rankers enhanced by personalization features can significantly outperform the existing popularity-based baselines, in terms of mean reciprocal rank (MRR) by up to 9%.",
"title": ""
},
{
"docid": "0a2d9103ca2b5c6b4c1f1efef3143d4f",
"text": "Recently, a number of coding techniques have been reported to achieve near toll quality synthesized speech at bit-rates around 4 kb/s. These include variants of Code Excited Linear Prediction (CELP), Sinusoidal Transform Coding (STC) and Multi-Band Excitation (MBE). While CELP has been an effective technique for bit-rates above 6 kb/s, STC, MBE, Waveform Interpolation (WI) and Mixed Excitation Linear Prediction (MELP) [1, 2] models seem to be attractive at bit-rates below 3 kb/s. In this paper, we present a system to encode speech with high quality using MELP, a technique previously demonstrated to be effective at bit-rates of 1.6–2.4 kb/s. We have enhanced the MELP model producing significantly higher speech quality at bit-rates above 2.4 kb/s. We describe the development and testing of a high quality 4 kb/s MELP coder.",
"title": ""
},
{
"docid": "381103e7aced15dbc42fd643e0bf69c7",
"text": "Lifted graphical models provide a language for expressing dependencies between different types of entities, their attributes, and their diverse relations, as well as techniques for probabilistic reasoning in such multi-relational domains. In this survey, we review a general form for a lifted graphical model, a par-factor graph, and show how a number of existing statistical relational representations map to this formalism. We discuss inference algorithms, including lifted inference algorithms, that efficiently compute the answers to probabilistic queries over such models. We also review work in learning lifted graphical models from data. There is a growing need for statistical relational models (whether they go by that name or another), as we are inundated with data which is a mix of structured and unstructured, with entities and relations extracted in a noisy manner from text, and with the need to reason effectively with this data. We hope that this synthesis of ideas from many different research groups will provide an accessible starting point for new researchers in this expanding field.",
"title": ""
},
{
"docid": "ded208999b66a677d90b9e713f3d32ed",
"text": "We present Spectrogram, a machine learning based statistical anomaly detection (AD) sensor for defense against web-layer code-injection attacks. These attacks include PHP file inclusion, SQL-injection and cross-site scripting; memory-layer exploits such as buffer overflows are addressed as well. Statistical AD sensors offer the advantage of being driven by the data that is being protected and not by malcode samples captured in the wild. While models using higher order statistics can often improve accuracy, trade-offs with false-positive rates and model efficiency remain a limiting usability factor. This paper presents a new model and sensor framework that offers a favorable balance under this constraint and demonstrates improvement over some existing approaches. Spectrogram is a network situated sensor that dynamically assembles packets to reconstruct content flows and learns to recognize legitimate web-layer script input. We describe an efficient model for this task in the form of a mixture of Markov chains and derive the corresponding training algorithm. Our evaluations show significant detection results on an array of real world web layer attacks, comparing favorably against other AD approaches.",
"title": ""
},
{
"docid": "ecd67367aed0f3f7e3218cdec8a392b4",
"text": "OBJECTIVE\nTo investigate the efficacy of home-based specific stabilizing exercises focusing on the local stabilizing muscles as the only intervention in the treatment of persistent postpartum pelvic girdle pain.\n\n\nDESIGN\nA prospective, randomized, single-blinded, clinically controlled study.\n\n\nSUBJECTS\nEighty-eight women with pelvic girdle pain were recruited 3 months after delivery.\n\n\nMETHODS\nThe treatment consisted of specific stabilizing exercises targeting the local trunk muscles. The reference group had a single telephone contact with a physiotherapist. Primary outcome was disability measured with Oswestry Disability Index. Secondary outcomes were pain, health-related quality of life (EQ-5D), symptom satisfaction, and muscle function.\n\n\nRESULTS\nNo significant differences between groups could be found at 3- or 6-month follow-up regarding primary outcome in disability. Within-group comparisons showed some improvement in both groups in terms of disability, pain, symptom satisfaction and muscle function compared with baseline, although the majority still experienced pelvic girdle pain.\n\n\nCONCLUSION\nTreatment with this home-training concept of specific stabilizing exercises targeting the local muscles was no more effective in improving consequences of persistent postpartum pelvic girdle pain than the clinically natural course. Regardless of whether treatment with specific stabilizing exercises was carried out, the majority of women still experienced some back pain almost one year after pregnancy.",
"title": ""
},
{
"docid": "b4c12965618d7d3a8049a91b513ca896",
"text": "There is a convergence in recent theories of creativity that go beyond characteristics and cognitive processes of individuals to recognize the importance of the social construction of creativity. In parallel, there has been a rise in social computing supporting the collaborative construction of knowledge. The panel will discuss the challenges and opportunities from the confluence of these two developments by bringing together the contrasting and controversial perspective of the individual panel members. It will synthesize from different perspectives an analytic framework to understand these new developments, and how to promote rigorous research methods and how to identify the unique challenges in developing evaluation and assessment methods for creativity research.",
"title": ""
},
{
"docid": "91e38df08894f59e134f83ae532b09e7",
"text": "Many functional network properties of the human brain have been identified during rest and task states, yet it remains unclear how the two relate. We identified a whole-brain network architecture present across dozens of task states that was highly similar to the resting-state network architecture. The most frequent functional connectivity strengths across tasks closely matched the strengths observed at rest, suggesting this is an \"intrinsic,\" standard architecture of functional brain organization. Furthermore, a set of small but consistent changes common across tasks suggests the existence of a task-general network architecture distinguishing task states from rest. These results indicate the brain's functional network architecture during task performance is shaped primarily by an intrinsic network architecture that is also present during rest, and secondarily by evoked task-general and task-specific network changes. This establishes a strong relationship between resting-state functional connectivity and task-evoked functional connectivity-areas of neuroscientific inquiry typically considered separately.",
"title": ""
},
{
"docid": "d9eed063ea6399a8f33c6cbda3a55a62",
"text": "Current and future (conventional) notations used in Conceptual Modeling Techniques should have a precise (formal) semantics to provide a well-defined software development process, in order to go from specification to implementation in an automated way. To achieve this objective, the OO-Method approach to Information Systems Modeling presented in this paper attempts to overcome the conventional (informal)/formal dichotomy by selecting the best ideas from both approaches. The OO-Method makes a clear distinction between the problem space (centered on what the system is) and the solution space (centered on how it is implemented as a software product). It provides a precise, conventional graphical notation to obtain a system description at the problem space level, however this notation is strictly based on a formal OO specification language that determines the conceptual modeling constructs needed to obtain the system specification. An abstract execution model determines how to obtain the software representations corresponding to these conceptual modeling constructs. In this way, the final software product can be obtained in an automated way. © 2001 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "e8167685fcbcea1a4c6a825e50eb45d2",
"text": "Statistical methods have been widely employed to study the fundamental properties of language. In recent years, methods from complex and dynamical systems proved useful to create several language models. Despite the large amount of studies devoted to represent texts with physical models, only a limited number of studies have shown how the properties of the underlying physical systems can be employed to improve the performance of natural language processing tasks. In this paper, I address this problem by devising complex networks methods that are able to improve the performance of current statistical methods. Using a fuzzy classification strategy, I show that the topological properties extracted from texts complement the traditional textual description. In several cases, the performance obtained with hybrid approaches outperformed the results obtained when only traditional or networked methods were used. Because the proposed model is generic, the framework devised here could be straightforwardly used to study similar textual applications where the topology plays a pivotal role in the description of the interacting agents.",
"title": ""
},
{
"docid": "bb8d6adec85cbfd773051052d1051860",
"text": "Permutation methods can provide exact control of false positives and allow the use of non-standard statistics, making only weak assumptions about the data. With the availability of fast and inexpensive computing, their main limitation would be some lack of flexibility to work with arbitrary experimental designs. In this paper we report on results on approximate permutation methods that are more flexible with respect to the experimental design and nuisance variables, and conduct detailed simulations to identify the best method for settings that are typical for imaging research scenarios. We present a generic framework for permutation inference for complex general linear models (GLMS) when the errors are exchangeable and/or have a symmetric distribution, and show that, even in the presence of nuisance effects, these permutation inferences are powerful while providing excellent control of false positives in a wide range of common and relevant imaging research scenarios. We also demonstrate how the inference on GLM parameters, originally intended for independent data, can be used in certain special but useful cases in which independence is violated. Detailed examples of common neuroimaging applications are provided, as well as a complete algorithm - the \"randomise\" algorithm - for permutation inference with the GLM.",
"title": ""
},
{
"docid": "56b706edc6d1b6a2ff64770cb3f79c2e",
"text": "The ancient oriental game of Go has long been considered a grand challenge for artificial intelligence. For decades, computer Go has defied the classical methods in game tree search that worked so successfully for chess and checkers. However, recent play in computer Go has been transformed by a new paradigm for tree search based on Monte-Carlo methods. Programs based on Monte-Carlo tree search now play at human-master levels and are beginning to challenge top professional players. In this paper, we describe the leading algorithms for Monte-Carlo tree search and explain how they have advanced the state of the art in computer Go.",
"title": ""
},
{
"docid": "be7ad6ff14910b8198b1e94003418989",
"text": "An important ability of a robot that interacts with the environment and manipulates objects is to deal with the uncertainty in sensory data. Sensory information is necessary to, for example, perform online assessment of grasp stability. We present methods to assess grasp stability based on haptic data and machine-learning methods, including AdaBoost, support vector machines (SVMs), and hidden Markov models (HMMs). In particular, we study the effect of different sensory streams to grasp stability. This includes object information such as shape; grasp information such as approach vector; tactile measurements from fingertips; and joint configuration of the hand. Sensory knowledge affects the success of the grasping process both in the planning stage (before a grasp is executed) and during the execution of the grasp (closed-loop online control). In this paper, we study both of these aspects. We propose a probabilistic learning framework to assess grasp stability and demonstrate that knowledge about grasp stability can be inferred using information from tactile sensors. Experiments on both simulated and real data are shown. The results indicate that the idea to exploit the learning approach is applicable in realistic scenarios, which opens a number of interesting venues for the future research.",
"title": ""
},
{
"docid": "5d6e1a7dfa5bc4cc1332d225342a01f7",
"text": "Hashing seeks an embedding of high-dimensional objects into a similarity-preserving low-dimensional Hamming space such that similar objects are indexed by binary codes with small Hamming distances. A variety of hashing methods have been developed, but most of them resort to a single view (representation) of data. However, objects are often described by multiple representations. For instance, images are described by a few different visual descriptors (such as SIFT, GIST, and HOG), so it is desirable to incorporate multiple representations into hashing, leading to multi-view hashing. In this paper we present a deep network for multi-view hashing, referred to as deep multi-view hashing, where each layer of hidden nodes is composed of view-specific and shared hidden nodes, in order to learn individual and shared hidden spaces from multiple views of data. Numerical experiments on image datasets demonstrate the useful behavior of our deep multi-view hashing (DMVH), compared to recently-proposed multi-modal deep network as well as existing shallow models of hashing.",
"title": ""
},
{
"docid": "7d4fa882673f142c4faa8a4ff3c2a205",
"text": "This paper presents a different perspective on diversity in search results: diversity by proportionality. We consider a result list most diverse, with respect to some set of topics related to the query, when the number of documents it provides on each topic is proportional to the topic's popularity. Consequently, we propose a framework for optimizing proportionality for search result diversification, which is motivated by the problem of assigning seats to members of competing political parties. Our technique iteratively determines, for each position in the result ranked list, the topic that best maintains the overall proportionality. It then selects the best document on this topic for this position. We demonstrate empirically that our method significantly outperforms the top performing approach in the literature not only on our proposed metric for proportionality, but also on several standard diversity measures. This result indicates that promoting proportionality naturally leads to minimal redundancy, which is a goal of the current diversity approaches.",
"title": ""
},
{
"docid": "f2379daa6c569d797fd000de7e42cae9",
"text": "Critical infrastructure components nowadays use microprocessor-based embedded control systems. It is often infeasible, however, to employ the same level of security measures used in general purpose computing systems, due to the stringent performance and resource constraints of embedded control systems. Furthermore, as software sits atop and relies on the firmware for proper operation, software-level techniques cannot detect malicious behavior of the firmware. In this work, we propose ConFirm, a low-cost technique to detect malicious modifications in the firmware of embedded control systems by measuring the number of low-level hardware events that occur during the execution of the firmware. In order to count these events, ConFirm leverages the Hardware Performance Counters (HPCs), which readily exist in many embedded processors. We evaluate the detection capability and performance overhead of the proposed technique on various types of firmware running on ARM- and PowerPC-based embedded processors. Experimental results demonstrate that ConFirm can detect all the tested modifications with low performance overhead.",
"title": ""
},
{
"docid": "da5c1445453853e23477bfea79fd4605",
"text": "This paper presents an 8-bit column-driver IC with improved deviation of voltage output (DVO) for thin-film-transistor (TFT) liquid crystal displays (LCDs). The various DVO results contributed by the output buffer of a column driver are predicted by using Monte Carlo simulation under different variation conditions. Relying on this prediction, a better compromise can be achieved between DVO and chip size. This work was implemented using 0.35-μm CMOS technology and the measured maximum DVO is only 6.2 mV.",
"title": ""
}
] | scidocsrr |
b490516e04fd2917c9498057d4e20ff7 | Architectures for deep neural network based acoustic models defined over windowed speech waveforms | [
{
"docid": "d12a47e1b72532a3c2c028620eba44d6",
"text": "Mel-filter banks are commonly used in speech recognition, as they are motivated from theory related to speech production and perception. While features derived from mel-filter banks are quite popular, we argue that this filter bank is not really an appropriate choice as it is not learned for the objective at hand, i.e. speech recognition. In this paper, we explore replacing the filter bank with a filter bank layer that is learned jointly with the rest of a deep neural network. Thus, the filter bank is learned to minimize cross-entropy, which is more closely tied to the speech recognition objective. On a 50-hour English Broadcast News task, we show that we can achieve a 5% relative improvement in word error rate (WER) using the filter bank learning approach, compared to having a fixed set of filters.",
"title": ""
}
] | [
{
"docid": "68118c94d8e00031a7c9996ab282881f",
"text": "A cascadable power-on-reset (POR) delay element consuming nanowatts of peak power was developed to be used in very compact power-on-reset pulse generator (POR-PG) circuits. Operation principles and features of the POR delay element are presented in this paper. The delay element was designed and fabricated in a 0.5µm 2P3M CMOS process. It was determined from simulation as well as measurement results that the delay element works over wide supply voltage ranges between 1.8 volt and 5 volt and supply voltage rise times between 100nsec and 1msec, allowing wide dynamic range POR-PG circuits. It also has a very small silicon footprint. Layout size of a single POR delay element was 35µm x 55µm in the 0.5µm CMOS process.",
"title": ""
},
{
"docid": "f4cf5ac351005975bc8244497a45bc70",
"text": "This paper demonstrates the co-optimization of all critical device parameters of perpendicular magnetic tunnel junctions (pMTJ) in 1 Gbit arrays with an equivalent bitcell size of 22 F2 at the 28 nm logic node for embedded STT-MRAM. Through thin-film tuning and advanced etching of sub-50 nm (diameter) pMTJ, high device performance and reliability were achieved simultaneously, including TMR = 150 %, Hc > 1350 Oe, Heff <; 100 Oe, Δ = 85, Ic (35 ns) = 94 μA, Vbreakdown = 1.5 V, and high endurance (> 1012 write cycles). Reliable switching with small temporal variations (<; 5 %) was obtained down to 10 ns. In addition, tunnel barrier integrity and high temperature device characteristics were investigated in order to ensure reliable STT-MRAM operation.",
"title": ""
},
{
"docid": "596949afaabdbcc68cd8bda175400f30",
"text": "We propose improved Deep Neural Network (DNN) training loss functions for more accurate single keyword spotting on resource-constrained embedded devices. The loss function modifications consist of a combination of multi-task training and weighted cross entropy. In the multi-task architecture, the keyword DNN acoustic model is trained with two tasks in parallel: the main task of predicting the keyword-specific phone states, and an auxiliary task of predicting LVCSR senones. We show that multi-task learning leads to comparable accuracy over a previously proposed transfer learning approach where the keyword DNN training is initialized by an LVCSR DNN of the same input and hidden layer sizes. The combination of LVCSR-initialization and multi-task training gives improved keyword detection accuracy compared to either technique alone. We also propose modifying the loss function to give a higher weight on input frames corresponding to keyword phone targets, with a motivation to balance the keyword and background training data. We show that weighted cross-entropy results in additional accuracy improvements. Finally, we show that the combination of three techniques (LVCSR-initialization, multi-task training, and weighted cross-entropy) gives the best results, with significantly lower False Alarm Rate than the LVCSR-initialization technique alone, across a wide range of Miss Rates.",
"title": ""
},
{
"docid": "308e06ce00b1dfaf731b1a91e7c56836",
"text": "OBJECTIVE\nTo systematically review the literature regarding how statistical process control--with control charts as a core tool--has been applied to healthcare quality improvement, and to examine the benefits, limitations, barriers and facilitating factors related to such application.\n\n\nDATA SOURCES\nOriginal articles found in relevant databases, including Web of Science and Medline, covering the period 1966 to June 2004.\n\n\nSTUDY SELECTION\nFrom 311 articles, 57 empirical studies, published between 1990 and 2004, met the inclusion criteria.\n\n\nMETHODS\nA standardised data abstraction form was used for extracting data relevant to the review questions, and the data were analysed thematically.\n\n\nRESULTS\nStatistical process control was applied in a wide range of settings and specialties, at diverse levels of organisation and directly by patients, using 97 different variables. The review revealed 12 categories of benefits, 6 categories of limitations, 10 categories of barriers, and 23 factors that facilitate its application and all are fully referenced in this report. Statistical process control helped different actors manage change and improve healthcare processes. It also enabled patients with, for example asthma or diabetes mellitus, to manage their own health, and thus has therapeutic qualities. Its power hinges on correct and smart application, which is not necessarily a trivial task. This review catalogs 11 approaches to such smart application, including risk adjustment and data stratification.\n\n\nCONCLUSION\nStatistical process control is a versatile tool which can help diverse stakeholders to manage change in healthcare and improve patients' health.",
"title": ""
},
{
"docid": "232a9a83cea93e5d8cdfb6eff0c1c256",
"text": "We present a method for performing hierarchical object detection in images guided by a deep reinforcement learning agent. The key idea is to focus on those parts of the image that contain richer information and zoom on them. We train an intelligent agent that, given an image window, is capable of deciding where to focus the attention among five different predefined region candidates (smaller windows). This procedure is iterated providing a hierarchical image analysis. We compare two different candidate proposal strategies to guide the object search: with and without overlap. Moreover, our work compares two different strategies to extract features from a convolutional neural network for each region proposal: a first one that computes new feature maps for each region proposal, and a second one that computes the feature maps for the whole image to later generate crops for each region proposal. Experiments indicate better results for the overlapping candidate proposal strategy and a loss of performance for the cropped image features due to the loss of spatial resolution. We argue that, while this loss seems unavoidable when working with large amounts of object candidates, the much more reduced amount of region proposals generated by our reinforcement learning agent allows considering to extract features for each location without sharing convolutional computation among regions. Source code and models are available at https://imatge-upc.github.io/detection-2016-nipsws/.",
"title": ""
},
{
"docid": "eec15a5d14082d625824452bd070ec38",
"text": "Food waste is a major environmental issue. Expired products are thrown away, implying that too much food is ordered compared to what is sold and that a more accurate prediction model is required within grocery stores. In this study the two prediction models Long Short-Term Memory (LSTM) and Autoregressive Integrated Moving Average (ARIMA) were compared on their prediction accuracy in two scenarios, given sales data for different products, to observe if LSTM is a model that can compete against the ARIMA model in the field of sales forecasting in retail. In the first scenario the models predict sales for one day ahead using given data, while they in the second scenario predict each day for a week ahead. Using the evaluation measures RMSE and MAE together with a t-test the results show that the difference between the LSTM and ARIMA model is not of statistical significance in the scenario of predicting one day ahead. However when predicting seven days ahead, the results show that there is a statistical significance in the difference indicating that the LSTM model has higher accuracy. This study therefore concludes that the LSTM model is promising in the field of sales forecasting in retail and able to compete against the ARIMA model.",
"title": ""
},
{
"docid": "0f111ec5556abf9bfbfcaeefaab61da1",
"text": "The rise of Natural Language Processing (NLP) opened new possibilities for various applications that were not applicable before. A morphological-rich language such as Arabic introduces a set of features, such as roots, that would assist the progress of NLP. Many tools were developed to capture the process of root extraction (stemming). Stemmers have improved many NLP tasks without explicit knowledge about its stemming accuracy. In this paper, a study is conducted to evaluate various Arabic stemmers. The study is done as a series of comparisons using a manually annotated dataset, which shows the efficiency of Arabic stemmers, and points out potential improvements to existing stemmers. The paper also presents enhanced root extractors by using light stemmers as a preprocessing phase.",
"title": ""
},
{
"docid": "e458ba119fe15f17aa658c5b42a21e2b",
"text": "In this paper, with the help of controllable active near-infrared (NIR) lights, we construct near-infrared differential (NIRD) images. Based on reflection model, NIRD image is believed to contain the lighting difference between images with and without active NIR lights. Two main characteristics based on NIRD images are exploited to conduct spoofing detection. Firstly, there exist obviously spoofing media around the faces in most conditions, which reflect incident lights in almost the same way as the face areas do. We analyze the pixel consistency between face and non-face areas and employ context clues to distinguish the spoofing images. Then, lighting feature, extracted only from face areas, is utilized to detect spoofing attacks of deliberately cropped medium. Merging the two features, we present a face spoofing detection system. In several experiments on self collected datasets with different spoofing media, we demonstrate the excellent results and robustness of proposed method.",
"title": ""
},
{
"docid": "0efecea75d3821a5710f3de91986f119",
"text": "Atherosclerosis is a chronic inflammatory disease, and is the primary cause of heart disease and stroke in Western countries. Derivatives of cannabinoids such as delta-9-tetrahydrocannabinol (THC) modulate immune functions and therefore have potential for the treatment of inflammatory diseases. We investigated the effects of THC in a murine model of established atherosclerosis. Oral administration of THC (1 mg kg⁻¹ per day) resulted in significant inhibition of disease progression. This effective dose is lower than the dose usually associated with psychotropic effects of THC. Furthermore, we detected the CB2 receptor (the main cannabinoid receptor expressed on immune cells) in both human and mouse atherosclerotic plaques. Lymphoid cells isolated from THC-treated mice showed diminished proliferation capacity and decreased interferon-γ secretion. Macrophage chemotaxis, which is a crucial step for the development of atherosclerosis, was also inhibited in vitro by THC. All these effects were completely blocked by a specific CB2 receptor antagonist. Our data demonstrate that oral treatment with a low dose of THC inhibits atherosclerosis progression in the apolipoprotein E knockout mouse model, through pleiotropic immunomodulatory effects on lymphoid and myeloid cells. Thus, THC or cannabinoids with activity at the CB2 receptor may be valuable targets for treating atherosclerosis.",
"title": ""
},
{
"docid": "00ec0bc711e38e6e5a3281dbd71d02f9",
"text": "The magnitude of recent combat blast injuries sustained by forces fighting in Afghanistan has escalated to new levels with more troops surviving higher-energy trauma. The most complex and challenging injury pattern is the emerging frequency of high-energy IED casualties presenting in extremis with traumatic bilateral lower extremity amputations with and without pelvic and perineal blast involvement. These patients require a coordinated effort of advanced trauma and surgical care from the point of injury through definitive management. Early survival is predicated upon a balance of life-saving damage control surgery and haemostatic resuscitation. Emergent operative intervention is critical with timely surgical hemostasis, adequate wound decontamination, revision amputations, and pelvic fracture stabilization. Efficient index surgical management is paramount to prevent further physiologic insult, and a team of orthopaedic and general surgeons operating concurrently may effectively achieve this. Despite the extent and complexity, these are survivable injuries but long-term followup is necessary.",
"title": ""
},
{
"docid": "3e177f8b02a5d67c7f4d93ce601c4539",
"text": "This research proposes an approach for text classification that uses a simple neural network called Dynamic Text Classifier Neural Network (DTCNN). The neural network uses as input vectors of words with variable dimension without information loss, called Dynamic Token Vectors (DTV). The proposed neural network is designed for the classification of long and short texts into categories. The learning process combines competitive and Hebbian learning. Due to the combination of these learning rules, the neural network is able to work in a supervised or semi-supervised mode. In addition, it provides transparency in the classification. The network used in this paper is quite simple, and that is what makes it sufficient for its task. The results of evaluating the proposed method show an improvement in the text classification problem using the DTCNN compared to baseline approaches.",
"title": ""
},
{
"docid": "9de4cfbd662dc9ba2621722b7aef7bac",
"text": "The centromere position is an important feature in analyzing chromosomes and making karyograms. In the field of chromosome analysis, the accurate determination of the centromere from the segmented chromosome image is a challenging task. A karyogram is an arrangement of the 46 chromosomes, used for identifying many genetic disorders, various abnormalities and cancers. Many algorithms exist to detect centromere positions, but most of them cannot be applied to all chromosomes because of their orientation in metaphase. Here we propose a novel algorithm that combines rules based on morphological features of chromosomes with a GLM mask and a rotation procedure. The algorithm is tested on a publicly available database (LK1) and images collected from RCC Trivandrum.",
"title": ""
},
{
"docid": "fdab4af34adebd0d682134f3cf13d794",
"text": "Threat evaluation (TE) is a process used to assess the threat values (TVs) of air-breathing threats (ABTs), such as air fighters, that are approaching defended assets (DAs). This study proposes an automatic method for conducting TE using radar information when ABTs infiltrate into territory where DAs are located. The method consists of target asset (TA) prediction and TE. We divide a friendly territory into discrete cells based on the effective range of anti-aircraft missiles. The TA prediction identifies the TA of each ABT by predicting the ABT’s movement through cells in the territory via a Markov chain, and the cell transition is modeled by neural networks. We calculate the TVs of the ABTs based on the TA prediction results. A simulation-based experiment revealed that the proposed method outperformed TE based on the closest point of approach or the radial speed vector methods. © 2016 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "4e7443088eedf5e6199959a06ebc420c",
"text": "The development of computational-intelligence based strategies for electronic markets has been the focus of intense research. In order to be able to design efficient and effective automated trading strategies, one first needs to understand the workings of the market, the strategies that traders use and their interactions as well as the patterns emerging as a result of these interactions. In this paper, we develop an agent-based model of the FX market which is the market for the buying and selling of currencies. Our agent-based model of the FX market (ABFXM) comprises heterogeneous trading agents which employ a strategy that identifies and responds to periodic patterns in the price time series. We use the ABFXM to undertake a systematic exploration of its constituent elements and their impact on the stylized facts (statistical patterns) of transactions data. This enables us to identify a set of sufficient conditions which result in the emergence of the stylized facts similarly to the real market data, and formulate a model which closely approximates the stylized facts. We use a unique high frequency dataset of historical transactions data which enables us to run multiple simulation runs and validate our approach and draw comparisons and conclusions for each market setting.",
"title": ""
},
{
"docid": "7115c9872b05a20efeaafaaed7c2e173",
"text": "Today, bibliographic digital libraries play an important role in helping members of academic community search for novel research. In particular, author disambiguation for citations is a major problem during the data integration and cleaning process, since author names are usually very ambiguous. For solving this problem, we proposed two kinds of correlations between citations, namely, Topic Correlation and Web Correlation, to exploit relationships between citations, in order to identify whether two citations with the same author name refer to the same individual. The topic correlation measures the similarity between research topics of two citations; while the Web correlation measures the number of co-occurrence in web pages. We employ a pair-wise grouping algorithm to group citations into clusters. The results of experiments show that the disambiguation accuracy has great improvement when using topic correlation and Web correlation, and Web correlation provides stronger evidences about the authors of citations.",
"title": ""
},
{
"docid": "785b42fe7765d415dcfef09a6142aa6f",
"text": "In this paper a first approach for digital media forensics is presented to determine the used microphones and the environments of recorded digital audio samples by using known audio steganalysis features. Our first evaluation is based on a limited exemplary test set of 10 different audio reference signals recorded as mono audio data by four microphones in 10 different rooms with 44.1 kHz sampling rate and 16 bit quantisation. Note that, of course, a generalisation of the results cannot be achieved. Motivated by the syntactical and semantical analysis of information and in particular by known audio steganalysis approaches, a first set of specific features is selected for classification to evaluate whether this first feature set can support correct classifications. The idea was mainly driven by the existing steganalysis features and the question of applicability within a first and limited test set. In the tests presented in this paper, an inter-device analysis with different device characteristics is performed while intra-device evaluations (identical microphone models of the same manufacturer) are not considered. For classification, the data mining tool WEKA with K-means as a clustering and Naive Bayes as a classification technique is applied with the goal of evaluating their classification accuracy on known audio steganalysis features. Our results show that, for our test set with the used classification techniques and selected steganalysis features, microphones can be better classified than environments. These first tests show promising results but of course are based on a limited test and training set as well as a specific test set generation. Therefore, additional and enhanced features with different test set generation strategies are necessary to generalise the findings.",
"title": ""
},
{
"docid": "4551c05bbf8969d310d548d5a773f584",
"text": "Optical testing of advanced CMOS circuits successfully exploits the near-infrared photon emission by hot-carriers in transistor channels (see EMMI (Ng et al., 1999) and PICA (Kash and Tsang, 1997) (Song et al., 2005) techniques). However, due to the continuous scaling of feature size and supply voltage, spontaneous emission is becoming fainter and optical circuit diagnostics becomes more challenging. Here we present the experimental characterization of hot-carrier luminescence emitted by transistors in four CMOS technologies from two different manufacturers. The aim of the research is to gain a better perspective on emission trends and dependences on technological parameters. In particular, we identify luminescence changes due to short-channel effects (SCE) and we ascertain that, for each technology node, there are two operating regions, for short- and long-channels. We highlight the emission reduction of p-FETs compared to n-FETs, due to a \"red-shift\" (lower energy) of the hot-carrier distribution. Finally, we give perspectives about emission trends in actual and future technology nodes, showing that luminescence dramatically decreases with voltage, but it recovers strength when moving from older to more advanced technology generations. Such results extend the applicability of optical testing techniques, based on present single-photon detectors, to future low-voltage chips.",
"title": ""
},
{
"docid": "c76f44cd62651b068de9bdb5eec80f23",
"text": "Currently, audience measurement reports of television programs are only available after a significant period of time, for example as a daily report. This paper proposes an architecture for real time measurement of television audience. Real time measurement can give channel owners and advertisers important information that can positively impact their business. We show that television viewership can be captured by set top box devices which detect the channel logo and transmit the viewership data to a server over internet. The server processes the viewership data and displays it in real time on a web-based dashboard. In addition, it has facility to display charts of hourly and location-wise viewership trends and online TRP (Television Rating Points) reports. The server infrastructure consists of in-memory database, reporting and charting libraries and J2EE based application server.",
"title": ""
},
{
"docid": "a1f4b4c6e98e6b5e8b7f939318a5e808",
"text": "A new hardware scheme for computing the transition and control matrix of a parallel cyclic redundancy checksum is proposed. This opens possibilities for parallel high-speed cyclic redundancy checksum circuits that reconfigure very rapidly to new polynomials. The area requirements are lower than those for a realization storing a precomputed matrix. An additional simplification arises as only the polynomial needs to be supplied. The derived equations allow the width of the data to be processed in parallel to be selected independently of the degree of the polynomial. The new design has been simulated and outperforms a recently proposed architecture significantly in speed, area, and energy efficiency.",
"title": ""
},
{
"docid": "77f83ada0854e34ac60c725c21671434",
"text": "OBJECTIVES\nThis subanalysis of the TNT (Treating to New Targets) study investigates the effects of intensive lipid lowering with atorvastatin in patients with coronary heart disease (CHD) with and without pre-existing chronic kidney disease (CKD).\n\n\nBACKGROUND\nCardiovascular disease is a major cause of morbidity and mortality in patients with CKD.\n\n\nMETHODS\nA total of 10,001 patients with CHD were randomized to double-blind therapy with atorvastatin 80 mg/day or 10 mg/day. Patients with CKD were identified at baseline on the basis of an estimated glomerular filtration rate (eGFR) <60 ml/min/1.73 m(2) using the Modification of Diet in Renal Disease equation. The primary efficacy outcome was time to first major cardiovascular event.\n\n\nRESULTS\nOf 9,656 patients with complete renal data, 3,107 had CKD at baseline and demonstrated greater cardiovascular comorbidity than those with normal eGFR (n = 6,549). After a median follow-up of 5.0 years, 351 patients with CKD (11.3%) experienced a major cardiovascular event, compared with 561 patients with normal eGFR (8.6%) (hazard ratio [HR] = 1.35; 95% confidence interval [CI] 1.18 to 1.54; p < 0.0001). Compared with atorvastatin 10 mg, atorvastatin 80 mg reduced the relative risk of major cardiovascular events by 32% in patients with CKD (HR = 0.68; 95% CI 0.55 to 0.84; p = 0.0003) and 15% in patients with normal eGFR (HR = 0.85; 95% CI 0.72 to 1.00; p = 0.049). Both doses of atorvastatin were well tolerated in patients with CKD.\n\n\nCONCLUSIONS\nAggressive lipid lowering with atorvastatin 80 mg was both safe and effective in reducing the excess of cardiovascular events in a high-risk population with CKD and CHD.",
"title": ""
}
] | scidocsrr |
8626d44237740695b8dd963290f7f0b9 | Influence Maximization Across Partially Aligned Heterogenous Social Networks | [
{
"docid": "b9daa134744b8db757fc0857f479bd70",
"text": "Influence is a complex and subtle force that governs the dynamics of social networks as well as the behaviors of involved users. Understanding influence can benefit various applications such as viral marketing, recommendation, and information retrieval. However, most existing works on social influence analysis have focused on verifying the existence of social influence. Few works systematically investigate how to mine the strength of direct and indirect influence between nodes in heterogeneous networks.\n To address the problem, we propose a generative graphical model which utilizes the heterogeneous link information and the textual content associated with each node in the network to mine topic-level direct influence. Based on the learned direct influence, a topic-level influence propagation and aggregation algorithm is proposed to derive the indirect influence between nodes. We further study how the discovered topic-level influence can help the prediction of user behaviors. We validate the approach on three different genres of data sets: Twitter, Digg, and citation networks. Qualitatively, our approach can discover interesting influence patterns in heterogeneous networks. Quantitatively, the learned topic-level influence can greatly improve the accuracy of user behavior prediction.",
"title": ""
},
{
"docid": "ee25e4acd98193e7dc3f89f3f98e42e0",
"text": "Kempe et al. [4] (KKT) showed the problem of influence maximization is NP-hard and a simple greedy algorithm guarantees the best possible approximation factor in PTIME. However, it has two major sources of inefficiency. First, finding the expected spread of a node set is #P-hard. Second, the basic greedy algorithm is quadratic in the number of nodes. The first source is tackled by estimating the spread using Monte Carlo simulation or by using heuristics [4, 6, 2, 5, 1, 3]. Leskovec et al. proposed the CELF algorithm for tackling the second. In this work, we propose CELF++ and empirically show that it is 35-55% faster than CELF.",
"title": ""
}
] | [
{
"docid": "e795381a345bf3cab74ddfd4d4763c1e",
"text": "Context: Recent research discusses the use of ontologies, dictionaries and thesaurus as a means to improve activity labels of process models. However, the trade-off between quality improvement and extra effort is still an open question. It is suspected that ontology-based support could require additional effort for the modeler. Objective: In this paper, we investigate to which degree ontology-based support potentially increases the effort of modeling. We develop a theoretical perspective grounded in cognitive psychology, which leads us to the definition of three design principles for appropriate ontology-based support. The objective is to evaluate the design principles through empirical experimentation. Method: We tested the effect of presenting relevant content from the ontology to the modeler by means of a quantitative analysis. We performed controlled experiments using a prototype, which generates a simplified and context-aware visual representation of the ontology. It logs every action of the process modeler for analysis. The experiment refers to novice modelers and was performed as between-subject design with vs. without ontology-based support. It was carried out with two different samples. Results: Part of the effort-related variables we measured showed significant statistical difference between the group with and without ontology-based support. Overall, for the collected data, the ontology support achieved good results. Conclusion: We conclude that it is feasible to provide ontology-based support to the modeler in order to improve process modeling without strongly compromising time consumption and cognitive effort.",
"title": ""
},
{
"docid": "c10a58037c4b13953236831af304e660",
"text": "A 32 nm generation logic technology is described incorporating 2nd-generation high-k + metal-gate technology, 193 nm immersion lithography for critical patterning layers, and enhanced channel strain techniques. The transistors feature 9 Å EOT high-k gate dielectric, dual band-edge workfunction metal gates, and 4th-generation strained silicon, resulting in the highest drive currents yet reported for NMOS and PMOS. Process yield, performance and reliability are demonstrated on a 291 Mbit SRAM test vehicle, with 0.171 µm² cell size, containing >1.9 billion transistors.",
"title": ""
},
{
"docid": "d90add899632bab1c5c2637c7080f717",
"text": "Software testing plays an important role in software development because it can minimize the development cost. We propose a technique for test sequence generation using UML sequence diagrams. UML models give a lot of information that should not be ignored in testing. In this paper, the main features are extracted from the sequence diagram, and Java source code is then written for those features according to the ModelJUnit library. ModelJUnit is an extended library of JUnit. By using that source code, we can automatically generate test cases and test coverage. This paper describes a systematic test case generation technique based on model-based testing (MBT) approaches using sequence diagrams.",
"title": ""
},
{
"docid": "ef77d042a04b7fa704f13a0fa5e73688",
"text": "The nature of the cellular basis of learning and memory remains an often-discussed, but elusive problem in neurobiology. A popular model for the physiological mechanisms underlying learning and memory postulates that memories are stored by alterations in the strength of neuronal connections within the appropriate neural circuitry. Thus, an understanding of the cellular and molecular basis of synaptic plasticity will expand our knowledge of the molecular basis of learning and memory. The view that learning was the result of altered synaptic weights was first proposed by Ramon y Cajal in 1911 and formalized by Donald O. Hebb. In 1949, Hebb proposed his \"learning rule,\" which suggested that alterations in the strength of synapses would occur between two neurons when those neurons were active simultaneously (1). Hebb's original postulate focused on the need for synaptic activity to lead to the generation of action potentials in the postsynaptic neuron, although more recent work has extended this to include local depolarization at the synapse. One problem with testing this hypothesis is that it has been difficult to record directly the activity of single synapses in a behaving animal. Thus, the challenge in the field has been to relate changes in synaptic efficacy to specific behavioral instances of associative learning. In this chapter, we will review the relationship among synaptic plasticity, learning, and memory. We will examine the extent to which various current models of neuronal plasticity provide potential bases for memory storage and we will explore some of the signal transduction pathways that are critically important for long-term memory storage. We will focus on two systems—the gill and siphon withdrawal reflex of the invertebrate Aplysia californica and the mammalian hippocampus—and discuss the abilities of models of synaptic plasticity and learning to account for a range of genetic, pharmacological, and behavioral data.",
"title": ""
},
{
"docid": "d51408ad40bdc9a3a846aaf7da907cef",
"text": "Accessing online information from various data sources has become a necessary part of our everyday life. Unfortunately such information is not always trustworthy, as different sources are of very different qualities and often provide inaccurate and conflicting information. Existing approaches attack this problem using unsupervised learning methods, and try to infer the confidence of the data value and trustworthiness of each source from each other by assuming values provided by more sources are more accurate. However, because false values can be widespread through copying among different sources and out-of-date data often overwhelm up-to-date data, such bootstrapping methods are often ineffective.\n In this paper we propose a semi-supervised approach that finds true values with the help of ground truth data. Such ground truth data, even in very small amount, can greatly help us identify trustworthy data sources. Unlike existing studies that only provide iterative algorithms, we derive the optimal solution to our problem and provide an iterative algorithm that converges to it. Experiments show our method achieves higher accuracy than existing approaches, and it can be applied on very huge data sets when implemented with MapReduce.",
"title": ""
},
{
"docid": "bea412d20a95c853fe06e7640acb9158",
"text": "We propose a novel approach to synthesizing images that are effective for training object detectors. Starting from a small set of real images, our algorithm estimates the rendering parameters required to synthesize similar images given a coarse 3D model of the target object. These parameters can then be reused to generate an unlimited number of training images of the object of interest in arbitrary 3D poses, which can then be used to increase classification performances. A key insight of our approach is that the synthetically generated images should be similar to real images, not in terms of image quality, but rather in terms of features used during the detector training. We show in the context of drone, plane, and car detection that using such synthetically generated images yields significantly better performances than simply perturbing real images or even synthesizing images in such way that they look very realistic, as is often done when only limited amounts of training data are available. 2015 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "169db6ecec2243e3566079cd473c7afe",
"text": "Aspect-level sentiment classification is a fine-grained task in sentiment analysis. Since it provides more complete and in-depth results, aspect-level sentiment analysis has received much attention these years. In this paper, we reveal that the sentiment polarity of a sentence is not only determined by the content but is also highly related to the concerned aspect. For instance, “The appetizers are ok, but the service is slow.”, for aspect taste, the polarity is positive while for service, the polarity is negative. Therefore, it is worthwhile to explore the connection between an aspect and the content of a sentence. To this end, we propose an Attention-based Long Short-Term Memory Network for aspect-level sentiment classification. The attention mechanism can concentrate on different parts of a sentence when different aspects are taken as input. We experiment on the SemEval 2014 dataset and results show that our model achieves state-of-the-art performance on aspect-level sentiment classification.",
"title": ""
},
{
"docid": "cdd27bbcbab81a243dda6bb855fb8f72",
"text": "The Internet of Things (IoT), which can be regarded as an enhanced version of machine-to-machine communication technology, was proposed to realize intelligent thing-to-thing communications by utilizing the Internet connectivity. In the IoT, \"things\" are generally heterogeneous and resource constrained. In addition, such things are connected to each other over low-power and lossy networks. In this paper, we propose an inter-device authentication and session-key distribution system for devices with only encryption modules. In the proposed system, unlike existing sensor-network environments where the key distribution center distributes the key, each sensor node is involved with the generation of session keys. In addition, in the proposed scheme, the performance is improved so that the authenticated device can calculate the session key in advance. The proposed mutual authentication and session-key distribution system can withstand replay attacks, man-in-the-middle attacks, and wiretapped secret-key attacks.",
"title": ""
},
{
"docid": "2bf48ea6d0fd3bd4776dc0a90e89254b",
"text": "OBJECTIVES\nTo test whether individual differences in gratitude are related to sleep after controlling for neuroticism and other traits. To test whether pre-sleep cognitions are the mechanism underlying this relationship.\n\n\nMETHOD\nA cross-sectional questionnaire study was conducted with a large (186 males, 215 females) community sample (ages=18-68 years, mean=24.89, S.D.=9.02), including 161 people (40%) scoring above 5 on the Pittsburgh Sleep Quality Index, indicating clinically impaired sleep. Measures included gratitude, the Pittsburgh Sleep Quality Index (PSQI), self-statement test of pre-sleep cognitions, the Mini-IPIP scales of Big Five personality traits, and the Social Desirability Scale.\n\n\nRESULTS\nGratitude predicted greater subjective sleep quality and sleep duration, and less sleep latency and daytime dysfunction. The relationship between gratitude and each of the sleep variables was mediated by more positive pre-sleep cognitions and less negative pre-sleep cognitions. All of the results were independent of the effect of the Big Five personality traits (including neuroticism) and social desirability.\n\n\nCONCLUSION\nThis is the first study to show that a positive trait is related to good sleep quality above the effect of other personality traits, and to test whether pre-sleep cognitions are the mechanism underlying the relationship between any personality trait and sleep. The study is also the first to show that trait gratitude is related to sleep and to explain why this occurs, suggesting future directions for research, and novel clinical implications.",
"title": ""
},
{
"docid": "1d3192e66e042e67dabeae96ca345def",
"text": "Privacy-enhancing technologies (PETs), which constitute a wide array of technical means for protecting users’ privacy, have gained considerable momentum in both academia and industry. However, existing surveys of PETs fail to delineate what sorts of privacy the described technologies enhance, which makes it difficult to differentiate between the various PETs. Moreover, those surveys could not consider very recent important developments with regard to PET solutions. The goal of this chapter is two-fold. First, we provide an analytical framework to differentiate various PETs. This analytical framework consists of high-level privacy principles and concrete privacy concerns. Secondly, we use this framework to evaluate representative up-to-date PETs, specifically with regard to the privacy concerns they address, and how they address them (i.e., what privacy principles they follow). Based on findings of the evaluation, we outline several future research directions.",
"title": ""
},
{
"docid": "f6388d37976740ebb789e7d5f6c072f1",
"text": "With the advent of image and video representation of visual scenes in digital computer, subsequent necessity of vision-substitution representation of a given image is felt. The medium for non-visual representation of an image is chosen to be sound due to well developed auditory sensing ability of human beings and wide availability of cheap audio hardware. Visionary information of an image can be conveyed to blind and partially sighted persons through auditory representation of the image within some of the known limitations of human hearing system. The research regarding image sonification has mostly evolved through last three decades. The paper also discusses in brief about the reverse mapping, termed as sound visualization. This survey approaches to summarize the methodologies and issues of the implemented and unimplemented experimental systems developed for subjective sonification of image scenes and let researchers accumulate knowledge about the previous direction of researches in this domain.",
"title": ""
},
{
"docid": "adc03d95eea19cede1ea91aae733943b",
"text": "In this paper, we discuss the emerging application of device-free localization (DFL) using wireless sensor networks, which find people and objects in the environment in which the network is deployed, even in buildings and through walls. These networks are termed “RF sensor networks” because the wireless network itself is the sensor, using radio-frequency (RF) signals to probe the deployment area. DFL in cluttered multipath environments has been shown to be feasible, and in fact benefits from rich multipath channels. We describe modalities of measurements made by RF sensors, the statistical models which relate a person's position to channel measurements, and describe research progress in this area.",
"title": ""
},
{
"docid": "45043fe3e4aa28daddea21c6546e7640",
"text": "The Booth multiplier has been widely used for high performance signed multiplication by encoding and thereby reducing the number of partial products. A multiplier using the radix-<inline-formula><tex-math notation=\"LaTeX\">$4$ </tex-math><alternatives><inline-graphic xlink:type=\"simple\" xlink:href=\"han-ieq1-2493547.gif\"/></alternatives></inline-formula> (or modified Booth) algorithm is very efficient due to the ease of partial product generation, whereas the radix- <inline-formula><tex-math notation=\"LaTeX\">$8$</tex-math><alternatives> <inline-graphic xlink:type=\"simple\" xlink:href=\"han-ieq2-2493547.gif\"/></alternatives></inline-formula> Booth multiplier is slow due to the complexity of generating the odd multiples of the multiplicand. In this paper, this issue is alleviated by the application of approximate designs. An approximate <inline-formula><tex-math notation=\"LaTeX\">$2$</tex-math> <alternatives><inline-graphic xlink:type=\"simple\" xlink:href=\"han-ieq3-2493547.gif\"/></alternatives></inline-formula>-bit adder is deliberately designed for calculating the sum of <inline-formula><tex-math notation=\"LaTeX\">$1\\times$</tex-math> <alternatives><inline-graphic xlink:type=\"simple\" xlink:href=\"han-ieq4-2493547.gif\"/></alternatives></inline-formula> and <inline-formula> <tex-math notation=\"LaTeX\">$2\\times$</tex-math><alternatives><inline-graphic xlink:type=\"simple\" xlink:href=\"han-ieq5-2493547.gif\"/> </alternatives></inline-formula> of a binary number. This adder requires a small area, a low power and a short critical path delay. Subsequently, the <inline-formula><tex-math notation=\"LaTeX\">$2$</tex-math><alternatives> <inline-graphic xlink:type=\"simple\" xlink:href=\"han-ieq6-2493547.gif\"/></alternatives></inline-formula>-bit adder is employed to implement the less significant section of a recoding adder for generating the triple multiplicand with no carry propagation. 
In the pursuit of a trade-off between accuracy and power consumption, two signed <inline-formula> <tex-math notation=\"LaTeX\">$16\\times 16$</tex-math><alternatives><inline-graphic xlink:type=\"simple\" xlink:href=\"han-ieq7-2493547.gif\"/> </alternatives></inline-formula> bit approximate radix-8 Booth multipliers are designed using the approximate recoding adder with and without the truncation of a number of less significant bits in the partial products. The proposed approximate multipliers are faster and more power efficient than the accurate Booth multiplier. The multiplier with 15-bit truncation achieves the best overall performance in terms of hardware and accuracy when compared to other approximate Booth multiplier designs. Finally, the approximate multipliers are applied to the design of a low-pass FIR filter and they show better performance than other approximate Booth multipliers.",
"title": ""
},
{
"docid": "30dfcf624badf766c3c7070548a47af4",
"text": "The primary purpose of this paper is to stimulate discussion about a research agenda for a new interdisciplinary field. This field-the study of coordination-draws upon a variety of different disciplines including computer science, organization theory, management science, economics, and psychology. Work in this new area will include developing a body of scientific theory, which we will call \"coordination theory,\" about how the activities of separate actors can be coordinated. One important use for coordination theory will be in developing and using computer and communication systems to help people coordinate their activities in new ways. We will call these systems \"coordination technology.\" Rationale There are four reasons why work in this area is timely: (1) In recent years, large numbers of people have acquired direct access to computers. These computers are now beginning to be connected to each other. Therefore, we now have, for the first time, an opportunity for vastly larger numbers of people to use computing and communications capabilities to help coordinate their work. For example, specialized new software has been developed to (a) support multiple authors working together on the same document, (b) help people display and manipulate information more effectively in face-to-face meetings, and (c) help people intelligently route and process electronic messages. It already appears likely that there will be commercially successful products of this new type (often called \"computer supported cooperative work\" or \"groupware\"), and to some observers these applications herald a paradigm shift in computer usage as significant as the earlier shifts to time-sharing and personal computing. 
It is less clear whether the continuing development of new computer applications in this area will depend solely on the intuitions of successful designers or whether it will also be guided by a coherent underlying theory of how people coordinate their activities now and how they might do so differently with computer support. (2) In the long run, the dramatic improvements in the costs and capabilities of information technologies are changing-by orders of magnitude-the constraints on how certain kinds of communication and coordination can occur. At the same time, there is a pervasive feeling in American business that the pace of change is accelerating and that we need to create more flexible and adaptive organizations. Together, these changes may soon lead us across a threshold where entirely new ways of organizing human activities become desirable. For example, new capabilities for communicating information faster, less expensively, and …",
"title": ""
},
{
"docid": "c0650814388c7e1de19ee6e668d40e69",
"text": "In this paper we consider persuasion in the context of practical reasoning, and discuss the problems associated with construing reasoning about actions in a manner similar to reasoning about beliefs. We propose a perspective on practical reasoning as presumptive justification of a course of action, along with critical questions of this justification, building on the account of Walton. From this perspective, we articulate an interaction protocol, which we call PARMA, for dialogues over proposed actions based on this theory. We outline an axiomatic semantics for the PARMA Protocol, and discuss two implementations which use this protocol to mediate a discussion between humans. We then show how our proposal can be made computational within the framework of agents based on the Belief-Desire-Intention model, and illustrate this proposal with an example debate within a multi agent system.",
"title": ""
},
{
"docid": "886c284d72a01db9bc4eb9467e14bbbb",
"text": "The Bitcoin cryptocurrency introduced a novel distributed consensus mechanism relying on economic incentives. While a coalition controlling a majority of computational power may undermine the system, for example by double-spending funds, it is often assumed it would be incentivized not to attack to protect its long-term stake in the health of the currency. We show how an attacker might purchase mining power (perhaps at a cost premium) for a short duration via bribery. Indeed, bribery can even be performed in-band with the system itself enforcing the bribe. A bribing attacker would not have the same concerns about the long-term health of the system, as their majority control is inherently short-lived. New modeling assumptions are needed to explain why such attacks have not been observed in practice. The need for all miners to avoid short-term profits by accepting bribes further suggests a potential tragedy of the commons which has not yet been analyzed.",
"title": ""
},
{
"docid": "1c4e71d00521219717607cbef90b5bec",
"text": "The design of security for cyber-physical systems must take into account several characteristics common to such systems. Among these are feedback between the cyber and physical environment, distributed management and control, uncertainty, real-time requirements, and geographic distribution. This paper discusses these characteristics and suggests a design approach that better integrates security into the core design of the system. A research roadmap is presented that highlights some of the missing pieces needed to enable such an approach. 1. What is a Cyber-Physical-System? The term cyber-physical system has been applied to many problems, ranging from robotics, through SCADA, and distributed control systems. Not all cyber-physical systems involve critical infrastructure, but there are common elements that change the nature of the solutions that must be considered when securing cyber-physical systems. First, the extremely critical nature of activities performed by some cyber-physical systems means that we need security that works, and that by itself means we need something different. All kidding aside, there are fundamental system differences in cyber-physical systems that will force us to look at security in ways more closely tied to the physical application. It is my position that by focusing on these differences we can see where new (or rediscovered) approaches are needed, and that by building systems that support the inclusion of security as part of the application architecture, we can improve the security of both cyber-physical systems, where such an approach is most clearly warranted, as well as improve the security of cyber-only systems, where such an approach is more easily ignored. In this position paper I explain the characteristics of cyber-physical systems that must drive new research in security. 
I discuss the security problem areas that need attention because of these characteristics and I describe a design methodology for security that provides for better integration of security design with application design. Finally, I suggest some of the components of future systems that can help us include security as a focusing issue in the architectural design of critical applications.",
"title": ""
},
{
"docid": "c3f4f7d75c1b5cfd713ad7a10c887a3a",
"text": "This paper presents an open-source diarization toolkit which is mostly dedicated to speaker and developed by the LIUM. This toolkit includes hierarchical agglomerative clustering methods using well-known measures such as BIC and CLR. Two applications for which the toolkit has been used are presented: one is for broadcast news using the ESTER 2 data and the other is for telephone conversations using the MEDIA corpus.",
"title": ""
},
{
"docid": "d161ab557edb4268a0ebc606bb9dbcb6",
"text": "Recommender systems play an important role in reducing the negative impact of information overload on those websites where users have the possibility of voting for their preferences on Ítems. The most normal technique for dealing with the recommendation mechanism is to use collaborative filtering, in which it is essential to discover the most similar users to whom you desire to make recommendations. The hypothesis of this paper is that the results obtained by applying traditional similarities measures can be improved by taking contextual information, drawn from the entire body of users, and using it to calcúlate the singularity which exists, for each item, in the votes cast by each pair of users that you wish to compare. As such, the greater the measure of singularity result between the votes cast by two given users, the greater the impact this will have on the similarity. The results, tested on the Movielens, Netflix and FilmAffinity databases, corrobórate the excellent behaviour of the singularity measure proposed.",
"title": ""
},
{
"docid": "a93bf6b8408bf0adba4985e7bd571d29",
"text": "The modern data compression is mainly based on two approaches to entropy coding: Huffman (HC) and arithmetic/range coding (AC). The former is much faster, but approximates probabilities with powers of 2, usually leading to relatively low compression rates. The latter uses nearly exact probabilities easily approaching theoretical compression rate limit (Shannon entropy), but at cost of much larger computational cost. Asymmetric numeral systems (ANS) is a new approach to accurate entropy coding, which allows to end this tradeoff between speed and rate: the recent implementation [1] provides about 50% faster decoding than HC for 256 size alphabet, with compression rate similar to provided by AC. This advantage is due to being simpler than AC: using single natural number as the state, instead of two to represent a range. Beside simplifying renormalization, it allows to put the entire behavior for given probability distribution into a relatively small table: defining entropy coding automaton. The memory cost of such table for 256 size alphabet is a few kilobytes. There is a large freedom while choosing a specific table using pseudorandom number generator initialized with cryptographic key for this purpose allows to simultaneously encrypt the data. This article also introduces and discusses many other variants of this new entropy coding approach, which can provide direct alternatives for standard AC, for large alphabet range coding, or for approximated quasi arithmetic coding.",
"title": ""
}
] | scidocsrr |
66932f4285195f1694e5835e5f716cf9 | BUP: A Bottom-Up parser embedded in Prolog | [
{
"docid": "0b18f7966a57e266487023d3a2f3549d",
"text": "A clear andpowerfulformalism for describing languages, both natural and artificial, follows f iom a method for expressing grammars in logic due to Colmerauer and Kowalski. This formalism, which is a natural extension o f context-free grammars, we call \"definite clause grammars\" (DCGs). A DCG provides not only a description of a language, but also an effective means for analysing strings o f that language, since the DCG, as it stands, is an executable program o f the programming language Prolog. Using a standard Prolog compiler, the DCG can be compiled into efficient code, making it feasible to implement practical language analysers directly as DCGs. This paper compares DCGs with the successful and widely used augmented transition network (ATN) formalism, and indicates how ATNs can be translated into DCGs. It is argued that DCGs can be at least as efficient as ATNs, whilst the DCG formalism is clearer, more concise and in practice more powerful",
"title": ""
}
] | [
{
"docid": "c1a4da111d6e3496845b4726dfabcb5b",
"text": "A growing number of information technology systems and services are being developed to change users’ attitudes or behavior or both. Despite the fact that attitudinal theories from social psychology have been quite extensively applied to the study of user intentions and behavior, these theories have been developed for predicting user acceptance of the information technology rather than for providing systematic analysis and design methods for developing persuasive software solutions. This article is conceptual and theory-creating by its nature, suggesting a framework for Persuasive Systems Design (PSD). It discusses the process of designing and evaluating persuasive systems and describes what kind of content and software functionality may be found in the final product. It also highlights seven underlying postulates behind persuasive systems and ways to analyze the persuasion context (the intent, the event, and the strategy). The article further lists 28 design principles for persuasive system content and functionality, describing example software requirements and implementations. Some of the design principles are novel. Moreover, a new categorization of these principles is proposed, consisting of the primary task, dialogue, system credibility, and social support categories.",
"title": ""
},
{
"docid": "e42dece8d8870739249d19a5d84c6a79",
"text": "In this paper, we propose a method for extracting travelrelated event information, such as an event name or a schedule from automatically identified newspaper articles, in which particular events are mentioned. We analyze news corpora using our method, extracting venue names from them. We then find web pages that refer to event schedules for these venues. To confirm the effectiveness of our method, we conducted several experiments. From the experimental results, we obtained a precision of 91.5% and a recall of 75.9% for the automatic extraction of event information from news articles, and a precision of 90.8% and a recall of 52.8% for the automatic identification of eventrelated web pages.",
"title": ""
},
{
"docid": "56c0ce72f6672c6d0f6e37ddd019dd2a",
"text": "We focus on the task of multi-hop reading comprehension where a system is required to reason over a chain of multiple facts, distributed across multiple passages, to answer a question. Inspired by graph-based reasoning, we present a path-based reasoning approach for textual reading comprehension. It operates by generating potential paths across multiple passages, extracting implicit relations along this path, and composing them to encode each path. The proposed model achieves a 2.3% gain on the WikiHop Dev set as compared to previous state-of-the-art and, as a side-effect, is also able to explain its reasoning through explicit paths of sentences.",
"title": ""
},
{
"docid": "d580f60d48331b37c55f1e9634b48826",
"text": "The fifth generation (5G) wireless network technology is to be standardized by 2020, where main goals are to improve capacity, reliability, and energy efficiency, while reducing latency and massively increasing connection density. An integral part of 5G is the capability to transmit touch perception type real-time communication empowered by applicable robotics and haptics equipment at the network edge. In this regard, we need drastic changes in network architecture including core and radio access network (RAN) for achieving end-to-end latency on the order of 1 ms. In this paper, we present a detailed survey on the emerging technologies to achieve low latency communications considering three different solution domains: 1) RAN; 2) core network; and 3) caching. We also present a general overview of major 5G cellular network elements such as software defined network, network function virtualization, caching, and mobile edge computing capable of meeting latency and other 5G requirements.",
"title": ""
},
{
"docid": "029cca0b7e62f9b52e3d35422c11cea4",
"text": "This letter presents the design of a novel wideband horizontally polarized omnidirectional printed loop antenna. The proposed antenna consists of a loop with periodical capacitive loading and a parallel stripline as an impedance transformer. Periodical capacitive loading is realized by adding interlaced coupling lines at the end of each section. Similarly to mu-zero resonance (MZR) antennas, the periodical capacitive loaded loop antenna proposed in this letter allows current along the loop to remain in phase and uniform. Therefore, it can achieve a horizontally polarized omnidirectional pattern in the far field, like a magnetic dipole antenna, even though the perimeter of the loop is comparable to the operating wavelength. Furthermore, the periodical capacitive loading is also useful to achieve a wide impedance bandwidth. A prototype of the proposed periodical capacitive loaded loop antenna is fabricated and measured. It can provide a wide impedance bandwidth of about 800 MHz (2170-2970 MHz, 31.2%) and a horizontally polarized omnidirectional pattern in the azimuth plane.",
"title": ""
},
{
"docid": "5b579b0b46f94ecb3842dd5ca3130fd4",
"text": "To assure high quality of database applications, testing database applications remains the most popularly used approach. In testing database applications, tests consist of both program inputs and database states. Assessing the adequacy of tests allows targeted generation of new tests for improving their adequacy (e.g., fault-detection capabilities). Comparing to code coverage criteria, mutation testing has been a stronger criterion for assessing the adequacy of tests. Mutation testing would produce a set of mutants (each being the software under test systematically seeded with a small fault) and then measure how high percentage of these mutants are killed (i.e., detected) by the tests under assessment. However, existing test-generation approaches for database applications do not provide sufficient support for killing mutants in database applications (in either program code or its embedded or resulted SQL queries). To address such issues, in this paper, we propose an approach called MutaGen that conducts test generation for mutation testing on database applications. In our approach, we first apply an existing approach that correlates various constraints within a database application through constructing synthesized database interactions and transforming the constraints from SQL queries into normal program code. Based on the transformed code, we generate program-code mutants and SQL-query mutants, and then derive and incorporate query-mutant-killing constraints into the transformed code. Then, we generate tests to satisfy query-mutant-killing constraints. Evaluation results show that MutaGen can effectively kill mutants in database applications, and MutaGen outperforms existing test-generation approaches for database applications in terms of strong mutant killing.",
"title": ""
},
{
"docid": "d69b8c991e66ff274af63198dba2ee01",
"text": "Nowadays, there are two significant tendencies, how to process the enormous amount of data, big data, and how to deal with the green issues related to sustainability and environmental concerns. An interesting question is whether there are inherent correlations between the two tendencies in general. To answer this question, this paper firstly makes a comprehensive literature survey on how to green big data systems in terms of the whole life cycle of big data processing, and then this paper studies the relevance between big data and green metrics and proposes two new metrics, effective energy efficiency and effective resource efficiency in order to bring new views and potentials of green metrics for the future times of big data.",
"title": ""
},
{
"docid": "28facedbdc268f253ab8ace98f0902b2",
"text": "OBJECTIVE\nA wide spectrum of space-occupying soft-tissue lesions may be discovered on MRI studies, either as incidental findings or as palpable or symptomatic masses. Characterization of a lesion as benign or indeterminate is the most important step toward optimal treatment and avoidance of unnecessary biopsy or surgical intervention.\n\n\nCONCLUSION\nThe systemic MRI interpretation approach presented in this article enables the identification of cases in which sarcoma can be excluded.",
"title": ""
},
{
"docid": "a3e88345a2bcd07bf756ca02968082f6",
"text": "Bi-directional LSTMs have emerged as a standard method for obtaining per-token vector representations serving as input to various token labeling tasks (whether followed by Viterbi prediction or independent classification). This paper proposes an alternative to Bi-LSTMs for this purpose: iterated dilated convolutional neural networks (ID-CNNs), which have better capacity than traditional CNNs for large context and structured prediction. We describe a distinct combination of network structure, parameter sharing and training procedures that is not only more accurate than Bi-LSTM-CRFs, but also 8x faster at test time on long sequences. Moreover, ID-CNNs with independent classification enable a dramatic 14x testtime speedup, while still attaining accuracy comparable to the Bi-LSTM-CRF. We further demonstrate the ability of IDCNNs to combine evidence over long sequences by demonstrating their improved accuracy on whole-document (rather than per-sentence) inference. Unlike LSTMs whose sequential processing on sentences of length N requires O(N) time even in the face of parallelism, IDCNNs permit fixed-depth convolutions to run in parallel across entire documents. Today when many companies run basic NLP on the entire web and large-volume traffic, faster methods are paramount to saving time and energy costs.",
"title": ""
},
{
"docid": "dea4d96b7af9f3a2c6acb7ae38947954",
"text": "The state-of-the-art object detection networks for natural images have recently demonstrated impressive performances. However the complexity of ship detection in high resolution satellite images exposes the limited capacity of these networks for strip-like rotated assembled object detection which are common in remote sensing images. In this paper, we embrace this observation and introduce the rotated region based CNN (RR-CNN), which can learn and accurately extract features of rotated regions and locate rotated objects precisely. RR-CNN has three important new components including a rotated region of interest (RRoI) pooling layer, a rotated bounding box regression model and a multi-task method for non-maximal suppression (NMS) between different classes. Experimental results on the public ship dataset HRSC2016 confirm that RR-CNN outperforms baselines by a large margin.",
"title": ""
},
{
"docid": "024b739dc047e17310fe181591fcd335",
"text": "In this paper, a Ka-Band patch sub-array structure for millimeter-wave phased array applications is demonstrated. The conventional corner truncated patch is modified to improve the impedance and CP bandwidth alignment. A new sub-array feed approach is introduced to reduce complexity of the feed line between elements and increase the radiation efficiency. A sub-array prototype is built and tested. Good agreement with the theoretical results is obtained.",
"title": ""
},
{
"docid": "43398874a34c7346f41ca7a18261e878",
"text": "This article investigates transitions at the level of societal functions (e.g., transport, communication, housing). Societal functions are fulfilled by sociotechnical systems, which consist of a cluster of aligned elements, e.g., artifacts, knowledge, markets, regulation, cultural meaning, infrastructure, maintenance networks and supply networks. Transitions are conceptualised as system innovations, i.e., a change from one sociotechnical system to another. The article describes a co-evolutionary multi-level perspective to understand how system innovations come about through the interplay between technology and society. The article makes a new step as it further refines the multi-level perspective by distinguishing characteristic patterns: (a) two transition routes, (b) fit–stretch pattern, and (c) patterns in breakthrough. D 2005 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "9b8072d38753fc64199693a44297a135",
"text": "We propose a segmentation algorithm for the purposes of large-scale flower species recognition. Our approach is based on identifying potential object regions at the time of detection. We then apply a Laplacian-based segmentation, which is guided by these initially detected regions. More specifically, we show that 1) recognizing parts of the potential object helps the segmentation and makes it more robust to variabilities in both the background and the object appearances, 2) segmenting the object of interest at test time is beneficial for the subsequent recognition. Here we consider a large-scale dataset containing 578 flower species and 250,000 images. This dataset is developed by our team for the purposes of providing a flower recognition application for general use and is the largest in its scale and scope. We tested the proposed segmentation algorithm on the well-known 102 Oxford flowers benchmark [11] and on the new challenging large-scale 578 flower dataset, that we have collected. We observed about 4% improvements in the recognition performance on both datasets compared to the baseline. The algorithm also improves all other known results on the Oxford 102 flower benchmark dataset. Furthermore, our method is both simpler and faster than other related approaches, e.g. [3, 14], and can be potentially applicable to other subcategory recognition datasets.",
"title": ""
},
{
"docid": "43bb109c93d7f259b11c42031cd93ad6",
"text": "A compact rectangular slotted monopole antenna for ultra wideband (UWB) application is presented. The designed antenna has a simple structure and compact size of 25 × 26 mm2. This antenna consist of radiating patch with two steps and one slot introduced on it for bandwidth enhancement and a ground plane. Antenna is feed with 50Ω microstrip line. IE3D method of moments based simulation software is used for design and FR4 substrate of dielectric constant value 4.4 with loss tangent 0.02.",
"title": ""
},
{
"docid": "c81e728d9d4c2f636f067f89cc14862c",
"text": "2",
"title": ""
},
{
"docid": "77273b82e31c0b0c361525f83814dd40",
"text": "For a multiuser data communications system operating over a mutually cross-coupled linear channel with additive noise sources, we determine the following: (1) a linear cross-coupled receiver processor (filter) that yields the least-mean-squared error between the desired outputs and the actual outputs, and (2) a cross-coupled transmitting filter that optimally distributes the total available power among the different users, as well as the total available frequency spectrum. The structure of the optimizing filters is similar to the known 2 × 2 case encountered in problems associated with digital transmission over dually polarized radio channels.",
"title": ""
},
{
"docid": "ac41c57bcb533ab5dabcc733dd69a705",
"text": "In this paper we propose two ways to deal with the imbalanced data classification problem using random forest. One is based on cost sensitive learning, and the other is based on a sampling technique. Performance metrics such as precision and recall, false positive rate and false negative rate, F-measure and weighted accuracy are computed. Both methods are shown to improve the prediction accuracy of the minority class, and have favorable performance compared to the existing algorithms.",
"title": ""
},
{
"docid": "c784bfbd522bb4c9908c3f90a31199fe",
"text": "Vedolizumab (VDZ) inhibits α4β7 integrins and is used to target intestinal immune responses in patients with inflammatory bowel disease, which is considered to be relatively safe. Here we report on a fatal complication following VDZ administration. A 64-year-old female patient with ulcerative colitis (UC) refractory to tumor necrosis factor inhibitors was treated with VDZ. One week after the second VDZ infusion, she was admitted to hospital with severe diarrhea and systemic inflammatory response syndrome (SIRS). Blood stream infections were ruled out, and endoscopy revealed extensive ulcerations of the small intestine covered with pseudomembranes, reminiscent of invasive candidiasis or mesenteric ischemia. Histology confirmed subtotal destruction of small intestinal epithelia and colonization with Candida. Moreover, small mesenteric vessels were occluded by hyaline thrombi, likely as a result of SIRS, while perfusion of large mesenteric vessels was not compromised. Beta-D-glucan concentrations were highly elevated, and antimycotic therapy was initiated for suspected invasive candidiasis but did not result in any clinical benefit. Given the non-responsiveness to anti-infective therapies, an autoimmune phenomenon was suspected and immunosuppressive therapy was escalated. However, the patient eventually died from multi-organ failure. This case should raise the awareness for rare but severe complications related to immunosuppressive therapy, particularly in high risk patients.",
"title": ""
},
{
"docid": "85d8c2190b2b999df30ee92244236805",
"text": "Single document summarization is the task of producing a shorter version of a document while preserving its principal information content. In this paper we conceptualize extractive summarization as a sentence ranking task and propose a novel training algorithm which globally optimizes the ROUGE evaluation metric through a reinforcement learning objective. We use our algorithm to train a neural summarization model on the CNN and DailyMail datasets and demonstrate experimentally that it outperforms state-of-the-art extractive and abstractive systems when evaluated automatically and by humans.",
"title": ""
},
{
"docid": "937d93600ad3d19afda31ada11ea1460",
"text": "Several new services incentivize clients to compete in solving large computation tasks in exchange for financial rewards. This model of competitive distributed computation enables every user connected to the Internet to participate in a game in which he splits his computational power among a set of competing pools -- the game is called a computational power splitting game. We formally model this game and show its utility in analyzing the security of pool protocols that dictate how financial rewards are shared among the members of a pool. As a case study, we analyze the Bitcoin crypto currency which attracts computing power roughly equivalent to billions of desktop machines, over 70% of which is organized into public pools. We show that existing pool reward sharing protocols are insecure in our game-theoretic analysis under an attack strategy called the \"block withholding attack\". This attack is a topic of debate, initially thought to be ill-incentivized in today's pool protocols: i.e., causing a net loss to the attacker, and later argued to be always profitable. Our analysis shows that the attack is always well-incentivized in the long-run, but may not be so for a short duration. This implies that existing pool protocols are insecure, and if the attack is conducted systematically, Bitcoin pools could lose millions of dollars worth in months. The equilibrium state is a mixed strategy -- that is -- in equilibrium all clients are incentivized to probabilistically attack to maximize their payoffs rather than participate honestly. As a result, the Bitcoin network is incentivized to waste a part of its resources simply to compete.",
"title": ""
}
] | scidocsrr |
8ea6c2e2d82663cb0a47e7863d07b2ae | Projective Feature Learning for 3D Shapes with Multi-View Depth Images | [
{
"docid": "0964d1cc6584f2e20496c2f02952ba46",
"text": "This paper proposes to learn a set of high-level feature representations through deep learning, referred to as Deep hidden IDentity features (DeepID), for face verification. We argue that DeepID can be effectively learned through challenging multi-class face identification tasks, whilst they can be generalized to other tasks (such as verification) and new identities unseen in the training set. Moreover, the generalization capability of DeepID increases as more face classes are to be predicted at training. DeepID features are taken from the last hidden layer neuron activations of deep convolutional networks (ConvNets). When learned as classifiers to recognize about 10,000 face identities in the training set and configured to keep reducing the neuron numbers along the feature extraction hierarchy, these deep ConvNets gradually form compact identity-related features in the top layers with only a small number of hidden neurons. The proposed features are extracted from various face regions to form complementary and over-complete representations. Any state-of-the-art classifiers can be learned based on these high-level representations for face verification. 97.45% verification accuracy on LFW is achieved with only weakly aligned faces.",
"title": ""
}
] | [
{
"docid": "614174e5e1dffe9824d7ef8fae6fb499",
"text": "This paper starts with presenting a fundamental principle based on which the celebrated orthogonal frequency division multiplexing (OFDM) waveform is constructed. It then extends the same principle to construct the newly introduced generalized frequency division multiplexing (GFDM) signals. This novel derivation sheds light on some interesting properties of GFDM. In particular, our derivation seamlessly leads to an implementation of GFDM transmitter which has significantly lower complexity than what has been reported so far. Our derivation also facilitates a trivial understanding of how GFDM (similar to OFDM) can be applied in MIMO channels.",
"title": ""
},
{
"docid": "0f5caf6bb5e0fdb99fba592fd34f1a8b",
"text": "Lawrence Kohlberg (1958) agreed with Piaget's (1932) theory of moral development in principle but wanted to develop his ideas further. He used Piaget's storytelling technique to tell people stories involving moral dilemmas. In each case, he presented a choice to be considered, for example, between the rights of some authority and the needs of some deserving individual who is being unfairly treated. One of the best known of Kohlberg's (1958) stories concerns a man called Heinz, who lived somewhere in Europe. Heinz's wife was dying from a particular type of cancer. Doctors said a new drug might save her. The drug had been discovered by a local chemist, and Heinz tried desperately to buy some, but the chemist was charging ten times the money it cost to make the drug, and this was much more than Heinz could afford. Heinz could only raise half the money, even after help from family and friends. He explained to the chemist that his wife was dying and asked if he could have the drug cheaper or pay the rest of the money later. The chemist refused, saying that he had discovered the drug and was going to make money from it. The husband was desperate to save his wife, so later that night he broke into the chemist's and stole the drug.",
"title": ""
},
{
"docid": "61980865ef90d0236af464caf2005024",
"text": "Driver fatigue has become one of the major causes of traffic accidents, and is a complicated physiological process. However, there is no effective method to detect driving fatigue. Electroencephalography (EEG) signals are complex, unstable, and non-linear; non-linear analysis methods, such as entropy, may be more appropriate. This study evaluates a combined entropy-based processing method of EEG data to detect driver fatigue. In this paper, 12 subjects were selected to take part in an experiment, undergoing driving training in a virtual environment under the instruction of the operator. Four types of entropies (spectrum entropy, approximate entropy, sample entropy and fuzzy entropy) were used to extract features for the purpose of driver fatigue detection. An electrode selection process and a support vector machine (SVM) classification algorithm were also proposed. The average recognition accuracy was 98.75%. Retrospective analysis of the EEG showed that the extracted features from electrodes T5, TP7, TP8 and FP1 may yield better performance. The SVM classification algorithm using a radial basis function as the kernel function obtained better results. A combined entropy-based method demonstrates good classification performance for studying driver fatigue detection.",
"title": ""
},
{
"docid": "c4fef61aa26aa1d3ef693845b2ff3ee0",
"text": "According to AV vendors, malicious software has been growing exponentially in recent years. One of the main reasons for these high volumes is that, in order to evade detection, malware authors started using polymorphic and metamorphic techniques. As a result, traditional signature-based approaches to detect malware are insufficient against new malware, and the categorization of malware samples has become essential to understand the basis of malware behavior and to fight back against cybercriminals. During the last decade, solutions that fight against malicious software have begun using machine learning approaches. Unfortunately, there are few open-source datasets available for the academic community. One of the biggest datasets available was released last year in a competition hosted on Kaggle with data provided by Microsoft for the Big Data Innovators Gathering (BIG 2015). This thesis presents two novel and scalable approaches using Convolutional Neural Networks (CNNs) to assign malware to its corresponding family. On one hand, the first approach makes use of CNNs to learn a feature hierarchy to discriminate among samples of malware represented as gray-scale images. On the other hand, the second approach uses the CNN architecture introduced by Yoon Kim [12] to classify malware samples according to their x86 instructions. The proposed methods achieved an improvement of 93.86% and 98.56% with respect to the equal probability benchmark.",
"title": ""
},
{
"docid": "dfc9099b1b31d5f214b341c65fbb8e92",
"text": "In this communication, a dual-feed dual-polarized microstrip antenna with low cross polarization and high isolation is experimentally studied. Two different feed mechanisms are designed to excite a dual orthogonal linearly polarized mode from a single radiating patch. One of the two modes is excited by an aperture-coupled feed, which comprises a compact resonant annular-ring slot and a T-shaped microstrip feedline, while the other is excited by a pair of meandering strips with a 180$^{\\circ}$ phase difference. Both linearly polarized modes are designed to operate in the 2400-MHz frequency band, and from the measured results, it is found that the isolation between the two feeding ports is better than 40 dB across a 10-dB input-impedance bandwidth of 14%. In addition, low cross polarization is observed in the radiation patterns of the two modes, especially at the broadside direction. Simulation analyses are also carried out to support the measured results.",
"title": ""
},
{
"docid": "5e43dd30c8cf58fe1b79686b33a015b9",
"text": "We review Boltzmann machines extended for time-series. These models often have recurrent structure, and back propagation through time (BPTT) is used to learn their parameters. The per-step computational complexity of BPTT in online learning, however, grows linearly with respect to the length of preceding time-series (i.e., the learning rule is not local in time), which limits the applicability of BPTT in online learning. We then review dynamic Boltzmann machines (DyBMs), whose learning rule is local in time. DyBM's learning rule relates to spike-timing dependent plasticity (STDP), which has been postulated and experimentally confirmed for biological neural networks.",
"title": ""
},
{
"docid": "040f73fc915d3799193abf5e3a48e8f4",
"text": "BACKGROUND\nDiphallia is a very rare anomaly, seen once in every 5.5 million live births. True diphallia with normal penile structures is extremely rare. Surgical management for patients with complete penile duplication without any penile or urethral pathology is challenging.\n\n\nCASE REPORT\nA 4-year-old boy presented with diphallia. Initial physical examination revealed complete penile duplication, urine flow from both penises, meconium flow from the right urethra, and anal atresia. Further evaluations showed double colon and rectum, double bladder, and a large recto-vesical fistula. Two cavernous bodies and one spongious body were detected in each penile body. The surgical treatment plan consisted of right total penectomy and end-to-side urethra-urethrostomy. No postoperative complications and no voiding dysfunction were detected during the 18 months of follow-up.\n\n\nCONCLUSION\nPenile duplication is a rare anomaly, which presents differently in each patient. Because of this, the treatment should be individualized, and end-to-side urethra-urethrostomy may be an alternative to removing the posterior urethra. This approach eliminates the risk of damaging the prostate gland and sphincter.",
"title": ""
},
{
"docid": "48c4b2a708f2607a8d66b642e917433d",
"text": "In this paper we present an approach to control a real car with brain signals. To achieve this, we use a brain computer interface (BCI) which is connected to our autonomous car. The car is equipped with a variety of sensors and can be controlled by a computer. We implemented two scenarios to test the usability of the BCI for controlling our car. In the first scenario our car is completely brain controlled, using four different brain patterns for steering and throttle/brake. We will describe the control interface which is necessary for a smooth, brain controlled driving. In a second scenario, decisions for path selection at intersections and forkings are made using the BCI. Between these points, the remaining autonomous functions (e.g. path following and obstacle avoidance) are still active. We evaluated our approach in a variety of experiments on a closed airfield and will present results on accuracy, reaction times and usability.",
"title": ""
},
{
"docid": "b4cadd9179150203638ff9b045a4145d",
"text": "Interpenetrating network (IPN) hydrogel membranes of sodium alginate (SA) and poly(vinyl alcohol) (PVA) were prepared by solvent casting method for transdermal delivery of an anti-hypertensive drug, prazosin hydrochloride. The prepared membranes were thin, flexible and smooth. The X-ray diffraction studies indicated the amorphous dispersion of drug in the membranes. Differential scanning calorimetric analysis confirmed the IPN formation and suggests that the membrane stiffness increases with increased concentration of glutaraldehyde (GA) in the membranes. All the membranes were permeable to water vapors depending upon the extent of cross-linking. The in vitro drug release study was performed through excised rat abdominal skin; drug release depends on the concentrations of GA in membranes. The IPN membranes extended drug release up to 24 h, while SA and PVA membranes discharged the drug quickly. The primary skin irritation and skin histopathology study indicated that the prepared IPN membranes were less irritant and safe for skin application.",
"title": ""
},
{
"docid": "b123916f2795ab6810a773ac69bdf00b",
"text": "The acceptance of open data practices by individuals and organizations led to an enormous explosion in data production on the Internet. The access to a large number of these data is carried out through Web services, which provide a standard way to interact with data. This class of services is known as data services. In this context, users' queries often require the composition of multiple data services to be answered. On the other hand, the data returned by a data service is not always certain due to various reasons, e.g., the service accesses different data sources, privacy constraints, etc. In this paper, we study the basic activities of data services that are affected by the uncertainty of data, more specifically, modeling, invocation and composition. We propose a possibilistic approach that treats the uncertainty in all these activities.",
"title": ""
},
{
"docid": "8fdfebc612ff46103281fcdd7c9d28c8",
"text": "We develop a shortest augmenting path algorithm for the linear assignment problem. It contains new initialization routines and a special implementation of Dijkstra's shortest path method. For both dense and sparse problems computational experiments show this algorithm to be uniformly faster than the best algorithms from the literature. A Pascal implementation is presented.",
"title": ""
},
{
"docid": "eb9b4bea2d1a6230f8fb9e742bb7bc23",
"text": "Increasing the size of a neural network typically improves accuracy but also increases the memory and compute requirements for training the model. We introduce methodology for training deep neural networks using half-precision floating point numbers, without losing model accuracy or having to modify hyperparameters. This nearly halves memory requirements and, on recent GPUs, speeds up arithmetic. Weights, activations, and gradients are stored in IEEE half-precision format. Since this format has a narrower range than single-precision we propose three techniques for preventing the loss of critical information. Firstly, we recommend maintaining a single-precision copy of weights that accumulates the gradients after each optimizer step (this copy is rounded to half-precision for the forward- and back-propagation). Secondly, we propose loss-scaling to preserve gradient values with small magnitudes. Thirdly, we use half-precision arithmetic that accumulates into single-precision outputs, which are converted to half-precision before storing to memory. We demonstrate that the proposed methodology works across a wide variety of tasks and modern large scale (exceeding 100 million parameters) model architectures, trained on large datasets.",
"title": ""
},
{
"docid": "9c2e89bad3ca7b7416042f95bf4f4396",
"text": "We present a simple and computationally efficient algorithm for approximating Catmull-Clark subdivision surfaces using a minimal set of bicubic patches. For each quadrilateral face of the control mesh, we construct a geometry patch and a pair of tangent patches. The geometry patches approximate the shape and silhouette of the Catmull-Clark surface and are smooth everywhere except along patch edges containing an extraordinary vertex where the patches are C0. To make the patch surface appear smooth, we provide a pair of tangent patches that approximate the tangent fields of the Catmull-Clark surface. These tangent patches are used to construct a continuous normal field (through their cross-product) for shading and displacement mapping. Using this bifurcated representation, we are able to define an accurate proxy for Catmull-Clark surfaces that is efficient to evaluate on next-generation GPU architectures that expose a programmable tessellation unit.",
"title": ""
},
{
"docid": "3fa5de33e7ccd6c440a4a65a5681f8b8",
"text": "Argumentation is the process by which arguments are constructed and handled. Argumentation constitutes a major component of human intelligence. The ability to engage in argumentation is essential for humans to understand new problems, to perform scientific reasoning, to express, to clarify and to defend their opinions in their daily lives. Argumentation mining aims to detect the arguments presented in a text document, the relations between them and the internal structure of each individual argument. In this paper we analyse the main research questions when dealing with argumentation mining and the different methods we have studied and developed in order to successfully confront the challenges of argumentation mining in legal texts.",
"title": ""
},
{
"docid": "5793cf03753f498a649c417e410c325e",
"text": "The paper investigates parameterized approximate message-passing schemes that are based on bounded inference and are inspired by Pearl's belief propagation algorithm (BP). We start with the bounded inference mini-clustering algorithm and then move to the iterative scheme called Iterative Join-Graph Propagation (IJGP), that combines both iteration and bounded inference. Algorithm IJGP belongs to the class of Generalized Belief Propagation algorithms, a framework that allowed connections with approximate algorithms from statistical physics and is shown empirically to surpass the performance of mini-clustering and belief propagation, as well as a number of other state-of-the-art algorithms on several classes of networks. We also provide insight into the accuracy of iterative BP and IJGP by relating these algorithms to well known classes of constraint propagation schemes.",
"title": ""
},
{
"docid": "b1960cfe66e08bac1d4ff790ecfb0190",
"text": "Cloud federations are a new collaboration paradigm where organizations share data across their private cloud infrastructures. However, the adoption of cloud federations is hindered by federated organizations' concerns on potential risks of data leakage and data misuse. For cloud federations to be viable, federated organizations' privacy concerns should be alleviated by providing mechanisms that allow organizations to control which users from other federated organizations can access which data. We propose a novel identity and access management system for cloud federations. The system allows federated organizations to enforce attribute-based access control policies on their data in a privacy-preserving fashion. Users are granted access to federated data when their identity attributes match the policies, but without revealing their attributes to the federated organization owning data. The system also guarantees the integrity of the policy evaluation process by using block chain technology and Intel SGX trusted hardware. It uses block chain to ensure that users identity attributes and access control policies cannot be modified by a malicious user, while Intel SGX protects the integrity and confidentiality of the policy enforcement process. We present the access control protocol, the system architecture and discuss future extensions.",
"title": ""
},
{
"docid": "b7e78ca489cdfb8efad03961247e12f2",
"text": "ASR, short for Automatic Speech Recognition, is the process of converting spoken speech into text that can be manipulated by a computer. Although ASR has several applications, it is still erroneous and imprecise, especially if used in a harsh surrounding wherein the input speech is of low quality. This paper proposes a post-editing ASR error correction method and algorithm based on Bing's online spelling suggestion. In this approach, the ASR recognized output text is spell-checked using Bing's spelling suggestion technology to detect and correct misrecognized words. More specifically, the proposed algorithm breaks down the ASR output text into several word-tokens that are submitted as search queries to the Bing search engine. A returned spelling suggestion implies that a query is misspelled, and thus it is replaced by the suggested correction; otherwise, no correction is performed and the algorithm continues with the next token until all tokens get validated. Experiments carried out on various speeches in different languages indicated a successful decrease in the number of ASR errors and an improvement in the overall error correction rate. Future research can improve upon the proposed algorithm so much so that it can be parallelized to take advantage of multiprocessor computers. Keywords: Speech Recognition; Error Correction; Bing Spelling",
"title": ""
},
{
"docid": "7431ee071307189e58b5c7a9ce3a2189",
"text": "Among tangible threats and vulnerabilities facing current biometric systems are spoofing attacks. A spoofing attack occurs when a person tries to masquerade as someone else by falsifying data and thereby gaining illegitimate access and advantages. Recently, an increasing attention has been given to this research problem. This can be attested by the growing number of articles and the various competitions that appear in major biometric forums. We have recently participated in a large consortium (TABULARASA) dealing with the vulnerabilities of existing biometric systems to spoofing attacks with the aim of assessing the impact of spoofing attacks, proposing new countermeasures, setting standards/protocols, and recording databases for the analysis of spoofing attacks to a wide range of biometrics including face, voice, gait, fingerprints, retina, iris, vein, electro-physiological signals (EEG and ECG). The goal of this position paper is to share the lessons learned about spoofing and anti-spoofing in face biometrics, and to highlight open issues and future directions.",
"title": ""
},
{
"docid": "8a22660b73d11ee9c634579527049d43",
"text": "Current unsupervised image-to-image translation techniques struggle to focus their attention on individual objects without altering the background or the way multiple objects interact within a scene. Motivated by the important role of attention in human perception, we tackle this limitation by introducing unsupervised attention mechanisms that are jointly adversarially trained with the generators and discriminators. We demonstrate qualitatively and quantitatively that our approach attends to relevant regions in the image without requiring supervision, which creates more realistic mappings when compared to those of recent approaches. (Figure 1, comparing input images and the outputs of our method against CycleGAN [1], RA [2], DiscoGAN [3], UNIT [4], and DualGAN [5]: By explicitly modeling attention, our algorithm is able to better alter the object of interest in unsupervised image-to-image translation tasks, without changing the background at the same time.)",
"title": ""
},
{
"docid": "ec593c78e3b2bc8f9b8a657093daac49",
"text": "Analyses of 3-D seismic data in predominantly basin-floor settings offshore Indonesia, Nigeria, and the Gulf of Mexico, reveal the extensive presence of gravity-flow depositional elements. Five key elements were observed: (1) turbidity-flow leveed channels, (2) channeloverbank sediment waves and levees, (3) frontal splays or distributarychannel complexes, (4) crevasse-splay complexes, and (5) debris-flow channels, lobes, and sheets. Each depositional element displays a unique morphology and seismic expression. The reservoir architecture of each of these depositional elements is a function of the interaction between sedimentary process, sea-floor morphology, and sediment grain-size distribution. (1) Turbidity-flow leveed-channel widths range from greater than 3 km to less than 200 m. Sinuosity ranges from moderate to high, and channel meanders in most instances migrate down-system. The highamplitude reflection character that commonly characterizes these features suggests the presence of sand within the channels. In some instances, high-sinuosity channels are associated with (2) channel-overbank sediment-wave development in proximal overbank levee settings, especially in association with outer channel bends. These sediment waves reach heights of 20 m and spacings of 2–3 km. The crests of these sediment waves are oriented normal to the inferred transport direction of turbidity flows, and the waves have migrated in an upflow direction. Channel-margin levee thickness decreases systematically down-system. Where levee thickness can no longer be resolved seismically, high-sinuosity channels feed (3) frontal splays or low-sinuosity, distributary-channel complexes. Low-sinuosity distributary-channel complexes are expressed as lobate sheets up to 5–10 km wide and tens of kilometers long that extend to the distal edges of these systems. They likely comprise sheet-like sandstone units consisting of shallow channelized and associated sand-rich overbank deposits. 
Also observed are (4) crevasse-splay deposits, which form as a result of the breaching of levees, commonly at channel bends. Similar to frontal splays, but smaller in size, these deposits commonly are characterized by sheet-like turbidites. (5) Debris-flow deposits comprise low-sinuosity channel fills, narrow elongate lobes, and sheets and are characterized seismically by contorted, chaotic, low-amplitude reflection patterns. These deposits commonly overlie striated or grooved pavements that can be up to tens of kilometers long, 15 m deep, and 25 m wide. Where flows are unconfined, striation patterns suggest that divergent flow is common. Debris-flow deposits extend as far basinward as turbidites, and individual debris-flow units can reach 80 m in thickness and commonly are marked by steep edges. Transparent to chaotic seismic reflection character suggest that these deposits are mud-rich. Stratigraphically, deep-water basin-floor successions commonly are characterized by mass-transport deposits at the base, overlain by turbidite frontal-splay deposits and subsequently by leveed-channel deposits. Capping this succession is another mass-transport unit ultimately overlain and draped by condensed-section deposits. This succession can be related to a cycle of relative sea-level change and associated events at the corresponding shelf edge. Commonly, deposition of a deep-water sequence is initiated with the onset of relative sea-level fall and ends with subsequent rapid relative sea-level rise. INTRODUCTION The understanding of deep-water depositional systems has advanced significantly in recent years. In the past, much understanding of deep-water sedimentation came from studies of outcrops, recent fan systems, and 2D reflection seismic data (Bouma 1962; Mutti and Ricci Lucchi 1972; Normark 1970, 1978; Walker 1978; Posamentier et al. 1991; Weimer 1991; Mutti and Normark 1991). 
However, in recent years this knowledge has advanced significantly because of (1) the interest by petroleum companies in deep-water exploration (e.g., Pirmez et al. 2000), and the advent of widely available high-quality 3D seismic data across a broad range of deepwater environments (e.g., Beaubouef and Friedman 2000; Posamentier et al. 2000), (2) the recent drilling and coring of both near-surface and reservoir-level deep-water systems (e.g., Twichell et al. 1992), and (3) the increasing utilization of deep-tow side-scan sonar and other imaging devices (e.g., Twichell et al. 1992; Kenyon and Millington 1995). It is arguably the first factor that has had the most significant impact on our understanding of deep-water systems. Three-dimensional seismic data afford an unparalleled view of the deep-water depositional environment, in some instances with vertical resolution down to 2–3 m. Seismic time slices, horizon-datum time slices, and interval attributes provide images of deepwater depositional systems in map view that can then be analyzed from a geomorphologic perspective. Geomorphologic analyses lead to the identification of depositional elements, which, when integrated with seismic profiles, can yield significant stratigraphic insight. Finally, calibration by correlation with borehole data, including logs, conventional core, and biostratigraphic samples, can provide the interpreter with an improved understanding of the geology of deep-water systems. The focus of this study is the deep-water component of a depositional sequence. We describe and discuss only those elements and stratigraphic successions that are present in deep-water depositional environments. The examples shown in this study largely are Pleistocene in age and most are encountered within the uppermost 400 m of substrate. 
These relatively shallowly buried features represent the full range of lowstand deep-water depositional sequences from early and late lowstand through transgressive and highstand deposits. Because they are not buried deeply, these stratigraphic units commonly are well-imaged on 3D seismic data. It is also noteworthy that although the examples shown here largely are of Pleistocene age, the age of these deposits should not play a significant role in subsequent discussion. What determines the architecture of deep-water deposits are the controlling parameters of flow discharge, sand-to-mud ratio, slope length, slope gradient, and rugosity of the seafloor, and not the age of the deposits. It does not matter whether these deposits are Pleistocene, Carboniferous, or Precambrian; the physical ‘‘first principles’’ of sediment gravity flow apply without distinguishing between when these deposits formed. However, from the perspective of studying deep-water turbidites it is advantageous that the Pleistocene was such an active time in the deepwater environment, resulting in deposition of numerous shallowly buried, well-imaged, deep-water systems. Depositional Elements Approach This study is based on the grouping of similar geomorphic features referred to as depositional elements (Fig. 1 gives a schematic depiction of the principal depositional elements in deep-water settings). Depositional elements are defined by Mutti and Normark (1991) as the basic mappable components of both modern and ancient turbidite systems and stages that can be recognized in marine, outcrop, and subsurface studies. These features are the building blocks of landscapes. The focus of this study is to use 3D seismic data to characterize the geomorphology and stratigraphy of deep-water depositional elements and infer process of deposition where appropriate. 
Depositional elements can vary from place to place and in the same place through time with changes of environmental parameters such as sand-to-mud ratio, flow discharge, and slope gradient. In some instances, systematic changes in these environmental parameters can be tied back to changes of relative sea level. The following depositional elements will be discussed: (1) turbidity-flow leveed channels, (2) overbank sediment waves and levees, (3) frontal splays or distributary-channel complexes, (4) crevasse-splay complexes, and (5) debris-flow channels, lobes, and sheets (Fig. 1). Each element is described and depositional processes are discussed. Finally, the exploration significance of each depositional element is reviewed. Examples are drawn from three deep-water slope and basin-floor settings: the Gulf of Mexico, offshore Nigeria, and offshore eastern Kalimantan, Indonesia. We utilized various visualization techniques, including 3D perspective views, horizon slices, and horizon and interval attribute displays, to bring out the detailed characteristics of depositional elements and their respective geologic settings. The deep-water depositional elements we present here are commonly characterized by peak seismic frequencies in excess of 100 Hz. The vertical resolution at these shallow depths of burial is in the range of 3–4 m, thus affording high-resolution images of depositional elements. We hope that our study, based on observations from the shallow subsurface, will provide general insights into the reservoir architecture of deep-water depositional elements, which can be extrapolated to more poorly resolved deep-water systems encountered at deeper exploration depths. DEPOSITIONAL ELEMENTS The following discussion focuses on five depositional elements in deep-water environments. These include turbidity-flow leveed channels, overbank or levee deposits, frontal splays or distributary-channel complexes, crevasse splays, and debris-flow sheets, lobes, and channels (Fig.
1). Turbidity-Flow Leveed Channels Leveed channels are common depositional elements in slope and basin-floor environments. Leveed channels observed in this study range in width from 3 km to less than 250 m and in sinuosity (i.e., the ratio of channel-axis length to channel-belt length) between 1.2 and 2.2. Some leveed channels are internally characterized by complex cut-and-fill architecture. Many leveed channels show evidence ",
"title": ""
}
] | scidocsrr |
f9f0451cc4a70707c49c6cdcb6508136 | Patient outcome prediction via convolutional neural networks based on multi-granularity medical concept embedding | [
{
"docid": "897a6d208785b144b5d59e4f346134cd",
"text": "Secondary use of electronic health records (EHRs) promises to advance clinical research and better inform clinical decision making. Challenges in summarizing and representing patient data prevent widespread practice of predictive modeling using EHRs. Here we present a novel unsupervised deep feature learning method to derive a general-purpose patient representation from EHR data that facilitates clinical predictive modeling. In particular, a three-layer stack of denoising autoencoders was used to capture hierarchical regularities and dependencies in the aggregated EHRs of about 700,000 patients from the Mount Sinai data warehouse. The result is a representation we name \"deep patient\". We evaluated this representation as broadly predictive of health states by assessing the probability of patients to develop various diseases. We performed evaluation using 76,214 test patients comprising 78 diseases from diverse clinical domains and temporal windows. Our results significantly outperformed those achieved using representations based on raw EHR data and alternative feature learning strategies. Prediction performance for severe diabetes, schizophrenia, and various cancers were among the top performing. These findings indicate that deep learning applied to EHRs can derive patient representations that offer improved clinical predictions, and could provide a machine learning framework for augmenting clinical decision systems.",
"title": ""
},
{
"docid": "42c890832d861ad2854fd1f56b13eb45",
"text": "We apply deep learning to the problem of discovery and detection of characteristic patterns of physiology in clinical time series data. We propose two novel modifications to standard neural net training that address challenges and exploit properties that are peculiar, if not exclusive, to medical data. First, we examine a general framework for using prior knowledge to regularize parameters in the topmost layers. This framework can leverage priors of any form, ranging from formal ontologies (e.g., ICD9 codes) to data-derived similarity. Second, we describe a scalable procedure for training a collection of neural networks of different sizes but with partially shared architectures. Both of these innovations are well-suited to medical applications, where available data are not yet Internet scale and have many sparse outputs (e.g., rare diagnoses) but which have exploitable structure (e.g., temporal order and relationships between labels). However, both techniques are sufficiently general to be applied to other problems and domains. We demonstrate the empirical efficacy of both techniques on two real-world hospital data sets and show that the resulting neural nets learn interpretable and clinically relevant features.",
"title": ""
},
{
"docid": "062c970a14ac0715ccf96cee464a4fec",
"text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.",
"title": ""
}
] | [
{
"docid": "6ec3c98e78e78303a0dc0068ab90a17d",
"text": "INTRODUCTION\nIn this study we report a large series of patients with unilateral winged scapula (WS), with special attention to long thoracic nerve (LTN) palsy.\n\n\nMETHODS\nClinical and electrodiagnostic data were collected from 128 patients over a 25-year period.\n\n\nRESULTS\nCauses of unilateral WS were LTN palsy (n = 70), spinal accessory nerve (SAN) palsy (n = 39), both LTN and SAN palsy (n = 5), facioscapulohumeral dystrophy (FSH) (n = 5), orthopedic causes (n = 11), voluntary WS (n = 6), and no definite cause (n = 2). LTN palsy was related to neuralgic amyotrophy (NA) in 61 patients and involved the right side in 62 patients.\n\n\nDISCUSSION\nClinical data allow for identifying 2 main clinical patterns for LTN and SAN palsy. Electrodiagnostic examination should consider bilateral nerve conduction studies of the LTN and SAN, and needle electromyography of their target muscles. LTN palsy is the most frequent cause of unilateral WS and is usually related to NA. Voluntary WS and FSH must be considered in young patients. Muscle Nerve 57: 913-920, 2018.",
"title": ""
},
{
"docid": "a986826041730d953dfbf9fbc1b115a6",
"text": "This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms, called REINFORCE algorithms, are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do this without explicitly computing gradient estimates or even storing information from which such estimates could be computed. Specific examples of such algorithms are presented, some of which bear a close relationship to certain existing algorithms while others are novel but potentially interesting in their own right. Also given are results that show how such algorithms can be naturally integrated with backpropagation. We close with a brief discussion of a number of additional issues surrounding the use of such algorithms, including what is known about their limiting behaviors as well as further considerations that might be used to help develop similar but potentially more powerful reinforcement learning algorithms.",
"title": ""
},
{
"docid": "49fbe9ddc3087c26ecc373c6731fca77",
"text": "Alarm correlation plays an important role in improving the service and reliability in modern telecommunication networks. Most previous research of alarm correlation didn't consider the effects of noise data in the database. This paper focuses on the method of discovering alarm correlation rules from the database containing noise data. We firstly define two parameters Win_freq and Win_add as the measures of noise data and then present the Robust_search algorithm to solve the problem. At different sizes of Win_freq and Win_add, the experiments on alarm database containing noise data show that the Robust_search Algorithm can discover more rules with the bigger size of Win_add. We also compare two different interestingness measures of confidence and correlation by experiments.",
"title": ""
},
{
"docid": "c29a2429d6dd7bef7761daf96a29daaf",
"text": "In this meta-analysis, we synthesized data from published journal articles that investigated viewers’ enjoyment of fright and violence. Given the limited research on this topic, this analysis was primarily a way of summarizing the current state of knowledge and developing directions for future research. The studies selected (a) examined frightening or violent media content; (b) used self-report measures of enjoyment or preference for such content (the dependent variable); and (c) included independent variables that were given theoretical consideration in the literature. The independent variables examined were negative affect and arousal during viewing, empathy, sensation seeking, aggressiveness, and the respondents’ gender and age. The analysis confirmed that male viewers, individuals lower in empathy, and those higher in sensation seeking and aggressiveness reported more enjoyment of fright and violence. Some support emerged for Zillmann’s (1980, 1996) model of suspense enjoyment. Overall, the results demonstrate the importance of considering how viewers interpret or appraise their reactions to fright and violence. However, the studies were so diverse in design and measurement methods that it was difficult to identify the underlying processes. Suggestions are proposed for future research that will move toward the integration of separate lines of inquiry in a unified approach to understanding entertainment. MEDIA PSYCHOLOGY, 7, 207–237 Copyright © 2005, Lawrence Erlbaum Associates, Inc.",
"title": ""
},
{
"docid": "d580f60d48331b37c55f1e9634b48826",
"text": "The fifth generation (5G) wireless network technology is to be standardized by 2020, where main goals are to improve capacity, reliability, and energy efficiency, while reducing latency and massively increasing connection density. An integral part of 5G is the capability to transmit touch perception type real-time communication empowered by applicable robotics and haptics equipment at the network edge. In this regard, we need drastic changes in network architecture including core and radio access network (RAN) for achieving end-to-end latency on the order of 1 ms. In this paper, we present a detailed survey on the emerging technologies to achieve low latency communications considering three different solution domains: 1) RAN; 2) core network; and 3) caching. We also present a general overview of major 5G cellular network elements such as software defined network, network function virtualization, caching, and mobile edge computing capable of meeting latency and other 5G requirements.",
"title": ""
},
{
"docid": "ca17638b251d20cca2973a3f551b822f",
"text": "The first edition of Artificial Intelligence: A Modern Approach has become a classic in the AI literature. It has been adopted by over 600 universities in 60 countries, and has been praised as the definitive synthesis of the field. In the second edition, every chapter has been extensively rewritten. Significant new material has been introduced to cover areas such as constraint satisfaction, fast propositional inference, planning graphs, internet agents, exact probabilistic inference, Markov Chain Monte Carlo techniques, Kalman filters, ensemble learning methods, statistical learning, probabilistic natural language models, probabilistic robotics, and ethical aspects of AI. The book is supported by a suite of online resources including source code, figures, lecture slides, a directory of over 800 links to \"AI on the Web,\" and an online discussion group. All of this is available at: aima.cs.berkeley.edu.",
"title": ""
},
{
"docid": "4e263764fd14f643f7b414bc12615565",
"text": "We present a superpixel method for full spatial phase and amplitude control of a light beam using a digital micromirror device (DMD) combined with a spatial filter. We combine square regions of nearby micromirrors into superpixels by low pass filtering in a Fourier plane of the DMD. At each superpixel we are able to independently modulate the phase and the amplitude of light, while retaining a high resolution and the very high speed of a DMD. The method achieves a measured fidelity F = 0.98 for a target field with fully independent phase and amplitude at a resolution of 8 × 8 pixels per diffraction limited spot. For the LG10 orbital angular momentum mode the calculated fidelity is F = 0.99993, using 768 × 768 DMD pixels. The superpixel method reduces the errors when compared to the state of the art Lee holography method for these test fields by 50% and 18%, with a comparable light efficiency of around 5%. Our control software is publicly available.",
"title": ""
},
{
"docid": "7afa24cc5aa346b79436c1b9b7b15b23",
"text": "Humans demonstrate remarkable abilities to predict physical events in complex scenes. Two classes of models for physical scene understanding have recently been proposed: “Intuitive Physics Engines”, or IPEs, which posit that people make predictions by running approximate probabilistic simulations in causal mental models similar in nature to video-game physics engines, and memory-based models, which make judgments based on analogies to stored experiences of previously encountered scenes and physical outcomes. Versions of the latter have recently been instantiated in convolutional neural network (CNN) architectures. Here we report four experiments that, to our knowledge, are the first rigorous comparisons of simulation-based and CNN-based models, where both approaches are concretely instantiated in algorithms that can run on raw image inputs and produce as outputs physical judgments such as whether a stack of blocks will fall. Both approaches can achieve super-human accuracy levels and can quantitatively predict human judgments to a similar degree, but only the simulation-based models generalize to novel situations in ways that people do, and are qualitatively consistent with systematic perceptual illusions and judgment asymmetries that people show.",
"title": ""
},
{
"docid": "f7d728041dacdd701d2e9700864121ae",
"text": "This article analyzes late-life depression, looking carefully at what defines a person as elderly, the incidence of late-life depression, complications and differences in symptoms between young and old patients with depression, subsyndromal depression, bipolar depression in the elderly, the relationship between grief and depression, along with sleep disturbances and suicidal ideation.",
"title": ""
},
{
"docid": "b8322d65e61be7fb252b2e418df85d3e",
"text": "Abstract. Algorithms of filtering, edge detection, and extraction of details and their implementation using cellular neural networks (CNN) are developed in this paper. The theory of CNN based on universal binary neurons (UBN) is also developed. A new learning algorithm for this type of neurons is carried out. Implementation of low-pass filtering algorithms using CNN is considered. Separate processing of the binary planes of gray-scale images is proposed. Algorithms of edge detection and impulsive noise filtering based on this approach and their implementation using CNN-UBN are presented. Algorithms of frequency correction reduced to filtering in the spatial domain are considered. These algorithms make it possible to extract details of given sizes. Implementation of such algorithms using CNN is presented. Finally, a general strategy of gray-scale image processing using CNN is considered. © 1997 SPIE and IS&T. [S1017-9909(97)00703-4]",
"title": ""
},
{
"docid": "d646a27556108caebd7ee5691c98d642",
"text": "■ Abstract Theory and research on small group performance and decision making is reviewed. Recent trends in group performance research have found that process gains as well as losses are possible, and both are frequently explained by situational and procedural contexts that differentially affect motivation and resource coordination. Research has continued on classic topics (e.g., brainstorming, group goal setting, stress, and group performance) and relatively new areas (e.g., collective induction). Group decision making research has focused on preference combination for continuous response distributions and group information processing. New approaches (e.g., group-level signal detection) and traditional topics (e.g., groupthink) are discussed. New directions, such as nonlinear dynamic systems, evolutionary adaptation, and technological advances, should keep small group research vigorous well into the future.",
"title": ""
},
{
"docid": "67974bd363f89a9da77b2e09851905d3",
"text": "Traditional approaches to the task of ACE event detection primarily regard multiple events in one sentence as independent ones and recognize them separately by using sentence-level information. However, events in one sentence are usually interdependent and sentence-level information is often insufficient to resolve ambiguities for some types of events. This paper proposes a novel framework dubbed as Hierarchical and Bias Tagging Networks with Gated Multi-level Attention Mechanisms (HBTNGMA) to solve the two problems simultaneously. Firstly, we propose a hierarchical and bias tagging networks to detect multiple events in one sentence collectively. Then, we devise a gated multi-level attention to automatically extract and dynamically fuse the sentence-level and document-level information. The experimental results on the widely used ACE 2005 dataset show that our approach significantly outperforms other state-of-the-art methods.",
"title": ""
},
{
"docid": "66dda817ec57dfe5b2acb611fdb0101c",
"text": "Magnetometers and accelerometers are sensors that are now integrated in objects of everyday life like automotive applications, mobile phones and so on. Some applications need information of acceleration and attitude with a high accuracy. For example, MEMS magnetometers and accelerometers can be integrated in embedded systems like mobile phones and GPS receivers. The parameters of such sensors must be precisely estimated to avoid drift and biased values. Thus, calibration is an important step to correctly use these sensors and get the expected measurements. This paper presents the theoretical and experimental steps of a method to compute gains, bias and non orthogonality factors of magnetometer and accelerometer sensors. This method of calibration can be used for automatic calibration in embedded systems. The calibration procedure involves arbitrary rotations of the sensors platform and a visual 2D projection of measurements.",
"title": ""
},
{
"docid": "57502ae793808fded7d446a3bb82ca74",
"text": "Over the last decade, the “digitization” of the electric enterprise has grown at exponential rates. Utility, industrial, commercial, and even residential consumers are transforming all aspects of their lives into the digital domain. Moving forward, it is expected that every piece of equipment, every receptacle, every switch, and even every light bulb will possess some type of setting, monitoring and/or control. In order to be able to manage the large number of devices and to enable the various devices to communicate with one another, a new communication model was needed. That model has been developed and standardized as IEC61850 – Communication Networks and Systems in Substations. This paper looks at the needs of next generation communication systems and provides an overview of the IEC61850 protocol and how it meets these needs. I. Communication System Needs Communication has always played a critical role in the real-time operation of the power system. In the beginning, the telephone was used to communicate line loadings back to the control center as well as to dispatch operators to perform switching operations at substations. Telephone-switching based remote control units were available as early as the 1930’s and were able to provide status and control for a few points. As digital communications became a viable option in the 1960’s, data acquisition systems (DAS) were installed to automatically collect measurement data from the substations. Since bandwidth was limited, DAS communication protocols were optimized to operate over low-bandwidth communication channels. The “cost” of this optimization was the time it took to configure, map, and document the location of the various data bits received by the protocol. As we move into the digital age, literally thousands of analog and digital data points are available in a single Intelligent Electronic Device (IED) and communication bandwidth is no longer a limiting factor. 
Substation to master communication data paths operating at 64,000 bits per second are becoming commonplace with an obvious migration path to much higher rates. With this migration in technology, the “cost” component of a data acquisition system has now become the configuration and documentation component. Consequently, a key component of a communication system is the ability of devices to describe themselves from both a data and services (communication functions that an IED performs) perspective. Other “key” requirements include: • High-speed IED to IED communication",
"title": ""
},
{
"docid": "cc6c485fdd8d4d61c7b68bfd94639047",
"text": "Passive geolocaton of communication emitters provides great benefits to military and civilian surveillance and security operations. Time Difference of Arrival (TDOA) and Frequency Difference of Arrival (FDOA) measurement combination for stationary emitters may be obtained by sensors mounted on mobile platforms, for example on a pair of UAVs. Complex Ambiguity Function (CAF) of received complex signals can be efficiently calculated to provide required TDOA / FDOA measurement combination. TDOA and FDOA measurements are nonlinear in the sense that the emitter uncertainty given measurements in the Cartesian domain is non-Gaussian. Multiple non-linear measurements of emitter location need to be fused to provide the geolocation estimates. Gaussian Mixture Measurement (GMM) filter fuses nonlinear measurements as long as the uncertainty of each measurement in the surveillance (Cartesian) space is modeled by a Gaussian Mixture. Simulation results confirm this approach and compare it with geolocation using Bearings Only (BO) measurements.",
"title": ""
},
{
"docid": "0c9bbeaa783b2d6270c735f004ecc47f",
"text": "This paper pulls together existing theory and evidence to assess whether international financial liberalization, by improving the functioning of domestic financial markets and banks, accelerates economic growth. The analysis suggests that the answer is yes. First, liberalizing restrictions on international portfolio flows tends to enhance stock market liquidity. In turn, enhanced stock market liquidity accelerates economic growth primarily by boosting productivity growth. Second, allowing greater foreign bank presence tends to enhance the efficiency of the domestic banking system. In turn, better-developed banks spur economic growth primarily by accelerating productivity growth. Thus, international financial integration can promote economic development by encouraging improvements in the domestic financial system. *Levine: Finance Department, Carlson School of Management, University of Minnesota, 321 19 Avenue South, Minneapolis, MN 55455. Tel: 612-624-9551, Fax: 612-626-1335, E-mail: [email protected]. I thank, without implicating, Maria Carkovic and two anonymous referees for very helpful comments. JEL Classification Numbers: F3, G2, O4 Abbreviations: GDP, TFP Number of Figures: 0 Number of Tables: 2 Date: September 5, 2000 Address of Contact Author: Ross Levine, Finance Department, Carlson School of Management, University of Minnesota, 321 19 Avenue South, Minneapolis, MN 55455. Tel: 612-624-9551, Fax: 612-626-1335, E-mail: [email protected].",
"title": ""
},
{
"docid": "2793f528a9b29345b1ee8ce1202933e3",
"text": "Neural Networks are prevalent in today's NLP research. Despite their success for different tasks, training time is relatively long. We use Hogwild! to counteract this phenomenon and show that it is a suitable method to speed up training Neural Networks of different architectures and complexity. For POS tagging and translation we report considerable speedups of training, especially for the latter. We show that Hogwild! can be an important tool for training complex NLP architectures.",
"title": ""
},
{
"docid": "f5e934d65fa436cdb8e5cfa81ea29028",
"text": "Recently, there has been substantial research on augmenting aggregate forecasts with individual consumer data from internet platforms, such as search traffic or social network shares. Although the majority of studies report increased accuracy, many exhibit design weaknesses including lack of adequate benchmarks or rigorous evaluation. Furthermore, their usefulness over the product life-cycle has not been investigated, which may change, as initially, consumers may search for pre-purchase information, but later for after-sales support. In this study, we first review the relevant literature and then attempt to support the key findings using two forecasting case studies. Our findings are in stark contrast to the literature, and we find that established univariate forecasting benchmarks, such as exponential smoothing, consistently perform better than when online information is included. Our research underlines the need for thorough forecast evaluation and argues that online platform data may be of limited use for supporting operational decisions.",
"title": ""
},
{
"docid": "3b302ce4b5b8b42a61c7c4c25c0f3cbf",
"text": "This paper describes quorum leases, a new technique that allows Paxos-based systems to perform reads with high throughput and low latency. Quorum leases do not sacrifice consistency and have only a small impact on system availability and write latency. Quorum leases allow a majority of replicas to perform strongly consistent local reads, which substantially reduces read latency at those replicas (e.g., by two orders of magnitude in wide-area scenarios). Previous techniques for performing local reads in Paxos systems either (a) sacrifice consistency; (b) allow only one replica to read locally; or (c) decrease the availability of the system and increase the latency of all updates by requiring all replicas to be notified synchronously. We describe the design of quorum leases and evaluate their benefits compared to previous approaches through an implementation running in five geo-distributed Amazon EC2 datacenters.",
"title": ""
},
{
"docid": "842ee1e812d408df7e6f7dfd95e32a36",
"text": "Abstract Phase segregation, the process by which the components of a binary mixture spontaneously separate, is a key process in the evolution and design of many chemical, mechanical, and biological systems. In this work, we present a data-driven approach for the learning, modeling, and prediction of phase segregation. A direct mapping between an initially dispersed, immiscible binary fluid and the equilibrium concentration field is learned by conditional generative convolutional neural networks. Concentration field predictions by the deep learning model conserve phase fraction, correctly predict phase transition, and reproduce area, perimeter, and total free energy distributions up to 98% accuracy.",
"title": ""
}
] | scidocsrr |
ee3f6043d2b4fc2c1ab7bf983cd18563 | Performance analysis of data security algorithms used in the railway traffic control systems | [
{
"docid": "34ceb0e84b4e000b721f87bcbec21094",
"text": "The principal goal guiding the design of any encryption algorithm must be security against unauthorized attacks. However, for all practical applications, performance and the cost of implementation are also important concerns. A data encryption algorithm would not be of much use if it is secure enough but slow in performance because it is a common practice to embed encryption algorithms in other applications such as e-commerce, banking, and online transaction processing applications. Embedding of encryption algorithms in other applications also precludes a hardware implementation, and is thus a major cause of degraded overall performance of the system. In this paper, the four of the popular secret key encryption algorithms, i.e., DES, 3DES, AES (Rijndael), and the Blowfish have been implemented, and their performance is compared by encrypting input files of varying contents and sizes, on different Hardware platforms. The algorithms have been implemented in a uniform language, using their standard specifications, to allow a fair comparison of execution speeds. The performance results have been summarized and a conclusion has been presented. Based on the experiments, it has been concluded that the Blowfish is the best performing algorithm among the algorithms chosen for implementation.",
"title": ""
}
] | [
{
"docid": "debb7f6f8e00b536dd823c4b513f5950",
"text": "It is known that in the Tower of Hanoi graphs there are at most two different shortest paths between any fixed pair of vertices. A formula is given that counts, for a given vertex v, the number of vertices u such that there are two shortest u,v-paths. The formula is expressed in terms of Stern’s diatomic sequence b(n) (n ≥ 0) and implies that only for vertices of degree two this number is zero. Plane embeddings of the Tower of Hanoi graphs are also presented that provide an explicit description of b(n) as the number of elements of the sets of vertices of the Tower of Hanoi graphs intersected by certain lines in the plane. © 2004 Elsevier Ltd. All rights reserved. MSC (2000): 05A15; 05C12; 11B83; 51M15",
"title": ""
},
{
"docid": "1145d2375414afbdd5f1e6e703638028",
"text": "Content addressable memories (CAMs) are very attractive for high-speed table lookups in modern network systems. This paper presents a low-power dual match line (ML) ternary CAM (TCAM) to address the power consumption issue of CAMs. The highly capacitive ML is divided into two segments to reduce the active capacitance and hence the power. We analyze possible cases of mismatches and demonstrate a significant reduction in power (up to 43%) for a small penalty in search speed (4%).",
"title": ""
},
{
"docid": "def621d47a8ead24754b1eebe590314a",
"text": "Existing social-aware routing protocols for packet switched networks make use of the information about the social structure of the network deduced by state information of nodes (e.g., history of past encounters) to optimize routing. Although these approaches are shown to have superior performance to social-oblivious, stateless routing protocols (BinarySW, Epidemic), the improvement comes at the cost of considerable storage overhead required on the nodes. In this paper we present SANE, the first routing mechanism that combines the advantages of both social-aware and stateless approaches. SANE is based on the observation - that we validate on a real-world trace - that individuals with similar interests tend to meet more often. In SANE, individuals (network members) are characterized by their interest profile, a compact representation of their interests. By implementing a simple routing rule based on interest profile similarity, SANE is free of network state information, thus overcoming the storage capacity problem with existing social-aware approaches. Through thorough experiments, we show the superiority of SANE over existing approaches, both stateful, social-aware and stateless, social-oblivious. We discuss the statelessness of our approach in the supplementary file of this manuscript, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/TPDS.2014.2307857. Our interest-based approach easily enables innovative networking services, such as interest-casting. An interest-casting protocol is also introduced in this paper, and evaluated through experiments based on both real-world and synthetic mobility traces.",
"title": ""
},
{
"docid": "6dd81725ffdb5a90c9f02c4faca784a3",
"text": "In 1989 the IT function of the exploration and production division of British Petroleum Company set out to transform itself in response to a severe economic environment and poor internal perceptions of IT performance. This case study traces and analyzes the changes made over six years. The authors derive a model of the transformed IT organization comprising seven components which they suggest can guide IT departments in general as they seek to reform themselves in the late 1990's. This model is seen to fit well with recent thinking on general management in that the seven components of change can be reclassified into the Bartlett and Ghoshal (1994) framework of Purpose, Process and People. Some suggestions are made on how to apply the model in other organizations.",
"title": ""
},
{
"docid": "eee51fc5cd3bee512b01193fa396e19a",
"text": "Croston’s method is widely used to predict inventory demand when it is intermittent. However, it is an ad hoc method with no properly formulated underlying stochastic model. In this paper, we explore possible models underlying Croston’s method and three related methods, and we show that any underlying model will be inconsistent with the properties of intermittent demand data. However, we find that the point forecasts and prediction intervals based on such underlying models may still be useful. [JEL: C53, C22, C51]",
"title": ""
},
{
"docid": "87785a3cd233389e23f4773f24c17d1d",
"text": "Modern processors use high-performance cache replacement policies that outperform traditional alternatives like least-recently used (LRU). Unfortunately, current cache models do not capture these high-performance policies as most use stack distances, which are inherently tied to LRU or its variants. Accurate predictions of cache performance enable many optimizations in multicore systems. For example, cache partitioning uses these predictions to divide capacity among applications in order to maximize performance, guarantee quality of service, or achieve other system objectives. Without an accurate model for high-performance replacement policies, these optimizations are unavailable to modern processors. We present a new probabilistic cache model designed for high-performance replacement policies. It uses absolute reuse distances instead of stack distances, and models replacement policies as abstract ranking functions. These innovations let us model arbitrary age-based replacement policies. Our model achieves median error of less than 1% across several high-performance policies on both synthetic and SPEC CPU2006 benchmarks. Finally, we present a case study showing how to use the model to improve shared cache performance.",
"title": ""
},
{
"docid": "88785ff4fe8ff37edebbf8c74f8e2465",
"text": "We propose a data-driven method for automatic deception detection in real-life trial data using visual and verbal cues. Using OpenFace with facial action unit recognition, we analyze the movement of facial features of the witness when posed with questions and the acoustic patterns using OpenSmile. We then perform a lexical analysis on the spoken words, emphasizing the use of pauses and utterance breaks, feeding that to a Support Vector Machine to test deceit or truth prediction. We then try out a method to incorporate utterance-based fusion of visual and lexical analysis, using string based matching.",
"title": ""
},
{
"docid": "8ba7352e7726f47be779a699a422ecb5",
"text": "Autonomous driving has attracted tremendous attention especially in the past few years. The key techniques for a self-driving car include solving tasks like 3D map construction, self-localization, parsing the driving road and understanding objects, which enable vehicles to reason and act. However, large-scale datasets for training and system evaluation are still a bottleneck for developing robust perception models. In this paper, we present the ApolloScape dataset [1] and its applications for autonomous driving. Compared with existing public datasets from real scenes, e.g. KITTI [2] or Cityscapes [3], ApolloScape contains much larger and richer labelling including holistic semantic dense point cloud for each site, stereo, per-pixel semantic labelling, lanemark labelling, instance segmentation, 3D car instance, and highly accurate location for every frame in various driving videos from multiple sites, cities and daytimes. For each task, it contains at least 15x more images than SOTA datasets. To label such a complete dataset, we develop various tools and algorithms specific to each task to accelerate the labelling process, such as 3D-2D segment labeling tools, active labelling in videos, etc. Based on ApolloScape, we are able to develop algorithms that jointly consider the learning and inference of multiple tasks. In this paper, we provide a sensor fusion scheme integrating camera videos, consumer-grade motion sensors (GPS/IMU), and a 3D semantic map in order to achieve robust self-localization and semantic segmentation for autonomous driving. We show that practically, sensor fusion and joint learning of multiple tasks are beneficial to achieve a more robust and accurate system. We expect our dataset and proposed relevant algorithms can support and motivate researchers for further development of multi-sensor fusion and multi-task learning in the field of computer vision.",
"title": ""
},
{
"docid": "89263084f29469d1c363da55c600a971",
"text": "Today, with more than 1 billion Android users all over the world, Android's popularity has no equal. These days mobile phones have become so intrusive in our daily lives that, when needed, they can give a huge amount of information to forensic examiners. As of the writing of this paper, many papers cite the need for mobile device forensics and ways of obtaining vital artifacts from mobile devices for different purposes. With the vast range of popular and less popular forensic tools and techniques available today, this paper aims to bring them together under a comparative study so that it can serve as a starting point for Android users, future forensic examiners, and investigators. During our survey we found a scarcity of papers on tools for Android forensics. In this paper we have analyzed different tools and techniques used in Android forensics and tabulated the results and findings at the end.",
"title": ""
},
{
"docid": "4adfc2bf6907305fc4da20a5b753c2b1",
"text": "Book recommendation systems can benefit commercial websites, social media sites, and digital libraries, to name a few, by alleviating the knowledge acquisition process of users who look for books that are appealing to them. Even though existing book recommenders, which are based on either collaborative filtering, text content, or the hybrid approach, aid users in locating books (among the millions available), their recommendations are not personalized enough to meet users’ expectations due to their collective assumption on group preference and/or exact content matching, which is a failure. To address this problem, we have developed PBRecS, a book recommendation system that is based on social interactions and personal interests to suggest books appealing to users. PBRecS relies on the friendships established on a social networking site, such as LibraryThing, to generate more personalized suggestions by including in the recommendations solely books that belong to a user’s friends who share common interests with the user, in addition to applying word-correlation factors for partially matching book tags to disclose books similar in contents. The conducted empirical study on data extracted from LibraryThing has verified (i) the effectiveness of PBRecS using social-media data to improve the quality of book recommendations and (ii) that PBRecS outperforms the recommenders employed by Amazon and LibraryThing.",
"title": ""
},
{
"docid": "64fbffe75209359b540617fac4930c44",
"text": "Recent developments in information technology have enabled collection and processing of vast amounts of personal data, such as criminal records, shopping habits, credit and medical history, and driving records. This information is undoubtedly very useful in many areas, including medical research, law enforcement and national security. However, there is an increasing public concern about the individuals' privacy. Privacy is commonly seen as the right of individuals to control information about themselves. The appearance of technology for Knowledge Discovery and Data Mining (KDDM) has revitalized concern about the following general privacy issues: • secondary use of the personal information, • handling misinformation, and • granulated access to personal information. They demonstrate that existing privacy laws and policies are well behind the developments in technology, and no longer offer adequate protection. We also discuss new privacy threats posed KDDM, which includes massive data collection, data warehouses, statistical analysis and deductive learning techniques. KDDM uses vast amounts of data to generate hypotheses and discover general patterns. KDDM poses the following new challenges to privacy.",
"title": ""
},
{
"docid": "238b49907eb577647354e4145f4b1e7e",
"text": "The work here presented contributes to the development of ground target tracking control systems for fixed wing unmanned aerial vehicles (UAVs). The control laws are derived at the kinematic level, relying on a commercial inner loop controller onboard that accepts commands in indicated air speed and bank, and appropriately sets the control surface deflections and thrust in order to follow those references in the presence of unknown wind. Position and velocity of the target on the ground are assumed to be known. The algorithm proposed derives from a path following control law that enables the UAV to converge to a circumference centered at the target and moving with it, thus keeping the UAV in the vicinity of the target even if the target moves at a velocity lower than the UAV stall speed. If the target speed is close to the UAV speed, the control law behaves similar to a controller that tracks a particular point on the circumference centered at the target position. Real flight test results show the good performance of the control scheme presented.",
"title": ""
},
{
"docid": "4d5461e076839bf2364a190808959acb",
"text": "environment, are becoming increasingly prevalent. However, if agents are to behave intelligently in complex, dynamic, and noisy environments, we believe that they must be able to learn and adapt. The reinforcement learning (RL) paradigm is a popular way for such agents to learn from experience with minimal feedback. One of the central questions in RL is how best to generalize knowledge to successfully learn and adapt. In reinforcement learning problems, agents sequentially observe their state and execute actions. The goal is to maximize a real-valued reward signal, which may be time delayed. For example, an agent could learn to play a game by being told what the state of the board is, what the legal actions are, and then whether it wins or loses at the end of the game. However, unlike in supervised learning scenarios, the agent is never provided the “correct” action. Instead, the agent can only gather data by interacting with an environment, receiving information about the results, its actions, and the reward signal. RL is often used because of the framework’s flexibility and due to the development of increasingly data-efficient algorithms. RL agents learn by interacting with the environment, gathering data. If the agent is virtual and acts in a simulated environment, training data can be collected at the expense of computer time. However, if the agent is physical, or the agent must act on a “real-world” problem where the online reward is critical, such data can be expensive. For instance, a physical robot will degrade over time and must be replaced, and an agent learning to automate a company’s operations may lose money while training. When RL agents begin learning tabula rasa, mastering difficult tasks may be infeasible, as they require significant amounts of data even when using state-of-the-art RL approaches. There are many contemporary approaches to speed up “vanilla” RL methods. Transfer learning (TL) is one such technique. Transfer learning is an umbrella term used when knowledge is",
"title": ""
},
{
"docid": "e84e83443d65498a7ea37669122389e5",
"text": "In many scientific and engineering applications, we are tasked with the optimisation of an expensive to evaluate black box function f . Traditional methods for this problem assume just the availability of this single function. However, in many cases, cheap approximations to f may be obtainable. For example, the expensive real world behaviour of a robot can be approximated by a cheap computer simulation. We can use these approximations to eliminate low function value regions cheaply and use the expensive evaluations of f in a small but promising region and speedily identify the optimum. We formalise this task as a multi-fidelity bandit problem where the target function and its approximations are sampled from a Gaussian process. We develop MF-GP-UCB, a novel method based on upper confidence bound techniques. In our theoretical analysis we demonstrate that it exhibits precisely the above behaviour, and achieves better regret than strategies which ignore multi-fidelity information. MF-GP-UCB outperforms such naive strategies and other multi-fidelity methods on several synthetic and real experiments.",
"title": ""
},
{
"docid": "28ece47474132a3f8df9aa39be02d194",
"text": "The degree of heavy metal (Hg, Cr, Cd, and Pb) pollution in honeybees (Apis mellifera) was investigated in several sampling sites around central Italy including both polluted and wildlife areas. The honeybee readily inhabits all environmental compartments, such as soil, vegetation, air, and water, and actively forages the area around the hive. Therefore, if it functions in a polluted environment, plant products used by bees may also be contaminated, and as a result, also a part of these pollutants will accumulate in the organism. The bees, foragers in particular, are good biological indicators that quickly detect the chemical impairment of the environment by the high mortality and the presence of pollutants in their body or in beehive products. The experiment was carried out using 24 colonies of honeybees bred in hives dislocated whether within urban areas or in wide countryside areas. Metals were analyzed on the foragers during all spring and summer seasons, when the bees were active. Results showed no presence of mercury in all samples analyzed, but honeybees accumulated several amounts of lead, chromium, and cadmium. Pb reported a statistically significant difference among the stations located in urban areas and those in the natural reserves, showing the highest values in honeybees collected from hives located in Ciampino area (Rome), next to the airport. The mean value for this sampling station was 0.52 mg kg−1, and July and September were characterized by the highest concentrations of Pb. Cd also showed statistically significant differences among areas, while for Cr no statistically significant differences were found.",
"title": ""
},
{
"docid": "3e83dd048f23e63982c5766690661fe9",
"text": "The Reactor design pattern handles service requests that are delivered concurrently to an application by one or more clients. Each service in an application may consist of several methods and is represented by a separate event handler that is responsible for dispatching service-specific requests. Dispatching of event handlers is performed by an initiation dispatcher, which manages the registered event handlers. Demultiplexing of service requests is performed by a synchronous event demultiplexer.",
"title": ""
},
{
"docid": "bb2c7c7d064eebcef527efe93a7c873b",
"text": "We have proposed and verified an efficient architecture for a high-speed I/O transceiver design that implements far-end crosstalk (FEXT) cancellation. In this design, TX pre-emphasis, used traditionally to reduce ISI, is combined with FEXT cancellation at the transmitter to remove crosstalk-induced jitter and interference. The architecture has been verified via simulation models based on channel measurement. A prototype implementation of a 12.8Gbps source-synchronous serial link transmitter has been developed in TSMC's 0.18μm CMOS technology. The proposed design consists of three 12.8Gbps data lines that use a half-rate PLL clock of 6.4GHz. The chip includes a PRBS generator to simplify multi-lane testing. Simulation results show that, even with a 2× reduction in line separation, FEXT cancellation can successfully reduce jitter by 51.2 %UI and widen the eye by 14.5%. The 2.5 × 1.5 mm² core consumes 630mW per lane at 12.8Gbps with a 1.8V supply.",
"title": ""
},
{
"docid": "dfd5de557cbd3338aa2321e4f7aeca1c",
"text": "N Engl J Med 2005;353:1387-94. Copyright © 2005 Massachusetts Medical Society. A 56-year-old man was referred to the transplantation infectious-disease clinic because of a low-grade fever and left axillary lymphadenopathy. The patient had received a cadaveric kidney transplant five years earlier for polycystic kidney disease. He had been in his usual state of health until three weeks before the referral to the infectious-disease clinic, when he discovered palpable, tender lymph nodes in the left epitrochlear region and axilla. Ten days later a low-grade fever, dry cough, nasal congestion, and night sweats developed, for which trimethoprim–sulfamethoxazole was prescribed, without benefit. He was referred to a specialist in infectious diseases. The patient did not have headache, sore throat, chest or abdominal pain, dyspnea, diarrhea, or dysuria. He had hypertension, gout, nephrolithiasis, gastroesophageal reflux disease, and prostate cancer, which had been treated with radiation therapy two years earlier. He was a policeman who worked in an office. He had not traveled outside of the United States recently. He had acquired a kitten several months earlier and recalled receiving multiple scratches on his hands when he played with it. His medications were cyclosporine (325 mg daily), mycophenolate mofetil (2 g daily), amlodipine, furosemide, colchicine, doxazosin, and pravastatin. Prednisone had been discontinued one year previously. He reported no allergies to medications. The temperature was 36.0°C and the blood pressure 105/75 mm Hg. On physical examination, the patient appeared well. The head, neck, lungs, heart, and abdomen were unremarkable. On the dorsum of the left hand was a single, violaceous nodule with a flat, necrotic eschar on top (Fig. 1); there was no erythema, fluctuance, pus, or other drainage, and there was no sinus tract. The patient said that this lesion had nearly healed, but that he had been scratching it and thought that this irritation prevented it from healing. There was a tender left epitrochlear lymph node, 2 cm by 2 cm, and a mass of matted, tender lymph nodes, 5 cm in diameter, in the left axilla. There was no lymphangitic streaking or cellulitis. The results of a complete blood count revealed no abnormalities (Table 1). Additional laboratory studies were obtained, and clarithromycin (500 mg, twice a day) was prescribed. Within a day of starting treatment, the patient’s temperature rose to 39.4°C, and the fever was accompanied by shaking chills. He was admitted to the hospital. The temperature was 38.6°C, the pulse was 78 beats per minute, and the blood pressure was 100/60 mm Hg. The results of a physical examination were unchanged.",
"title": ""
},
{
"docid": "1152fde10a30dc0d28838988d5207a34",
"text": "The ability to write diverse poems in different styles under the same poetic imagery is an important characteristic of human poetry writing. Most previous works on automatic Chinese poetry generation focused on improving the coherency among lines. Some work explored style transfer but suffered from expensive expert labeling of poem styles. In this paper, we target on stylistic poetry generation in a fully unsupervised manner for the first time. We propose a novel model which requires no supervised style labeling by incorporating mutual information, a concept in information theory, into modeling. Experimental results show that our model is able to generate stylistic poems without losing fluency and coherency.",
"title": ""
},
{
"docid": "a1b387e3199aa1c70fa07196426af256",
"text": "Hyperbolic embeddings offer excellent quality with few dimensions when embedding hierarchical data structures. We give a combinatorial construction that embeds trees into hyperbolic space with arbitrarily low distortion without optimization. On WordNet, this algorithm obtains a mean-average-precision of 0.989 with only two dimensions, outperforming existing work by 0.11 points. We provide bounds characterizing the precision-dimensionality tradeoff inherent in any hyperbolic embedding. To embed general metric spaces, we propose a hyperbolic generalization of multidimensional scaling (h-MDS). We show how to perform exact recovery of hyperbolic points from distances, provide a perturbation analysis, and give a recovery result that enables us to reduce dimensionality. Finally, we extract lessons from the algorithms and theory above to design a scalable PyTorch-based implementation that can handle incomplete information.",
"title": ""
}
] | scidocsrr |
2e590cd3be228d7cf9aee71e74806c5e | Aerodynamic Loads on Tall Buildings : Interactive Database | [
{
"docid": "c49ae120bca82ef0d9e94115ad7107f2",
"text": "An evaluation and comparison of seven of the world’s major building codes and standards is conducted in this study, with specific discussion of their estimations of the alongwind, acrosswind, and torsional response, where applicable, for a given building. The codes and standards highlighted by this study are those of the United States, Japan, Australia, the United Kingdom, Canada, China, and Europe. In addition, the response predicted by using the measured power spectra of the alongwind, acrosswind, and torsional responses for several building shapes tested in a wind tunnel are presented, and a comparison between the response predicted by wind tunnel data and that estimated by some of the standards is conducted. This study serves not only as a comparison of the response estimates by international codes and standards, but also introduces a new set of wind tunnel data for validation of wind tunnel-based empirical expressions. 1.0 Introduction Under the influence of dynamic wind loads, typical high-rise buildings oscillate in the alongwind, acrosswind, and torsional directions. The alongwind motion primarily results from pressure fluctuations on the windward and leeward faces, which generally follows the fluctuations in the approach flow, at least in the low frequency range. Therefore, alongwind aerodynamic loads may be quantified analytically utilizing quasi-steady and strip theories, with dynamic effects customarily represented by a random-vibration-based “Gust Factor Approach” (Davenport 1967, Vellozzi & Cohen 1968, Vickery 1970, Simiu 1976, Solari 1982, ESDU 1989, Gurley & Kareem 1993). However, the acrosswind motion is introduced by pressure fluctuations on the side faces which are influenced by fluctuations in the separated shear layers and wake dynamics (Kareem 1982). This renders the applicability of strip and quasi-steady theories rather doubtful. Similarly, the wind-induced torsional effects result from an imbalance in the instantaneous pressure distribution on the building surface. These load effects are further amplified in asymmetric buildings as a result of inertial coupling (Kareem 1985). Due to the complexity of the acrosswind and torsional responses, physical modeling of fluid-structure interactions remains the only viable means of obtaining information on wind loads, though recently, research in the area of computational fluid dynamics",
"title": ""
},
{
"docid": "6658b58ae09cfb3fbbafc77a87744e4f",
"text": "Wind loads on structures under the buffeting action of wind gusts have traditionally been treated by the “gust loading factor” (GLF) method in most major codes and standards around the world. In this scheme, the equivalent-static wind loading used for design is equal to the mean wind force multiplied by the GLF. Although the traditional GLF method ensures an accurate estimation of the displacement response, it may fall short in providing a reliable estimate of other response components. To overcome this shortcoming, a more consistent procedure for determining design loads on tall structures is proposed. This paper highlights an alternative model, in which the GLF is based on the base bending moment rather than the displacement. The expected extreme base moment is computed by multiplying the mean base moment by the proposed GLF. The base moment is then distributed to each floor in terms of the floor load in a format that is very similar to the one used to distribute the base shear in earthquake engineering practice. In addition, a simple relationship between the proposed base moment GLF and the traditional GLF is derived, which makes it convenient to employ the proposed approach while utilizing the existing background information. Numerical examples are presented to demonstrate the efficacy of the proposed procedure in light of the traditional approach. This paper also extends the new framework for the formulation of wind load effects in the acrosswind and torsional directions along the “GLF” format that has generally been used for the alongwind response. A 3D GLF concept is advanced, which draws upon a database of aerodynamic wind loads on typical tall buildings, a mode shape correction procedure and a more realistic formulation of the equivalent-static wind loads and their effects. A numerical example is presented to demonstrate the efficacy of the proposed procedure in light of the traditional approach. It is envisaged that the proposed formulation will be most appropriate for inclusion in codes and standards. © 2003 Elsevier Ltd. All rights reserved.",
"title": ""
}
] | [
{
"docid": "198967b505c9ded9255bff7b82fb2781",
"text": "Generative adversarial nets (GANs) have been successfully applied to the artificial generation of image data. In terms of text data, much has been done on the artificial generation of natural language from a single corpus. We consider multiple text corpora as the input data, for which there can be two applications of GANs: (1) the creation of consistent cross-corpus word embeddings given different word embeddings per corpus; (2) the generation of robust bag-of-words document embeddings for each corpora. We demonstrate our GAN models on real-world text data sets from different corpora, and show that embeddings from both models lead to improvements in supervised learning problems.",
"title": ""
},
{
"docid": "8e3b1f49ca8a5afe20a9b66e0088a56a",
"text": "Describing the contents of images is a challenging task for machines to achieve. It requires not only accurate recognition of objects and humans, but also their attributes and relationships as well as scene information. It would be even more challenging to extend this process to identify falls and hazardous objects to aid elderly or users in need of care. This research makes initial attempts to deal with the above challenges to produce multi-sentence natural language description of image contents. It employs a local region based approach to extract regional image details and combines multiple techniques including deep learning and attribute learning through the use of machine learned features to create high level labels that can generate detailed description of real-world images. The system contains the core functions of scene classification, object detection and classification, attribute learning, relationship detection and sentence generation. We have also further extended this process to deal with open-ended fall detection and hazard identification. In comparison to state-of-the-art related research, our system shows superior robustness and flexibility in dealing with test images from new, unrelated domains, which poses great challenges to many existing methods. Our system is evaluated on a subset from Flickr8k and Pascal VOC 2012 and achieves an impressive average BLEU score of 46 and outperforms related research by a significant margin of 10 BLEU score when evaluated with a small dataset of images containing falls and hazardous objects. It also shows impressive performance when evaluated using a subset of IAPR TC-12 dataset.",
"title": ""
},
{
"docid": "b66a2ce976a145827b5b9a5dd2ad2495",
"text": "Compared to previous head-mounted displays, the compact and low-cost Oculus Rift has claimed to offer improved virtual reality experiences. However, how and what kinds of user experiences are encountered by people when using the Rift in actual gameplay has not been examined. We present an exploration of 10 participants' experiences of playing a first-person shooter game using the Rift. Despite cybersickness and a lack of control, participants experienced heightened experiences, a richer engagement with passive game elements, a higher degree of flow and a deeper immersion on the Rift than on a desktop setup. Overly demanding movements, such as the large range of head motion required to navigate the game environment were found to adversely affect gaming experiences. Based on these and other findings, we also present some insights for designing games for the Rift.",
"title": ""
},
{
"docid": "5f5d4ea7915639ca401d2354bfdb0704",
"text": "In next generation cellular networks, cloud computing will have profound impacts on mobile wireless communications. On the one hand, the integration of cloud computing into the mobile environment enables MCC systems. On the other hand, the powerful computing platforms in the cloud for radio access networks lead to a novel concept of C-RAN. In this article we study the topology configuration and rate allocation problem in C-RAN with the objective of optimizing the end-to-end performance of MCC users in next generation cellular networks. We use a decision theoretical approach to tackle the delayed channel state information problem in C-RAN. Simulation results show that the design and operation of future mobile wireless networks can be significantly affected by cloud computing, and the proposed scheme is capable of achieving substantial performance gains over existing schemes.",
"title": ""
},
{
"docid": "af97cf19ca86e1d66b8a81c4b71ff763",
"text": "The mechanisms of anterior cruciate ligament (ACL) injuries are still inconclusive from an epidemiological standpoint. An epidemiological approach in a large sample group over an appropriate period of years will be necessary to enhance the current knowledge of the ACL injury mechanism. The objective of the study was to investigate the ACL injury occurrence in a large sample over twenty years and demonstrate the relationships between the ACL injury occurrence and the dynamic knee alignment at the time of the injury. We investigated the activity, the injury mechanism, and the dynamic knee alignment at the time of the injury in 1,718 patients diagnosed as having the ACL injuries. Regarding the activity at the time of the injury, \"competition \"was the most common, accounting for about half of all the injuries. The current result also showed that the noncontact injury was the most common, which was observed especially in many female athletes. Finally, the dynamic alignment of \"Knee-in & Toe- out \"(i.e. dynamic knee valgus) was the most common, accounting for about half. These results enhance our understanding of the ACL injury mechanism and may be used to guide future injury prevention strategies. Key pointsWe investigated the situation of ACL injury occurrence, especially dynamic alignments at the time of injury, in 1,718 patients who had visited our institution for surgery and physical therapy for twenty years.Our epidemiological study of the large patient group revealed that \"knee-in & toe-out \"alignment was the most frequently seen at the time of the ACL injury.From an epidemiological standpoint, we need to pay much attention to avoiding \"Knee-in & Toe-out \"alignment during sports activities.",
"title": ""
},
{
"docid": "8326f993dbb83e631d2e6892e03520e7",
"text": "Within NASA, there is an increasing awareness that software is of growing importance to the success of missions. Much data has been collected, and many theories have been advanced on how to reduce or eliminate errors in code. However, learning requires experience. This article documents a new NASA initiative to build a centralized repository of software defect data; in particular, it documents one specific case study on software metrics. Software metrics are used as a basis for prediction of errors in code modules, but there are many different metrics available. McCabe is one of the more popular tools used to produce metrics, but, as will be shown in this paper, other metrics can be more significant.",
"title": ""
},
{
"docid": "04231c12db08408b7207b751a4ad7420",
"text": "The fabrication of digital Integrated Circuits (ICs) is increasingly outsourced. Given this trend, security is recognized as an important issue. The threat agent is an attacker at the IC foundry that has information about the circuit and inserts covert, malicious circuitry. The use of 3D IC technology has been suggested as a possible technique to counter this threat. However, to our knowledge, there is no prior work on how such technology can be used effectively. We propose a way to use 3D IC technology for security in this context. Specifically, we obfuscate the circuit by lifting wires to a trusted tier, which is fabricated separately. This is referred to as split manufacturing. For this setting, we provide a precise notion of security, that we call k-security, and a characterization of the underlying computational problems and their complexity. We further propose a concrete approach for identifying sets of wires to be lifted, and the corresponding security they provide. We conclude with a comprehensive empirical assessment with benchmark circuits that highlights the security versus cost trade-offs introduced by 3D IC based circuit obfuscation.",
"title": ""
},
{
"docid": "52ce8c1259050f403723ec38782898f1",
"text": "Indian population is growing very fast and is responsible for posing various environmental risks like traffic noise which is the primitive contributor to the overall noise pollution in urban environment. So, an attempt has been made to develop a web enabled application for spatio-temporal semantic analysis of traffic noise of one of the urban road segments in India. Initially, a traffic noise model was proposed for the study area based on the Calixto model. Later, a City Geographic Markup Language (CityGML) model, which is an OGC encoding standard for 3D data representation, was developed and stored into PostGIS. A web GIS framework was implemented for simulation of traffic noise level mapped on building walls using the data from PostGIS. Finally, spatio-temporal semantic analysis to quantify the effects in terms of threshold noise level, number of walls and roofs affected from start to the end of the day, was performed.",
"title": ""
},
{
"docid": "7d42d3d197a4d62e1b4c0f3c08be14a9",
"text": "Links between issue reports and their corresponding commits in version control systems are often missing. However, these links are important for measuring the quality of a software system, predicting defects, and many other tasks. Several approaches have been designed to solve this problem by automatically linking bug reports to source code commits via comparison of textual information in commit messages and bug reports. Yet, the effectiveness of these techniques is oftentimes suboptimal when commit messages are empty or contain minimum information; this particular problem makes the process of recovering traceability links between commits and bug reports particularly challenging. In this work, we aim at improving the effectiveness of existing bug linking techniques by utilizing rich contextual information. We rely on a recently proposed approach, namely ChangeScribe, which generates commit messages containing rich contextual information by using code summarization techniques. Our approach then extracts features from these automatically generated commit messages and bug reports, and inputs them into a classification technique that creates a discriminative model used to predict if a link exists between a commit message and a bug report. We compared our approach, coined as RCLinker (Rich Context Linker), to MLink, which is an existing state-of-the-art bug linking approach. Our experiment results on bug reports from six software projects show that RCLinker outperforms MLink in terms of F-measure by 138.66%.",
"title": ""
},
{
"docid": "21c1be0458cc6908c3f7feb6591af841",
"text": "Initial work on automatic emotion recognition concentrates mainly on audio-based emotion classification. Speech is the most important channel for the communication between humans and it may expected that emotional states are trans-fered though content, prosody or paralinguistic cues. Besides the audio modality, with the rapidly developing computer hardware and video-processing devices researches start exploring the video modality. Visual-based emotion recognition works focus mainly on the extraction and recognition of emotional information from the facial expressions. There are also attempts to classify emotional states from body or head gestures and to combine different visual modalities, for instance facial expressions and body gesture captured by two separate cameras [3]. Emotion recognition from psycho-physiological measurements, such as skin conductance, respiration, electro-cardiogram (ECG), electromyography (EMG), electroencephalography (EEG) is another attempt. In contrast to speech, gestures or facial expressions these biopotentials are the result of the autonomic nervous system and cannot be imitated [4]. Research activities in facial expression and speech based emotion recognition [6] are usually performed independently from each other. But in almost all practical applications people speak and exhibit facial expressions at the same time, and consequently both modalities should be used in order to perform robust affect recognition. Therefore, multimodal, and in particularly audiovisual emotion recognition has been emerging in recent times [11], for example multiple classifier systems have been widely investigated for the classification of human emotions [1, 9, 12, 14]. Combining classifiers is a promising approach to improve the overall classifier performance [13, 8]. 
In multiple classifier systems (MCS) it is assumed that the raw data X originates from an underlying source, but each classifier receives different subsets of (X) of the same raw input data X. Feature vector F j (X) are used as the input to the j−th classifier computing an estimate y j of the class membership of F j (X). This output y j might be a crisp class label or a vector of class memberships, e.g. estimates of posteriori probabilities. Based on the multiple classifier outputs y 1 ,. .. , y N the combiner produces the final decision y. Combiners used in this study are fixed transformations of the multiple classifier outputs y 1 ,. .. , y N. Examples of such combining rules are Voting, (weighted) Averaging, and Multiplying, just to mention the most popular types. 2 Friedhelm Schwenker In addition to a priori fixed combination rules the combiner can be a …",
"title": ""
},
{
"docid": "3c1db6405945425c61495dd578afd83f",
"text": "This paper describes a novel driver-support system that helps to maintain the correct speed and headway (distance) with respect to lane curvature and other vehicles ahead. The system has been developed as part of the Integrating Project PReVENT under the European Framework Programme 6, which is named SAfe SPEed and safe distaNCE (SASPENCE). The application uses a detailed description of the situation ahead of the vehicle. Many sensors [radar, video camera, Global Positioning System (GPS) and accelerometers, digital maps, and vehicle-to-vehicle wireless local area network (WLAN) connections] are used, and state-of-the-art data fusion provides a model of the environment. The system then computes a feasible maneuver and compares it with the driver's behavior to detect possible mistakes. The warning strategies are based on this comparison. The system “talks” to the driver mainly via a haptic pedal or seat belt and “listens” to the driver mainly via the vehicle acceleration. This kind of operation, i.e., the comparison between what the system thinks is possible and what the driver appears to be doing, and the consequent dialog can be regarded as simple implementations of the rider-horse metaphor (H-metaphor). The system has been tested in several situations (driving simulator, hardware in the loop, and real road tests). Objective and subjective data have been collected, revealing good acceptance and effectiveness, particularly in awakening distracted drivers. The system intervenes only when a problem is actually detected in the headway and/or speed (approaching curves or objects) and has been shown to cause prompt reactions and significant speed correction before getting into really dangerous situations.",
"title": ""
},
{
"docid": "ac040c0c04351ea6487ea6663688ebd6",
"text": "This paper presents the conceptual design, detailed development and flight testing of AtlantikSolar, a 5.6m-wingspan solar-powered Low-Altitude Long-Endurance (LALE) Unmanned Aerial Vehicle (UAV) designed and built at ETH Zurich. The UAV is required to provide perpetual endurance at a geographic latitude of 45°N in a 4-month window centered around June 21st. An improved conceptual design method is presented and applied to maximize the perpetual flight robustness with respect to local meteorological disturbances such as clouds or winds. Airframe, avionics hardware, state estimation and control method development for autonomous flight operations are described. Flight test results include a 12-hour flight relying solely on batteries to replicate night-flight conditions. In addition, we present flight results from Search-And-Rescue field trials where a camera and processing pod were mounted on the aircraft to create high-fidelity 3D-maps of a simulated disaster area.",
"title": ""
},
{
"docid": "00f4af13461c5f6d15d6883afc50c1d1",
"text": "In order to solve the problem that the long cycle and the repetitive work in the process of designing the industrial robot, a modular manipulator system developed for general industrial applications is introduced in this paper. When the application scene is changed, the corresponding robotic modules can be selected to assemble a new robot configuration that meets the requirements. The modules can be divided into two categories: joint modules and link modules. Joint modules consist of three types of revolute joint modules with different torque, and link modules mainly contain T link module and L link module. By connection of different types of modules, various of configurations can be achieved. Considering the traditional 6-DoF manipulators are difficult to meet the needs of the unstructured industrial applications, a 7-DoF redundant manipulator prototype is designed on the basis of the robotic modules.",
"title": ""
},
{
"docid": "a9a3d46bd6f5df951957ddc57d3d390d",
"text": "In this paper, we propose a low-power level shifter (LS) capable of converting extremely low-input voltage into high-output voltage. The proposed LS consists of a pre-amplifier with a logic error correction circuit and an output latch stage. The pre-amplifier generates complementary amplified signals, and the latch stage converts them into full-swing output signals. Simulated results demonstrated that the proposed LS in a 0.18-μm CMOS process can convert a 0.19-V input into 1.8-V output correctly. The energy and the delay time of the proposed LS were 0.24 pJ and 21.4 ns when the low supply voltage, high supply voltage, and the input pulse frequency, were 0.4, 1.8 V, and 100 kHz, respectively.",
"title": ""
},
{
"docid": "7974d3e3e9c431256ee35c3032288bd1",
"text": "Nowadays, the usage of mobile device among the community worldwide has been tremendously increased. With this proliferation of mobile devices, more users are able to access the internet for variety of online application and services. As the use of mobile devices and applications grows, the rate of vulnerabilities exploitation and sophistication of attack towards the mobile user are increasing as well. To date, Google's Android Operating System (OS) are among the widely used OS for the mobile devices, the openness design and ease of use have made them popular among developer and user. Despite the advantages the android-based mobile devices have, it also invited the malware author to exploit the mobile application on the market. Prior to this matter, this research focused on investigating the behaviour of mobile malware through hybrid approach. The hybrid approach correlates and reconstructs the result from the static and dynamic malware analysis in producing a trace of malicious event. Based on the finding, this research proposed a general mobile malware behaviour model that can contribute in identifying the key features in detecting mobile malware on an Android Platform device.",
"title": ""
},
{
"docid": "73dcb2e355679f2e466029fbbb24a726",
"text": "Many of the world's most popular websites catalyze their growth through invitations from existing members. New members can then in turn issue invitations, and so on, creating cascades of member signups that can spread on a global scale. Although these diffusive invitation processes are critical to the popularity and growth of many websites, they have rarely been studied, and their properties remain elusive. For instance, it is not known how viral these cascades structures are, how cascades grow over time, or how diffusive growth affects the resulting distribution of member characteristics present on the site. In this paper, we study the diffusion of LinkedIn, an online professional network comprising over 332 million members, a large fraction of whom joined the site as part of a signup cascade. First we analyze the structural patterns of these signup cascades, and find them to be qualitatively different from previously studied information diffusion cascades. We also examine how signup cascades grow over time, and observe that diffusion via invitations on LinkedIn occurs over much longer timescales than are typically associated with other types of online diffusion. Finally, we connect the cascade structures with rich individual-level attribute data to investigate the interplay between the two. Using novel techniques to study the role of homophily in diffusion, we find striking differences between the local, edge-wise homophily and the global, cascade-level homophily we observe in our data, suggesting that signup cascades form surprisingly coherent groups of members.",
"title": ""
},
{
"docid": "7bc06c5b4fbdbd996f580b8c87b0b949",
"text": "Video streaming over HTTP is becoming the de facto dominating paradigm for today's video applications. HTTP as an over-the-top (OTT) protocol has been leveraged for quality video traversal over the Internet. High user-received quality-of-experience (QoE) is driven not only by the new technology, but also by a wide range of user demands. Given the limitation of a traditional TCP/IP network for supporting video transmission, the typical on-off transfer pattern is inevitable. Dynamic adaptive streaming over HTTP (DASH) establishes a simple architecture and enables new video applications to fully utilize the exiting physical network infrastructure. By deploying robust adaptive algorithms at the client side, DASH can provide a smooth streaming experience. We propose a dynamic adaptive algorithm in order to keep a high QoE for the average user's experience. We formulated our QoE optimization in a set of key factors. The results obtained by our empirical network traces show that our approach not only achieves a high average QoE but it also works stably under different network conditions.",
"title": ""
},
{
"docid": "d2c42797307ca5d8e1c706afe510f316",
"text": "The continued amalgamation of cloud technologies into all aspects of our daily lives and the technologies we use (i.e. cloud-of-things) creates business opportunities, security and privacy risks, and investigative challenges (in the event of a cybersecurity incident). This study examines the extent to which data acquisition fromWindows phone, a common cloud-of-thing device, is supported by three popular mobile forensics tools. The effect of device settings modification (i.e. enabling screen lock and device reset operations) and alternative acquisition processes (i.e. individual and combined acquisition) on the extraction results are also examined. Our results show that current mobile forensic tool support for Windows Phone 8 remains limited. The results also showed that logical acquisition support was more complete in comparison to physical acquisition support. In one example, the tool was able to complete a physical acquisition of a Nokia Lumia 625, but its deleted contacts and SMSs could not be recovered/extracted. In addition we found that separate acquisition is needed for device removable media to maximize acquisition results, particularly when trying to recover deleted data. Furthermore, enabling flight-mode and disabling location services are highly recommended to eliminate the potential for data alteration during the acquisition process. These results should provide practitioners with an overview of the current capability of mobile forensic tools and the challenges in successfully extracting evidence from the Windows phone platform. Copyright © 2016 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "95df9ceddf114060d981415c0b1d6125",
"text": "This paper presents a comparative study of different neural network models for forecasting the weather of Vancouver, British Columbia, Canada. For developing the models, we used one year’s data comprising of daily maximum and minimum temperature, and wind-speed. We used Multi-Layered Perceptron (MLP) and an Elman Recurrent Neural Network (ERNN), which were trained using the one-step-secant and LevenbergMarquardt algorithms. To ensure the effectiveness of neurocomputing techniques, we also tested the different connectionist models using a different training and test data set. Our goal is to develop an accurate and reliable predictive model for weather analysis. Radial Basis Function Network (RBFN) exhibits a good universal approximation capability and high learning convergence rate of weights in the hidden and output layers. Experimental results obtained have shown RBFN produced the most accurate forecast model as compared to ERNN and MLP networks.",
"title": ""
},
{
"docid": "c25b4015787e56f241cabf5e76cb3cc6",
"text": "Clients with generalized anxiety disorder (GAD) received either (a) applied relaxation and self-control desensitization, (b) cognitive therapy, or (c) a combination of these methods. Treatment resulted in significant improvement in anxiety and depression that was maintained for 2 years. The large majority no longer met diagnostic criteria; a minority sought further treatment during follow-up. No differences in outcome were found between conditions; review of the GAD therapy literature suggested that this may have been due to strong effects generated by each component condition. Finally, interpersonal difficulties remaining at posttherapy, measured by the Inventory of Interpersonal Problems Circumplex Scales (L. E. Alden, J. S. Wiggins, & A. L. Pincus, 1990) in a subset of clients, were negatively associated with posttherapy and follow-up improvement, suggesting the possible utility of adding interpersonal treatment to cognitive-behavioral therapy to increase therapeutic effectiveness.",
"title": ""
}
] | scidocsrr |
2ea568f59e106cacc0f641e706e5cbe4 | An In-depth Comparison of Subgraph Isomorphism Algorithms in Graph Databases | [
{
"docid": "b307d2577dcdd13236446c2938e36b73",
"text": "We invesrigare new appmaches for frequent graph-based patrem mining in graph darasers andpmpose a novel ofgorirhm called gSpan (graph-based,Tubsmrure parrern mining), which discovers frequenr subsrrucrures z h o u r candidate generorion. &an builds a new lexicographic or. der among graphs, and maps each graph to a unique minimum DFS code as irs canonical label. Based on rhis lexicographic orde,: &an adopts rhe deprh-jrsr search srraregy ro mine frequenr cannecred subgraphs eflciently. Our performance study shows rhar gSpan subsianriolly outperforms previous algorithm, somerimes by an order of magnirude.",
"title": ""
}
] | [
{
"docid": "a35aa35c57698d2518e3485ec7649c66",
"text": "The review paper describes the application of various image processing techniques for automatic detection of glaucoma. Glaucoma is a neurodegenerative disorder of the optic nerve, which causes partial loss of vision. Large number of people suffers from eye diseases in rural and semi urban areas all over the world. Current diagnosis of retinal disease relies upon examining retinal fundus image using image processing. The key image processing techniques to detect eye diseases include image registration, image fusion, image segmentation, feature extraction, image enhancement, morphology, pattern matching, image classification, analysis and statistical measurements. KeywordsImage Registration; Fusion; Segmentation; Statistical measures; Morphological operation; Classification Full Text: http://www.ijcsmc.com/docs/papers/November2013/V2I11201336.pdf",
"title": ""
},
{
"docid": "2943f1d374a6a63ef1b140a83e5a8caf",
"text": "Gill morphometric and gill plasticity of the air-breathing striped catfish (Pangasianodon hypophthalmus) exposed to different temperatures (present day 27°C and future 33°C) and different air saturation levels (92% and 35%) during 6weeks were investigated using vertical sections to estimate the respiratory lamellae surface areas, harmonic mean barrier thicknesses, and gill component volumes. Gill respiratory surface area (SA) and harmonic mean water - blood barrier thicknesses (HM) of the fish were strongly affected by both environmental temperature and oxygen level. Thus initial values for 27°C normoxic fish (12.4±0.8g) were 211.8±21.6mm2g-1 and 1.67±0.12μm for SA and HM respectively. After 5weeks in same conditions or in the combinations of 33°C and/or PO2 of 55mmHg, this initial surface area scaled allometrically with size for the 33°C hypoxic group, whereas branchial SA was almost eliminated in the 27°C normoxic group, with other groups intermediate. In addition, elevated temperature had an astounding effect on growth with the 33°C group growing nearly 8-fold faster than the 27°C fish.",
"title": ""
},
{
"docid": "d52bfde050e6535645c324e7006a50e7",
"text": "Modern machine learning algorithms are increasingly computationally demanding, requiring specialized hardware and distributed computation to achieve high performance in a reasonable time frame. Many hyperparameter search algorithms have been proposed for improving the efficiency of model selection, however their adaptation to the distributed compute environment is often ad-hoc. We propose Tune, a unified framework for model selection and training that provides a narrow-waist interface between training scripts and search algorithms. We show that this interface meets the requirements for a broad range of hyperparameter search algorithms, allows straightforward scaling of search to large clusters, and simplifies algorithm implementation. We demonstrate the implementation of several state-of-the-art hyperparameter search algorithms in Tune. Tune is available at http://ray.readthedocs.io/en/latest/tune.html.",
"title": ""
},
{
"docid": "fab399a613acab4965fc29dd178ecb80",
"text": "Maritime transportation is accountable for 2.7% of the worlds CO emissions and the liner shipping industry is committed to a slow steaming policy to provide low cost and environmentally conscious global transport of goods without compromising the level of service. The potential for making cost effective and energy efficient liner shipping networks using operations research is huge and neglected. The implementation of logistic planning tools based upon operations research has enhanced performance of both airlines, railways and general transportation companies, but within the field of liner shipping very little operations research has been done. We believe that access to domain knowledge and data is an entry barrier for researchers to approach the important liner shipping network design problem. This paper presents a thorough description of the liner shipping domain applied to network design along with a rich integer programming model based on the services, that constitute the fixed schedule of a liner shipping company. The model may be relaxed as well as decomposed. The design of a benchmark suite of data instances to reflect the business structure of a global liner shipping network is discussed. The paper is motivated by providing easy access to the domain and the data sources of liner shipping for operations researchers in general. A set of data instances with offset in real world data is presented and made available upon request. Future work is to provide computational results for the instances.",
"title": ""
},
{
"docid": "344be59c5bb605dec77e4d7bd105d899",
"text": "Recently, style transfer has received a lot of attention. While much of this research has aimed at speeding up processing, the approaches are still lacking from a principled, art historical standpoint: a style is more than just a single image or an artist, but previous work is limited to only a single instance of a style or shows no benefit from more images. Moreover, previous work has relied on a direct comparison of art in the domain of RGB images or on CNNs pre-trained on ImageNet, which requires millions of labeled object bounding boxes and can introduce an extra bias, since it has been assembled without artistic consideration. To circumvent these issues, we propose a style-aware content loss, which is trained jointly with a deep encoder-decoder network for real-time, high-resolution stylization of images and videos. We propose a quantitative measure for evaluating the quality of a stylized image and also have art historians rank patches from our approach against those from previous work. These and our qualitative results ranging from small image patches to megapixel stylistic images and videos show that our approach better captures the subtle nature in which a style affects content.",
"title": ""
},
{
"docid": "aaf69cb42fc9d17cf0ae3b80a55f12d6",
"text": "Bringing Blockchain technology and business process management together, we follow the Design Science Research approach and design, implement, and evaluate a Blockchain prototype for crossorganizational workflow management together with a German bank. For the use case of a documentary letter of credit we describe the status quo of the process, identify areas of improvement, implement a Blockchain solution, and compare both workflows. The prototype illustrates that the process, as of today paper-based and with high manual effort, can be significantly improved. Our research reveals that a tamper-proof process history for improved auditability, automation of manual process steps and the decentralized nature of the system can be major advantages of a Blockchain solution for crossorganizational workflow management. Further, our research provides insights how Blockchain technology can be used for business process management in general.",
"title": ""
},
{
"docid": "adcf1d64887caa6c0811878460018a31",
"text": "For many networking applications, recent data is more significant than older data, motivating the need for sliding window solutions. Various capabilities, such as DDoS detection and load balancing, require insights about multiple metrics including Bloom filters, per-flow counting, count distinct and entropy estimation. In this work, we present a unified construction that solves all the above problems in the sliding window model. Our single solution offers a better space to accuracy tradeoff than the state-of-the-art for each of these individual problems! We show this both analytically and by running multiple real Internet backbone and datacenter packet traces.",
"title": ""
},
{
"docid": "4387549562fe2c0833b002d73d9a8330",
"text": "Complex numbers have long been favoured for digital signal processing, yet complex representations rarely appear in deep learning architectures. RNNs, widely used to process time series and sequence information, could greatly benefit from complex representations. We present a novel complex gate recurrent cell. When used together with norm-preserving state transition matrices, our complex gated RNN exhibits excellent stability and convergence properties. We demonstrate competitive performance of our complex gated RNN on the synthetic memory and adding task, as well as on the real-world task of human motion prediction.",
"title": ""
},
{
"docid": "1468c2bc1073f5f72226fddf4c3bc0ad",
"text": "To maximize network lifetime in Wireless Sensor Networks (WSNs), the paths for data transfer are selected in such a way that the total energy consumed along the path is minimized. To support high scalability and better data aggregation, sensor nodes are often grouped into disjoint, non overlapping subsets called clusters. Clusters create hierarchical WSNs which incorporate efficient utilization of the limited resources of sensor nodes and thus extend network lifetime. The objective of this paper is to present a state of the art survey on clustering algorithms reported in the literature of WSNs. Our paper presents a taxonomy of energy efficient clustering algorithms in WSNs, and also presents a timeline and description of LEACH and its descendants in WSNs.",
"title": ""
},
{
"docid": "5d3275250a345b5f8c8a14a394025a31",
"text": "Railway infrastructure monitoring is a vital task to ensure rail transportation safety. A rail failure could result in not only a considerable impact on train delays and maintenance costs, but also on safety of passengers. In this article, the aim is to assess the risk of a rail failure by analyzing a type of rail surface defect called squats that are detected automatically among the huge number of records from video cameras. We propose an image processing approach for automatic detection of squats, especially severe types that are prone to rail breaks. We measure the visual length of the squats and use them to model the failure risk. For the assessment of the rail failure risk, we estimate the probability of rail failure based on the growth of squats. Moreover, we perform severity and crack growth analyses to consider the impact of rail traffic loads on defects in three different growth scenarios. The failure risk estimations are provided for several samples of squats with different crack growth lengths on a busy rail track of the Dutch railway network. The results illustrate the practicality and efficiency of the proposed approach.",
"title": ""
},
{
"docid": "cb62164bc5a582be0c45df28d8ebb797",
"text": "Android rooting enables device owners to freely customize their own devices and run useful apps that require root privileges. While useful, rooting weakens the security of Android devices and opens the door for malware to obtain privileged access easily. Thus, several rooting prevention mechanisms have been introduced by vendors, and sensitive or high-value mobile apps perform rooting detection to mitigate potential security exposures on rooted devices. However, there is a lack of understanding whether existing rooting prevention and detection methods are effective. To fill this knowledge gap, we studied existing Android rooting methods and performed manual and dynamic analysis on 182 selected apps, in order to identify current rooting detection methods and evaluate their effectiveness. Our results suggest that these methods are ineffective. We conclude that reliable methods for detecting rooting must come from integrity-protected kernels or trusted execution environments, which are difficult to bypass.",
"title": ""
},
{
"docid": "f4b271c7ee8bfd9f8aa4d4cf84c4efd4",
"text": "Today, and possibly for a long time to come, the full driving task is too complex an activity to be fully formalized as a sensing-acting robotics system that can be explicitly solved through model-based and learning-based approaches in order to achieve full unconstrained vehicle autonomy. Localization, mapping, scene perception, vehicle control, trajectory optimization, and higher-level planning decisions associated with autonomous vehicle development remain full of open challenges. This is especially true for unconstrained, real-world operation where the margin of allowable error is extremely small and the number of edge-cases is extremely large. Until these problems are solved, human beings will remain an integral part of the driving task, monitoring the AI system as it performs anywhere from just over 0% to just under 100% of the driving. The governing objectives of the MIT Autonomous Vehicle Technology (MIT-AVT) study are to (1) undertake large-scale real-world driving data collection that includes high-definition video to fuel the development of deep learning based internal and external perception systems, (2) gain a holistic understanding of how human beings interact with vehicle automation technology by integrating video data with vehicle state data, driver characteristics, mental models, and self-reported experiences with technology, and (3) identify how technology and other factors related to automation adoption and use can be improved in ways that save lives. In pursuing these objectives, we have instrumented 21 Tesla Model S and Model X vehicles, 2 Volvo S90 vehicles, 2 Range Rover Evoque, and 2 Cadillac CT6 vehicles for both long-term (over a year per driver) and medium-term (one month per driver) naturalistic driving data collection. Furthermore, we are continually developing new methods for analysis of the massive-scale dataset collected from the instrumented vehicle fleet. The recorded data streams include IMU, GPS, CAN messages, and high-definition video streams of the driver face, the driver cabin, the forward roadway, and the instrument cluster (on select vehicles). The study is on-going and growing. To date, we have 99 participants, 11,846 days of participation, 405,807 miles, and 5.5 billion video frames. This paper presents the design of the study, the data collection hardware, the processing of the data, and the computer vision algorithms currently being used to extract actionable knowledge from the data.",
"title": ""
},
{
"docid": "f3e4892a0cc4bfe895d4b3c26440ee9a",
"text": "A compact dual band-notched ultra-wideband (UWB) multiple-input multiple-output (MIMO) antenna with high isolation is designed on a FR4 substrate (27 × 30 × 0.8 mm3). To improve the input impedance matching and increase the isolation for the frequencies ≥ 4.0 GHz, the two antenna elements with compact size of 5.5 × 11 mm2 are connected to the two protruded ground parts, respectively. A 1/3 λ rectangular metal strip producing a 1.0 λ loop path with the corresponding antenna element is used to obtain the notched frequency from 5.15 to 5.85 GHz. For the rejected band of 3.30-3.70 GHz, a 1/4 λ open slot is etched into the radiator. Moreover, the two protruded ground parts are connected by a compact metal strip to reduce the mutual coupling for the band of 3.0-4.0 GHz. The simulated and measured results show a bandwidth with |S11| ≤ -10 dB, |S21| ≤ -20 dB and frequency ranged from 3.0 to 11.0 GHz excluding the two rejected bands, is achieved, and all the measured and calculated results show the proposed UWB MIMO antenna is a good candidate for UWB MIMO systems.",
"title": ""
},
{
"docid": "eac322eae08da165b436308336aac37a",
"text": "The potential of BIM is generally recognized in the construction industry, but the practical application of BIM for management purposes is, however, still limited among contractors. The objective of this study is to review the current scheduling process of construction in light of BIM-based scheduling, and to identify how it should be incorporated into current practice. The analysis of the current scheduling processes identifies significant discrepancies between the overall and the detailed levels of scheduling. The overall scheduling process is described as an individual endeavor with limited and unsystematic sharing of knowledge within and between projects. Thus, the reuse of scheduling data and experiences is inadequate, preventing continuous improvements of the overall schedules. Besides, the overall scheduling process suffers from lack of information, caused by uncoordinated and unsynchronized overlap of the design and construction processes. Consequently, the overall scheduling is primarily based on intuition and personal experiences, rather than well founded figures of the specific project. Finally, the overall schedule is comprehensive and complex, and consequently, difficult to overview and communicate. Scheduling on the detailed level, on the other hand, follows a stipulated approach to scheduling, i.e. the Last Planner System (LPS), which is characterized by involvement of all actors in the construction phase. Thus, the major challenge when implementing BIM-based scheduling is to improve overall scheduling, which in turn, can secure a better starting point for the LPS. The study points to the necessity of involving subcontractors and manufacturers in the earliest phases of the project in order to create project specific information for the overall schedule. In addition, the design process should be prioritized and coordinated with each craft, a process library should be introduced to promote transfer of knowledge and continuous improvements, and information flow between design and scheduling processes must change from push to pull.",
"title": ""
},
{
"docid": "814c69ae155f69ee481255434039b00c",
"text": "The introduction of semantics on the web will lead to a new generation of services based on content rather than on syntax. Search engines will provide topic-based searches, retrieving resources conceptually related to the user informational need. Queries will be expressed in several ways, and will be mapped on the semantic level defining topics that must be retrieved from the web. Moving towards this new Web era, effective semantic search engines will provide means for successful searches avoiding the heavy burden experienced by users in a classical query-string based search task. In this paper we propose a search engine based on web resource semantics. Resources to be retrieved are semantically annotated using an existing open semantic elaboration platform and an ontology is used to describe the knowledge domain into which perform queries. Ontology navigation provides semantic level reasoning in order to retrieve meaningful resources with respect to a given information request.",
"title": ""
},
{
"docid": "8d7e8ee0f6305d50276d25ce28bcdf9c",
"text": "The advancement of visual sensing has introduced better capturing of the discrete information from a complex, crowded scene for assisting in the analysis. However, after reviewing existing systems, we find that the majority of work carried out to date is associated with significant problems in modeling event detection as well as reviewing abnormality of the given scene. Therefore, the proposed system introduces a model that is capable of identifying the degree of abnormality for an event captured on the crowded scene using an unsupervised training methodology. The proposed system contributes to developing a novel region-wise repository to extract the contextual information about the discrete event for a given scene. The study outcome shows a highly improved balance between the computational time and overall accuracy as compared to the majority of the standard research work emphasizing on event",
"title": ""
},
{
"docid": "ecf7446713dc92394c16241aa31a8dba",
"text": "Accelerated graphics cards, or Graphics Processing Units (GPUs), have become ubiquitous in recent years. On the right kinds of problems, GPUs greatly surpass CPUs in terms of raw performance. However, because they are difficult to program, GPUs are used only for a narrow class of special-purpose applications; the raw processing power made available by GPUs is unused most of the time.\n This paper presents an extension to a Java JIT compiler that executes suitable code on the GPU instead of the CPU. Both static and dynamic features are used to decide whether it is feasible and beneficial to off-load a piece of code on the GPU. The paper presents a cost model that balances the speedup available from the GPU against the cost of transferring input and output data between main memory and GPU memory. The cost model is parameterized so that it can be applied to different hardware combinations. The paper also presents ways to overcome several obstacles to parallelization inherent in the design of the Java bytecode language: unstructured control flow, the lack of multi-dimensional arrays, the precise exception semantics, and the proliferation of indirect references.",
"title": ""
},
{
"docid": "21c84ab0fb698ad2619e0afc6db44e1a",
"text": "Nanoscale windows in graphene (nanowindows) have the ability to switch between open and closed states, allowing them to become selective, fast, and energy-efficient membranes for molecular separations. These special pores, or nanowindows, are not electrically neutral due to passivation of the carbon edges under ambient conditions, becoming flexible atomic frameworks with functional groups along their rims. Through computer simulations of oxygen, nitrogen, and argon permeation, here we reveal the remarkable nanowindow behavior at the atomic scale: flexible nanowindows have a thousand times higher permeability than conventional membranes and at least twice their selectivity for oxygen/nitrogen separation. Also, weakly interacting functional groups open or close the nanowindow with their thermal vibrations to selectively control permeation. This selective fast permeation of oxygen, nitrogen, and argon in very restricted nanowindows suggests alternatives for future air separation membranes. Graphene with nanowindows can have 1000 times higher permeability and four times the selectivity for air separation than conventional membranes, Vallejos-Burgos et al. reveal by molecular simulation, due to flexibility at the nanoscale and thermal vibrations of the nanowindows' functional groups.",
"title": ""
},
{
"docid": "93e43e11c10e39880c68d2fb0fccd634",
"text": "In this paper, we present a novel probabilistic generative model for multi-object traffic scene understanding from movable platforms which reasons jointly about the 3D scene layout as well as the location and orientation of objects in the scene. In particular, the scene topology, geometry, and traffic activities are inferred from short video sequences. Inspired by the impressive driving capabilities of humans, our model does not rely on GPS, lidar, or map knowledge. Instead, it takes advantage of a diverse set of visual cues in the form of vehicle tracklets, vanishing points, semantic scene labels, scene flow, and occupancy grids. For each of these cues, we propose likelihood functions that are integrated into a probabilistic generative model. We learn all model parameters from training data using contrastive divergence. Experiments conducted on videos of 113 representative intersections show that our approach successfully infers the correct layout in a variety of very challenging scenarios. To evaluate the importance of each feature cue, experiments using different feature combinations are conducted. Furthermore, we show how by employing context derived from the proposed method we are able to improve over the state-of-the-art in terms of object detection and object orientation estimation in challenging and cluttered urban environments.",
"title": ""
},
{
"docid": "651db77789c5f5edaa933534255c88d6",
"text": "Abstract: Rapid increase in internet users along with growing power of online review sites and social media has given birth to Sentiment analysis or Opinion mining, which aims at determining what other people think and comment. Sentiments or Opinions contain public generated content about products, services, policies and politics. People are usually interested to seek positive and negative opinions containing likes and dislikes, shared by users for features of particular product or service. Therefore product features or aspects have got significant role in sentiment analysis. In addition to sufficient work being performed in text analytics, feature extraction in sentiment analysis is now becoming an active area of research. This review paper discusses existing techniques and approaches for feature extraction in sentiment analysis and opinion mining. In this review we have adopted a systematic literature review process to identify areas well focused by researchers, least addressed areas are also highlighted giving an opportunity to researchers for further work. We have also tried to identify most and least commonly used feature selection techniques to find research gaps for future work.",
"title": ""
}
] | scidocsrr |
06bb094ed964bfe2f811b3f64da3a733 | Evaluating the robustness of repeated measures analyses: the case of small sample sizes and nonnormal data. | [
{
"docid": "9e8e57ef22d3dfe139f4b9c9992b0884",
"text": "It has been suggested that when the variance assumptions of a repeated measures ANOVA are not met, the df of the mean square ratio should be adjusted by the sample estimate of the Box correction factor, e. This procedure works well when e is low, but the estimate is seriously biased when this is not the case. An alternate estimate is proposed which is shown by Monte Carlo methods to be less biased for moderately large e.",
"title": ""
}
] | [
{
"docid": "10d01b461ed80fbca4340a193fe47701",
"text": "Flight delays have a significant impact on the nationpsilas economy. Taxi-out delays in particular constitute a significant portion of the block time of a flight. In the future, it can be expected that accurate predictions of dasiawheels-offpsila time may be used in determining whether an aircraft can meet its allocated slot time, thereby fitting into an en-route traffic flow. Without an accurate taxi-out time prediction for departures, there is no way to effectively manage fuel consumption, emissions, or cost. Dynamically changing operations at the airport makes it difficult to accurately predict taxi-out time. This paper describes a method for estimating average taxi-out times at the airport in 15 minute intervals of the day and at least 15 minutes in advance of aircraft scheduled gate push-back time. A probabilistic framework of stochastic dynamic programming with a learning-based solution strategy called Reinforcement Learning (RL) has been applied. Historic data from the Federal Aviation Administrationpsilas (FAA) Aviation System Performance Metrics (ASPM) database were used to train and test the algorithm. The algorithm was tested on John F. Kennedy International airport (JFK), one of the busiest, challenging, and difficult to predict airports in the United States that significantly influences operations across the entire National Airspace System (NAS). Due to the nature of departure operations at JFK the prediction accuracy of the algorithm for a given day was analyzed in two separate time periods (1) before 4:00 P.M and (2) after 4:00 P.M. On an average across 15 days, the predicted average taxi-out times matched the actual average taxi-out times within plusmn5 minutes for about 65 % of the time (for the period before 4:00 P.M) and 53 % of the time (for the period after 4:00 P.M). The prediction accuracy over the entire day within plusmn5 minutes range of accuracy was about 60 %. Further, application of the RL algorithm to estimate taxi-out times at airports with multi-dependent static surface surveillance data will likely improve the accuracy of prediction. The implications of these results for airline operations and network flow planning are discussed.",
"title": ""
},
{
"docid": "65e273d046a8120532d8cd04bcadca56",
"text": "This paper explores the relationship between domain scheduling in avirtual machine monitor (VMM) and I/O performance. Traditionally, VMM schedulers have focused on fairly sharing the processor resources among domains while leaving the scheduling of I/O resources as asecondary concern. However, this can resultin poor and/or unpredictable application performance, making virtualization less desirable for applications that require efficient and consistent I/O behavior.\n This paper is the first to study the impact of the VMM scheduler on performance using multiple guest domains concurrently running different types of applications. In particular, different combinations of processor-intensive, bandwidth-intensive, andlatency-sensitive applications are run concurrently to quantify the impacts of different scheduler configurations on processor and I/O performance. These applications are evaluated on 11 different scheduler configurations within the Xen VMM. These configurations include a variety of scheduler extensions aimed at improving I/O performance. This cross product of scheduler configurations and application types offers insight into the key problems in VMM scheduling for I/O and motivates future innovation in this area.",
"title": ""
},
{
"docid": "7c4768707a3efd3791520576a8a78e23",
"text": "The aim of this paper is to research the effectiveness of SMS verification by understanding the correlation between notification and verification of flood early warning messages. This study contributes to the design of the dissemination techniques for SMS as an early warning messages. The metrics used in this study are using user perceptions of tasks, which include the ease of use (EOU) perception for using SMS and confidence with SMS skills perception, as well as, the users' positive perceptions, which include users' perception of usefulness and satisfaction perception towards using SMS as an early warning messages for floods. Experiments and surveys were conducted in flood-prone areas in Semarang, Indonesia. The results showed that the correlation is in users' perceptions of tasks for the confidence with skill.",
"title": ""
},
{
"docid": "f074965ee3a1d6122f1e68f49fd11d84",
"text": "Data mining is the extraction of knowledge from large databases. One of the popular data mining techniques is Classification, in which different objects are classified into different classes depending on the common properties among them. Decision Trees are widely used in Classification. This paper proposes a tool which applies an enhanced Decision Tree Algorithm to detect suspicious e-mails about criminal activities. An improved ID3 Algorithm with an enhanced feature selection method and attribute-importance factor is applied to generate a better and faster Decision Tree. The objective is to detect suspicious criminal activities and minimize them. That is why the tool is named “Z-Crime”, depicting “Zero Crime” in the society. This paper aims at highlighting the importance of data mining technology to design proactive applications to detect suspicious criminal activities.",
"title": ""
},
{
"docid": "ad06ed03454635bf390ea14847fcf4a2",
"text": "Mitochondria are important cellular organelles in most metabolic processes and have a highly dynamic nature, undergoing frequent fission and fusion. The dynamic balance between fission and fusion plays critical roles in mitochondrial functions. In recent studies, several large GTPases have been identified as key molecular factors in mitochondrial fission and fusion. Moreover, the posttranslational modifications of these large GTPases, including phosphorylation, ubiquitination and SUMOylation, have been shown to be involved in the regulation of mitochondrial dynamics. Neurons are particularly sensitive and vulnerable to any abnormalities in mitochondrial dynamics, due to their large energy demand and long extended processes. Emerging evidences have thus indicated a strong linkage between mitochondria and neurodegenerative diseases, including Alzheimer's disease, Parkinson's disease and Huntington's disease. In this review, we will describe the regulation of mitochondrial dynamics and its role in neurodegenerative diseases.",
"title": ""
},
{
"docid": "28d739449d55d77e54571edb3c4ec4ad",
"text": "Immunologic checkpoint blockade with antibodies that target cytotoxic T lymphocyte-associated antigen 4 (CTLA-4) and the programmed cell death protein 1 pathway (PD-1/PD-L1) have demonstrated promise in a variety of malignancies. Ipilimumab (CTLA-4) and pembrolizumab (PD-1) are approved by the US Food and Drug Administration for the treatment of advanced melanoma, and additional regulatory approvals are expected across the oncologic spectrum for a variety of other agents that target these pathways. Treatment with both CTLA-4 and PD-1/PD-L1 blockade is associated with a unique pattern of adverse events called immune-related adverse events, and occasionally, unusual kinetics of tumor response are seen. Combination approaches involving CTLA-4 and PD-1/PD-L1 blockade are being investigated to determine whether they enhance the efficacy of either approach alone. Principles learned during the development of CTLA-4 and PD-1/PD-L1 approaches will likely be used as new immunologic checkpoint blocking antibodies begin clinical investigation.",
"title": ""
},
{
"docid": "5a8649a0418dbeb68cc5bfb7f98f28fe",
"text": "Predictive microbiology is the area of food microbiology that attempts to forecast the quantitative evolution of microbial populations over time. This is achieved to a great extent through models that include the mechanisms governing population dynamics. Traditionally, the models used in predictive microbiology are whole-system continuous models that describe population dynamics by means of equations applied to extensive or averaged variables of the whole system. Many existing models can be classified by specific criteria. We can distinguish between survival and growth models by seeing whether they tackle mortality or cell duplication. We can distinguish between empirical (phenomenological) models, which mathematically describe specific behaviour, and theoretical (mechanistic) models with a biological basis, which search for the underlying mechanisms driving already observed phenomena. We can also distinguish between primary, secondary and tertiary models, by examining their treatment of the effects of external factors and constraints on the microbial community. Recently, the use of spatially explicit Individual-based Models (IbMs) has spread through predictive microbiology, due to the current technological capacity of performing measurements on single individual cells and thanks to the consolidation of computational modelling. Spatially explicit IbMs are bottom-up approaches to microbial communities that build bridges between the description of micro-organisms at the cell level and macroscopic observations at the population level. They provide greater insight into the mesoscale phenomena that link unicellular and population levels. Every model is built in response to a particular question and with different aims. Even so, in this research we conducted a SWOT (Strength, Weaknesses, Opportunities and Threats) analysis of the different approaches (population continuous modelling and Individual-based Modelling), which we hope will be helpful for current and future researchers.",
"title": ""
},
{
"docid": "75b654084c7205b209d41a33b9bc03b9",
"text": "The aims of the study were to evaluate the per- and post-operative complications and outcomes after cystocele repair with transobturator mesh. A retrospective continuous series study was conducted over a period of 3 years. Clinical evaluation was up to 1 year with additional telephonic interview performed after 34 months on average. When stress urinary incontinence (SUI) was associated with the cystocele, it was treated with the same mesh. One hundred twenty-three patients were treated for cystocele. Per-operative complications occurred in six patients. After 1 year, erosion rate was 6.5%, and only three cystoceles recurred. After treatment of SUI with the same mesh, 87.7% restored continence. Overall patient’s satisfaction rate was 93.5%. Treatment of cystocele using transobturator four arms mesh appears to reduce the risk of recurrence at 1 year, along with high rate of patient’s satisfaction. The transobturator path of the prosthesis arms seems devoid of serious per- and post-operative risks and allows restoring continence when SUI is present.",
"title": ""
},
{
"docid": "b1ae4cfe9ce7a88eb0a503bfafe9606d",
"text": "The aim of Chapter 2 is to give an overview of the GPR basic principles and technology. A lot of definitions and often-used terms that will be used throughout the whole work will be explained here. Readers who are familiar with GPR and the demining application can skip parts of this chapter. Section 2.2.4 however can be interesting since a description of the hardware and the design parameters of a time domain GPR are given there. The description is far from complete, but it gives a good overview of the technological difficulties encountered in GPR systems.",
"title": ""
},
{
"docid": "1700821e3c9ec22ec151d151f3ac7925",
"text": "This review provides a comprehensive examination of the literature surrounding the current state of K–12 distance education. The growth in K–12 distance education follows in the footsteps of expanded learning opportunities at all levels of public education and training in corporate environments. Implementation has been accomplished with a limited research base, often drawing from studies in adult distance education and policies adapted from traditional learning environments. This review of literature provides an overview of the field of distance education with a focus on the research conducted in K–12 distance education environments. (",
"title": ""
},
{
"docid": "913478fa2a53363c4d8dc6212c960cbf",
"text": "The rapidly growing world energy use has already raised concerns over supply difficulties, exhaustion of energy resources and heavy environmental impacts (ozone layer depletion, global warming, climate change, etc.). The global contribution from buildings towards energy consumption, both residential and commercial, has steadily increased reaching figures between 20% and 40% in developed countries, and has exceeded the other major sectors: industrial and transportation. Growth in population, increasing demand for building services and comfort levels, together with the rise in time spent inside buildings, assure the upward trend in energy demand will continue in the future. For this reason, energy efficiency in buildings is today a prime objective for energy policy at regional, national and international levels. Among building services, the growth in HVAC systems energy use is particularly significant (50% of building consumption and 20% of total consumption in the USA). This paper analyses available information concerning energy consumption in buildings, and particularly related to HVAC systems. Many questions arise: Is the necessary information available? Which are the main building types? What end uses should be considered in the breakdown? Comparisons between different countries are presented, especially for commercial buildings. The case of offices is analysed in deeper detail. © 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "fb0648489dcf41e98ad617657725a66e",
"text": "In this paper, a triple active bridge converter is proposed. The topology is capable of achieving ZVS across the full load range with wide input voltage while minimizing heavy load conduction losses to increase overall efficiency. This topology comprises three full bridges coupled by a three-winding transformer. At light load, by adjusting the phase shift between two input bridges, all switching devices can maintain ZVS due to a controlled circulating current. At heavy load, the two input bridges work in parallel to reduce conduction loss. The operation principles of this topology are introduced and the ZVS boundaries are derived. Based on analytical models of power loss, a 200W laboratory prototype has been built to verify theoretical considerations.",
"title": ""
},
{
"docid": "f2fed9066ac945ae517aef8ec5bb5c61",
"text": "BACKGROUND\nThe aging of society is a global trend, and care of older adults with dementia is an urgent challenge. As dementia progresses, patients exhibit negative emotions, memory disorders, sleep disorders, and agitated behavior. Agitated behavior is one of the most difficult problems for family caregivers and healthcare providers to handle when caring for older adults with dementia.\n\n\nPURPOSE\nThe aim of this study was to investigate the effectiveness of white noise in improving agitated behavior, mental status, and activities of daily living in older adults with dementia.\n\n\nMETHODS\nAn experimental research design was used to study elderly participants two times (pretest and posttest). Six dementia care centers in central and southern Taiwan were targeted to recruit participants. There were 63 participants: 28 were in the experimental group, and 35 were in the comparison group. Experimental group participants received 20 minutes of white noise consisting of ocean, rain, wind, and running water sounds between 4 and 5 P.M. daily over a period of 4 weeks. The comparison group received routine care. Questionnaires were completed, and observations of agitated behaviors were collected before and after the intervention.\n\n\nRESULTS\nAgitated behavior in the experimental group improved significantly between pretest and posttest. Furthermore, posttest scores on the Mini-Mental Status Examination and Barthel Index were slightly better for this group than at pretest. However, the experimental group registered no significant difference in mental status or activities of daily living at posttest. For the comparison group, agitated behavior was unchanged between pretest and posttest.\n\n\nCONCLUSIONS\nThe results of this study support white noise as a simple, convenient, and noninvasive intervention that improves agitated behavior in older adults with dementia. These results may provide a reference for related healthcare providers, educators, and administrators who care for older adults with dementia.",
"title": ""
},
{
"docid": "139a89ce2fcdfb987aa3476d3618b919",
"text": "Automating the development of construction schedules has been an interesting topic for researchers around the world for almost three decades. Researchers have approached solving scheduling problems with different tools and techniques. Whenever a new artificial intelligence or optimization tool has been introduced, researchers in the construction field have tried to use it to find the answer to one of their key problems—the “better” construction schedule. Each researcher defines this “better” slightly different. This article reviews the research on automation in construction scheduling from 1985 to 2014. It also covers the topic using different approaches, including case-based reasoning, knowledge-based approaches, model-based approaches, genetic algorithms, expert systems, neural networks, and other methods. The synthesis of the results highlights the share of the aforementioned methods in tackling the scheduling challenge, with genetic algorithms shown to be the most dominant approach. Although the synthesis reveals the high applicability of genetic algorithms to the different aspects of managing a project, including schedule, cost, and quality, it exposed a more limited project management application for the other methods.",
"title": ""
},
{
"docid": "7755e8c9234f950d0d5449602269e34b",
"text": "In this paper we describe a privacy-preserving method for commissioning an IoT device into a cloud ecosystem. The commissioning consists of the device proving its manufacturing provenance in an anonymous fashion without reliance on a trusted third party, and for the device to be anonymously registered through the use of a blockchain system. We introduce the ChainAnchor architecture that provides device commissioning in a privacy-preserving fashion. The goal of ChainAnchor is (i) to support anonymous device commissioning, (ii) to support device-owners being remunerated for selling their device sensor-data to service providers, and (iii) to incentivize device-owners and service providers to share sensor-data in a privacy-preserving manner.",
"title": ""
},
{
"docid": "3257f01d96bd126bd7e3d6f447e0326d",
"text": "Voice SMS is an application developed in this work that allows a user to record and convert spoken messages into SMS text message. User can send messages to the entered phone number or the number of contact from the phonebook. Speech recognition is done via the Internet, connecting to Google's server. The application is adapted to input messages in English. Used tools are Android SDK and the installation is done on mobile phone with Android operating system. In this article we will give basic features of the speech recognition and used algorithm. Speech recognition for Voice SMS uses a technique based on hidden Markov models (HMM - Hidden Markov Model). It is currently the most successful and most flexible approach to speech recognition.",
"title": ""
},
{
"docid": "4899e13d5c85b63a823db9c4340824e7",
"text": "With the prevalence of server blades and systems-on-a-chip (SoCs), interconnection networks are becoming an important part of the microprocessor landscape. However, there is limited tool support available for their design. While performance simulators have been built that enable performance estimation while varying network parameters, these cover only one metric of interest in modern designs. System power consumption is increasingly becoming equally, if not more important than performance. It is now critical to get detailed power-performance tradeoff information early in the microarchitectural design cycle. This is especially so as interconnection networks consume a significant fraction of total system power. It is exactly this gap that the work presented in this paper aims to fill.We present Orion, a power-performance interconnection network simulator that is capable of providing detailed power characteristics, in addition to performance characteristics, to enable rapid power-performance trade-offs at the architectural-level. This capability is provided within a general framework that builds a simulator starting from a microarchitectural specification of the interconnection network. A key component of this construction is the architectural-level parameterized power models that we have derived as part of this effort. Using component power models and a synthesized efficient power (and performance) simulator, a microarchitect can rapidly explore the design space. As case studies, we demonstrate the use of Orion in determining optimal system parameters, in examining the effect of diverse traffic conditions, as well as evaluating new network microarchitectures. In each of the above, the ability to simultaneously monitor power and performance is key in determining suitable microarchitectures.",
"title": ""
},
{
"docid": "b5b73560481ad29bed07ddf156531561",
"text": "IQ heritability, the portion of a population's IQ variability attributable to the effects of genes, has been investigated for nearly a century, yet it remains controversial. Covariance between relatives may be due not only to genes, but also to shared environments, and most previous models have assumed different degrees of similarity induced by environments specific to twins, to non-twin siblings (henceforth siblings), and to parents and offspring. We now evaluate an alternative model that replaces these three environments by two maternal womb environments, one for twins and another for siblings, along with a common home environment. Meta-analysis of 212 previous studies shows that our ‘maternal-effects’ model fits the data better than the ‘family-environments’ model. Maternal effects, often assumed to be negligible, account for 20% of covariance between twins and 5% between siblings, and the effects of genes are correspondingly reduced, with two measures of heritability being less than 50%. The shared maternal environment may explain the striking correlation between the IQs of twins, especially those of adult twins that were reared apart. IQ heritability increases during early childhood, but whether it stabilizes thereafter remains unclear. A recent study of octogenarians, for instance, suggests that IQ heritability either remains constant through adolescence and adulthood, or continues to increase with age. Although the latter hypothesis has recently been endorsed, it gathers only modest statistical support in our analysis when compared to the maternal-effects hypothesis. Our analysis suggests that it will be important to understand the basis for these maternal effects if ways in which IQ might be increased are to be identified.",
"title": ""
},
{
"docid": "67ae045b8b9a8e181ed0a33b204528cf",
"text": "We report four experiments examining effects of instance similarity on the application of simple explicit rules. We found effects of similarity to illustrative exemplars in error patterns and reaction times. These effects arose even though participants were given perfectly predictive rules, the similarity manipulation depended entirely on rule-irrelevant features, and attention to exemplar similarity was detrimental to task performance. Comparison of results across studies suggests that the effects are mandatory, non-strategic and not subject to conscious control, and as a result, should be pervasive throughout categorization.",
"title": ""
},
{
"docid": "e992ffd4ebbf9d096de092caf476e37d",
"text": "If self-regulation conforms to an energy or strength model, then self-control should be impaired by prior exertion. In Study 1, trying to regulate one's emotional response to an upsetting movie was followed by a decrease in physical stamina. In Study 2, suppressing forbidden thoughts led to a subsequent tendency to give up quickly on unsolvable anagrams. In Study 3, suppressing thoughts impaired subsequent efforts to control the expression of amusement and enjoyment. In Study 4, autobiographical accounts of successful versus failed emotional control linked prior regulatory demands and fatigue to self-regulatory failure. A strength model of self-regulation fits the data better than activation, priming, skill, or constant capacity models of self-regulation.",
"title": ""
}
] | scidocsrr |
67898fc401c5af903c0932453dd10545 | Code Hot Spot: A tool for extraction and analysis of code change history | [
{
"docid": "596fa75533d4d31a49efbeb24f5fa7f0",
"text": "High cohesion is a desirable property of software as it positively impacts understanding, reuse, and maintenance. Currently proposed measures for cohesion in Object-Oriented (OO) software reflect particular interpretations of cohesion and capture different aspects of it. Existing approaches are largely based on using the structural information from the source code, such as attribute references, in methods to measure cohesion. This paper proposes a new measure for the cohesion of classes in OO software systems based on the analysis of the unstructured information embedded in the source code, such as comments and identifiers. The measure, named the Conceptual Cohesion of Classes (C3), is inspired by the mechanisms used to measure textual coherence in cognitive psychology and computational linguistics. This paper presents the principles and the technology that stand behind the C3 measure. A large case study on three open source software systems is presented which compares the new measure with an extensive set of existing metrics and uses them to construct models that predict software faults. The case study shows that the novel measure captures different aspects of class cohesion compared to any of the existing cohesion measures. In addition, combining C3 with existing structural cohesion metrics proves to be a better predictor of faulty classes when compared to different combinations of structural cohesion metrics.",
"title": ""
},
{
"docid": "b776bf3acb830552eb1ecf353b08edee",
"text": "The size and high rate of change of source code comprising a software system make it difficult for software developers to keep up with who on the team knows about particular parts of the code. Existing approaches to this problem are based solely on authorship of code. In this paper, we present data from two professional software development teams to show that both authorship and interaction information about how a developer interacts with the code are important in characterizing a developer's knowledge of code. We introduce the degree-of-knowledge model that computes automatically a real value for each source code element based on both authorship and interaction information. We show that the degree-of-knowledge model can provide better results than an existing expertise finding approach and also report on case studies of the use of the model to support knowledge transfer and to identify changes of interest.",
"title": ""
}
] | [
{
"docid": "2e088ce4f7e5b3633fa904eab7563875",
"text": "Large numbers of websites have started to markup their content using standards such as Microdata, Microformats, and RDFa. The marked-up content elements comprise descriptions of people, organizations, places, events, products, ratings, and reviews. This development has accelerated in last years as major search engines such as Google, Bing and Yahoo! use the markup to improve their search results. Embedding semantic markup facilitates identifying content elements on webpages. However, the markup is mostly not as fine-grained as desirable for applications that aim to integrate data from large numbers of websites. This paper discusses the challenges that arise in the task of integrating descriptions of electronic products from several thousand e-shops that offer Microdata markup. We present a solution for each step of the data integration process including Microdata extraction, product classification, product feature extraction, identity resolution, and data fusion. We evaluate our processing pipeline using 1.9 million product offers from 9240 e-shops which we extracted from the Common Crawl 2012, a large public Web corpus.",
"title": ""
},
{
"docid": "0007c9ab00e628848a08565daaf4063e",
"text": "We present an active detection model for localizing objects in scenes. The model is class-specific and allows an agent to focus attention on candidate regions for identifying the correct location of a target object. This agent learns to deform a bounding box using simple transformation actions, with the goal of determining the most specific location of target objects following top-down reasoning. The proposed localization agent is trained using deep reinforcement learning, and evaluated on the Pascal VOC 2007 dataset. We show that agents guided by the proposed model are able to localize a single instance of an object after analyzing only between 11 and 25 regions in an image, and obtain the best detection results among systems that do not use object proposals for object localization.",
"title": ""
},
{
"docid": "fae925bdd47b835035d4f8f0b5b3139d",
"text": "By Ravindra K. Ahuja, Thomas L. Magnanti, James B. Orlin: Network Flows: Theory, Algorithms, and Applications. Bringing together the classic and the contemporary aspects of the field, this comprehensive introduction to network flows provides an integrative view of theory, algorithms, and applications.",
"title": ""
},
{
"docid": "7c0b7d55abdd6cce85730dbf1cd02109",
"text": "Suppose f_1, f_2, ..., f_k are polynomials in one variable with all coefficients integral and leading coefficients positive, their degrees being h_1, h_2, ..., h_k respectively. Suppose each of these polynomials is irreducible over the field of rational numbers and no two of them differ by a constant factor. Let Q(f_1, f_2, ..., f_k; N) denote the number of positive integers n between 1 and N inclusive such that f_1(n), f_2(n), ..., f_k(n) are all primes. (We ignore the finitely many values of n for which some f_i(n) is negative.) Then heuristically we would expect to have for N large",
"title": ""
},
{
"docid": "f0ec66a9054c086e4141cb95995f5f68",
"text": "We present a simple hierarchical Bayesian approach to modeling collections of texts and other large-scale data collections. For text collections, we posit that a document is generated by choosing a random set of multinomial probabilities for a set of possible “topics,” and then repeatedly generating words by sampling from the topic mixture. This model is intractable for exact probabilistic inference, but approximate posterior probabilities and marginal likelihoods can be obtained via fast variational methods. We also present extensions to coupled models for joint text/image data and multiresolution models for topic hierarchies.",
"title": ""
},
{
"docid": "9ce1401e072fc09749d12f9132aa6b1e",
"text": "In many applications based on the use of unmanned aerial vehicles (UAVs), it is possible to establish a cluster of UAVs in which each UAV knows the other vehicle's position. Assuming that the common channel condition between any two nodes of UAVs is line-of-sight (LOS), the time and energy consumption for data transmission on each path that connecting two nodes may be estimated by a node itself. In this paper, we use a modified Bellman-Ford algorithm to find the best selection of relay nodes in order to minimize the time and energy consumption for data transmission between any UAV node in the cluster and the UAV acting as the cluster head. This algorithm is applied with a proposed cooperative MAC protocol that is compatible with the IEEE 802.11 standard. The evaluations under data saturation conditions illustrate noticeable benefits in successful packet delivery ratio, average delay, and in particular the cost of time and energy.",
"title": ""
},
{
"docid": "9b16eaa154370895b446cc4e66c9a8a9",
"text": "The 15 kV SiC N-IGBT is the state-of-the-art high voltage power semiconductor device developed by Cree. The SiC IGBT is exposed to a peak stress of 10-11 kV in power converter systems, with punch-through turn-on dv/dt over 100 kV/μs and turn-off dv/dt about 35 kV/μs. Such high dv/dt requires ultralow coupling capacitance in the dc-dc isolation stage of the gate driver for maintaining fidelity of the signals on the control-supply ground side. Accelerated aging of the insulation in the isolation stage is another serious concern. In this paper, a simple transformer based isolation with a toroid core is investigated for the above requirements of the 15 kV IGBT. The gate driver prototype has been developed with over 100 kV dc insulation capability, and its inter-winding coupling capacitance has been found to be 3.4 pF and 13 pF at 50 MHz and 100 MHz respectively. The performance of the gate driver prototype has been evaluated up to the above mentioned specification using double-pulse tests on high-side IGBT in a half-bridge configuration. The continuous testing at 5 kHz has been performed till 8 kV, and turn-on dv/dt of 85 kV/μs on a buck-boost converter. The corresponding experimental results are presented. Also, the test methodology of evaluating the gate driver at such high voltage, without a high voltage power supply is discussed. Finally, experimental results validating fidelity of the signals on the control-ground side are provided to show the influence of increased inter-winding coupling capacitance on the performance of the gate driver.",
"title": ""
},
{
"docid": "2eba092d19cc8fb35994e045f826e950",
"text": "Deep neural networks have proven to be particularly effective in visual and audio recognition tasks. Existing models tend to be computationally expensive and memory intensive, however, and so methods for hardware-oriented approximation have become a hot topic. Research has shown that custom hardware-based neural network accelerators can surpass their general-purpose processor equivalents in terms of both throughput and energy efficiency. Application-tailored accelerators, when co-designed with approximation-based network training methods, transform large, dense and computationally expensive networks into small, sparse and hardware-efficient alternatives, increasing the feasibility of network deployment. In this article, we provide a comprehensive evaluation of approximation methods for high-performance network inference along with in-depth discussion of their effectiveness for custom hardware implementation. We also include proposals for future research based on a thorough analysis of current trends. This article represents the first survey providing detailed comparisons of custom hardware accelerators featuring approximation for both convolutional and recurrent neural networks, through which we hope to inspire exciting new developments in the field.",
"title": ""
},
{
"docid": "abdd688f821a450ebe0eb70d720989c2",
"text": "In a document retrieval, or other pattern matching environment where stored entities (documents) are compared with each other or with incoming patterns (search requests), it appears that the best indexing (property) space is one where each entity lies as far away from the others as possible; in these circumstances the value of an indexing system may be expressible as a function of the density of the object space; in particular, retrieval performance may correlate inversely with space density. An approach based on space density computations is used to choose an optimum indexing vocabulary for a collection of documents. Typical evaluation results are shown, demonstrating the usefulness of the model.",
"title": ""
},
{
"docid": "19b602b49f0fcd51f5ec7f240fe26d60",
"text": "Wireless communication by leveraging the use of low-altitude unmanned aerial vehicles (UAVs) has received significant interests recently due to its low-cost and flexibility in providing wireless connectivity in areas without infrastructure coverage. This paper studies a UAV-enabled mobile relaying system, where a high-mobility UAV is deployed to assist in the information transmission from a ground source to a ground destination with their direct link blocked. By assuming that the UAV adopts the energy-efficient circular trajectory and employs time-division duplexing (TDD) based decode-and-forward (DF) relaying, we maximize the spectrum efficiency (SE) in bits/second/Hz as well as energy efficiency (EE) in bits/Joule of the considered system by jointly optimizing the time allocations for the UAV's relaying together with its flying speed and trajectory. It is revealed that for UAV-enabled mobile relaying with the UAV propulsion energy consumption taken into account, there exists a trade-off between the maximum achievable SE and EE by exploiting the new degree of freedom of UAV trajectory design.",
"title": ""
},
{
"docid": "36f8d1e7cd7a6e2a68c3dd4336e91da8",
"text": "Although the accuracy of super-resolution (SR) methods based on convolutional neural networks (CNN) soars high, the complexity and computation also explode with the increased depth and width of the network. Thus, we propose the convolutional anchored regression network (CARN) for fast and accurate single image super-resolution (SISR). Inspired by locally linear regression methods (A+ and ARN), the new architecture consists of regression blocks that map input features from one feature space to another. Different from A+ and ARN, CARN is no longer relying on or limited by hand-crafted features. Instead, it is an end-to-end design where all the operations are converted to convolutions so that the key concepts, i.e., features, anchors, and regressors, are learned jointly. The experiments show that CARN achieves the best speed and accuracy trade-off among the SR methods. The code is available at https://github.com/ofsoundof/CARN.",
"title": ""
},
{
"docid": "ef26995e3979f479f4c3628283816d5d",
"text": "This article addresses the position taken by Clark (1983) that media do not influence learning under any conditions. The article reframes the questions raised by Clark to explore the conditions under which media will influence learning. Specifically, it posits the need to consider the capabilities of media, and the methods that employ them, as they interact with the cognitive and social processes by which knowledge is constructed. This approach is examined within the context of two major media-based projects, one which uses computers and the other, video. The article discusses the implications of this approach for media theory, research and practice.",
"title": ""
},
{
"docid": "bfdcad076ec599716de7d2dc43323059",
"text": "The strategic management of agricultural lands involves crop field monitoring each year. Crop discrimination via remote sensing is a complex task, especially if different crops have a similar spectral response and cropping pattern. In such cases, crop identification could be improved by combining object-based image analysis and advanced machine learning methods. In this investigation, we evaluated the C4.5 decision tree, logistic regression (LR), support vector machine (SVM) and multilayer perceptron (MLP) neural network methods, both as single classifiers and combined in a hierarchical classification, for the mapping of nine major summer crops (both woody and herbaceous) from ASTER satellite images captured in two different dates. Each method was built with different combinations of spectral and textural features obtained after the segmentation of the remote images in an object-based framework. As single classifiers, MLP and SVM obtained maximum overall accuracy of 88%, slightly higher than LR (86%) and notably higher than C4.5 (79%). The SVM+SVM classifier (best method) improved these results to 89%. In most cases, the hierarchical classifiers considerably increased the accuracy of the most poorly classified class (minimum sensitivity). The SVM+SVM method offered a significant improvement in classification accuracy for all of the studied crops compared to the conventional decision tree classifier, ranging between 4% for safflower and 29% for corn, which suggests the application of object-based image analysis and advanced machine learning methods in complex crop classification tasks.",
"title": ""
},
{
"docid": "90b3e6aee6351b196445843ca8367a3b",
"text": "Modeling how visual saliency guides the deployment of attention over visual scenes has attracted much interest recently — among both computer vision and experimental/computational researchers — since visual attention is a key function of both machine and biological vision systems. Research efforts in computer vision have mostly been focused on modeling bottom-up saliency. Strong influences on attention and eye movements, however, come from instantaneous task demands. Here, we propose models of top-down visual guidance considering task influences. The new models estimate the state of a human subject performing a task (here, playing video games), and map that state to an eye position. Factors influencing state come from scene gist, physical actions, events, and bottom-up saliency. Proposed models fall into two categories. In the first category, we use classical discriminative classifiers, including Regression, kNN and SVM. In the second category, we use Bayesian Networks to combine all the multi-modal factors in a unified framework. Our approaches significantly outperform 15 competing bottom-up and top-down attention models in predicting future eye fixations on 18,000 and 75,000 video frames and eye movement samples from a driving and a flight combat video game, respectively. We further test and validate our approaches on 1.4M video frames and 11M fixation samples and in all cases obtain higher prediction scores than reference models.",
"title": ""
},
{
"docid": "8a538c63adfd618d8967f736d8c59761",
"text": "Skyline queries ask for a set of interesting points from a potentially large set of data points. If we are traveling, for instance, a restaurant might be interesting if there is no other restaurant which is nearer, cheaper, and has better food. Skyline queries retrieve all such interesting restaurants so that the user can choose the most promising one. In this paper, we present a new online algorithm that computes the Skyline. Unlike most existing algorithms that compute the Skyline in a batch, this algorithm returns the first results immediately, produces more and more results continuously, and allows the user to give preferences during the running time of the algorithm so that the user can control what kind of results are produced next (e.g., rather cheap or rather near restaurants).",
"title": ""
},
{
"docid": "d14da110523c56d3c1ab2be9d3fbcf8e",
"text": "Women are generally more risk averse than men. We investigated whether between- and within-gender variation in financial risk aversion was accounted for by variation in salivary concentrations of testosterone and in markers of prenatal testosterone exposure in a sample of >500 MBA students. Higher levels of circulating testosterone were associated with lower risk aversion among women, but not among men. At comparably low concentrations of salivary testosterone, however, the gender difference in risk aversion disappeared, suggesting that testosterone has nonlinear effects on risk aversion regardless of gender. A similar relationship between risk aversion and testosterone was also found using markers of prenatal testosterone exposure. Finally, both testosterone levels and risk aversion predicted career choices after graduation: Individuals high in testosterone and low in risk aversion were more likely to choose risky careers in finance. These results suggest that testosterone has both organizational and activational effects on risk-sensitive financial decisions and long-term career choices.",
"title": ""
},
{
"docid": "a3099df83149b84e113d0f12b66e1ab7",
"text": "We propose a multistart CMA-ES with equal budgets for two interlaced restart strategies, one with an increasing population size and one with varying small population sizes. This BI-population CMA-ES is benchmarked on the BBOB-2009 noiseless function testbed and could solve 23, 22 and 20 functions out of 24 in search space dimensions 10, 20 and 40, respectively, within a budget of less than $10^6 D$ function evaluations per trial.",
"title": ""
},
{
"docid": "ee378b32ee744f0377a3723ec00f4313",
"text": "In this article, we present some extensions of the rough set approach and we outline a challenge for the rough set based research. © 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "9970f1b1d4712353a736806f19ff2f2c",
"text": "Many clustering algorithms suffer from scalability problems on massive datasets and do not support any user interaction during runtime. To tackle these problems, anytime clustering algorithms are proposed. They produce a fast approximate result which is continuously refined during the further run. Also, they can be stopped or suspended anytime and provide an answer. In this paper, we propose a novel anytime clustering algorithm based on the density-based clustering paradigm. Our algorithm called A-DBSCAN is applicable to very high dimensional databases such as time series, trajectory, medical data, etc. The general idea of our algorithm is to use a sequence of lower-bounding functions (LBs) of the true similarity measure to produce multiple approximate results of the true density-based clusters. A-DBSCAN operates in multiple levels w.r.t. the LBs and is mainly based on two algorithmic schemes: (1) an efficient distance upgrade scheme which restricts distance calculations to core-objects at each level of the LBs; (2) a local reclustering scheme which restricts update operations to the relevant objects only. Extensive experiments demonstrate that A-DBSCAN acquires very good clustering results at very early stages of execution thus saves a large amount of computational time. Even if it runs to the end, A-DBSCAN is still orders of magnitude faster than DBSCAN.",
"title": ""
}
] | scidocsrr |
20cc5fe9f25a1d5e894095d8fb960111 | Association between substandard classroom ventilation rates and students' academic achievement. | [
{
"docid": "01bf087ff78fb76eab676507d762b80d",
"text": "This meta-analysis reviewed the literature on socioeconomic status (SES) and academic achievement in journal articles published between 1990 and 2000. The sample included 101,157 students, 6,871 schools, and 128 school districts gathered from 74 independent samples. The results showed a medium to strong SES–achievement relation. This relation, however, is moderated by the unit, the source, the range of SES variable, and the type of SES–achievement measure. The relation is also contingent upon school level, minority status, and school location. The author conducted a replica of White’s (1982) meta-analysis to see whether the SES–achievement correlation had changed since White’s initial review was published. The results showed a slight decrease in the average correlation. Practical implications for future research and policy are discussed.",
"title": ""
}
] | [
{
"docid": "c3fe8211d76c12fce10221f97f1028b3",
"text": "Computer architects put significant efforts on the design space exploration of a new processor, as it determines the overall characteristics (e.g., performance, power, cost) of the final product. To thoroughly explore the space and achieve the best results, they need high design evaluation throughput – the ability to quickly assess a large number of designs with minimal costs. Unfortunately, the existing simulators and performance models are either too slow or too inaccurate to meet this demand. As a result, architects often sacrifice the design space coverage to end up with a sub-optimal product. To address this challenge, we propose RpStacks-MT, a methodology to evaluate multi-core processor designs with high throughput. First, we propose a graph-based multi-core performance model, which overcomes the limitations of the existing models to accurately describe a multi-core processor's key performance behaviors. Second, we propose a reuse distance-based memory system model and a dynamic scheduling reconstruction method, which help our graph model to quickly track the performance changes from processor design changes. Lastly, we combine these models with a state of the art design exploration idea to evaluate multiple processor designs in an efficient way. Our evaluations show that RpStacks-MT achieves extremely high design evaluation throughput – 88× higher versus a conventional cycle-level simulator and 18× higher versus an accelerated simulator (on average, for evaluating 10,000 designs) – while maintaining simulator-level accuracy.",
"title": ""
},
{
"docid": "963e2e56265d07b33cfa009434bce943",
"text": "In today’s modern communication industry, antennas are the most important components required to create a communication link. Microstrip antennas are the most suited for aerospace and mobile applications because of their low profile, light weight and low power handling capacity. They can be designed in a variety of shapes in order to obtain enhanced gain and bandwidth, dual-band and circular polarization, and even ultra-wideband operation. The thesis provides a detailed study of the design of a probe-fed rectangular microstrip patch antenna to facilitate dual-polarized, dual-band operation. The design parameters of the antenna have been calculated using the transmission line model and the cavity model. For the simulation process, the IE3D electromagnetic software, which is based on the method of moments (MoM), has been used. The effects of antenna dimensions and substrate parameters on the performance of the antenna have been discussed. The antenna has been designed with embedded spur lines and integrated reactive loading for dual-band operation with better impedance matching. The designed antenna can be operated in two frequency bands with center frequencies of 7.62 GHz (with a bandwidth of 11.68%) and 9.37 GHz (with a bandwidth of 9.83%). A cross slot of unequal length has been inserted so as to obtain dual polarization. This results in a minor shift in the central frequencies of the two bands to 7.81 and 9.28 GHz. At a frequency of 9.16 GHz, circular polarization has been obtained. Thus, dual-band and dual-frequency operation has been successfully incorporated into a single patch.",
"title": ""
},
{
"docid": "7e8feb5f8d816a0c0626f6fdc4db7c04",
"text": "In this paper, we analyze whether cascaded use of the context encoder with increasing input size can improve the results of inpainting. For this purpose, we train a context encoder for 64x64 pixel images in the standard way and use its resized output to fill in the missing input region of the 128x128 context encoder, in both the training and evaluation phases. As a result, the inpainting is visibly more plausible. In order to thoroughly verify the results, we introduce normalized squared-distortion, a measure for quantitative inpainting evaluation, and we provide its mathematical explanation. This is the first attempt to formalize the inpainting measure, which is based on the properties of the latent feature representation, instead of the L2 reconstruction loss.",
"title": ""
},
{
"docid": "610fd71d5e866ead56013642ec7ee69e",
"text": "A constructive algorithm is proposed for feed-forward neural networks which uses node-splitting in the hidden layers to build large networks from smaller ones. The small network forms an approximate model of a set of training data, and the split creates a larger, more powerful network which is initialised with the approximate solution already found. The insufficiency of the smaller network in modelling the system which generated the data leads to oscillation in those hidden nodes whose weight vectors cover regions in the input space where more detail is required in the model. These nodes are identified and split in two using principal component analysis, allowing the new nodes to cover the two main modes of the oscillating vector. Nodes are selected for splitting using principal component analysis on the oscillating weight vectors, or by examining the Hessian matrix of second derivatives of the network error with respect to the weights.",
"title": ""
},
{
"docid": "61f0e20762a8ce5c3c40ea200a32dd43",
"text": "Online distance e-learning systems allow the introduction of innovative methods in pedagogy, along with the study of their effectiveness. Assessing system effectiveness is based on analyzing log files to track study time, the number of connections, and earned game bonus points. This study is based on an example of an online application for practicing foreign language speaking skills between random users, who select the role of teacher or student on their own. The main features of the developed system include pre-defined synchronized teaching and learning materials displayed to both participants, along with user motivation by means of gamification. The actual percentage of successful connections between users who were not specifically motivated and were unfamiliar with each other was measured. The obtained result can be used to gauge the success of the developed system and of the proposed teaching methodology in general. Keywords—e-learning; gamification; marketing; monetization; viral marketing; virality",
"title": ""
},
{
"docid": "00754b8714c81687afb450908d3a3ac1",
"text": "Wearable smart devices are already amongst us. Currently, smartwatches are one of the key drivers of the wearable technology and are being used by a large population of consumers. This paper takes a first look at this increasingly popular technology with a systematic characterization of the smartwatch app markets. We conduct a large scale analysis of three popular smartwatch app markets: Android Wear, Samsung, and Apple, and characterize more than 14,000 smartwatch apps in multiple aspects such as prices, number of developers and categories. Our analysis shows that approximately 41% and 30% of the apps in Android Wear and Samsung app markets are Personalization apps that provide watch faces. Further, we provide a generic taxonomy for apps on all three platforms based on their packaging and modes of communication, that allow us to investigate apps with respect to privacy and security. Finally, we study the privacy risks associated with the app usage by identifying third party trackers integrated into these apps and personal information leakage through network traffic analysis. We show that a higher percentage of Apple apps (62%) are connected to third party trackers compared to Samsung (36%) and Android Wear (46%).",
"title": ""
},
{
"docid": "4949c4698dc9ce7fcea196def92afd06",
"text": "Argumentative text has been analyzed both theoretically and computationally in terms of argumentative structure that consists of argument components (e.g., claims, premises) and their argumentative relations (e.g., support, attack). Less emphasis has been placed on analyzing the semantic types of argument components. We propose a two-tiered annotation scheme to label claims and premises and their semantic types in an online persuasive forum, Change My View, with the long-term goal of understanding what makes a message persuasive. Premises are annotated with the three types of persuasive modes: ethos, logos, pathos, while claims are labeled as interpretation, evaluation, agreement, or disagreement, the latter two designed to account for the dialogical nature of our corpus. We aim to answer three questions: 1) can humans reliably annotate the semantic types of argument components? 2) are types of premises/claims positioned in recurrent orders? and 3) are certain types of claims and/or premises more likely to appear in persuasive messages than in nonpersuasive messages?",
"title": ""
},
{
"docid": "5ce82b8c2cc87ae84026d230f3a97e06",
"text": "This paper presents a new physically-based method for predicting natural hairstyles in the presence of gravity and collisions. The method is based upon a mechanically accurate model for static elastic rods (Kirchhoff model), which accounts for the natural curliness of hair, as well as for hair ellipticity. The equilibrium shape is computed in a stable and easy way by energy minimization. This yields various typical hair configurations that can be observed in the real world, such as ringlets. As our results show, the method can generate different hair types with a very few input parameters, and perform virtual hairdressing operations such as wetting, cutting and drying hair.",
"title": ""
},
{
"docid": "720e417783f801e8f97531710b5eb779",
"text": "In this article, a novel Vertical Take-Off and Landing (VTOL) Single Rotor Unmanned Aerial Vehicle (SR-UAV) will be presented. The SR-UAV's design properties will be analysed in detail, with respect to technical novelties outlining the merits of such a conceptual approach. The system's model will be mathematically formulated, while a cascaded P-PI and PID-based control structure will be utilized in extensive simulation trials for the preliminary evaluation of the SR-UAV's attitude and translational performance.",
"title": ""
},
{
"docid": "c20da8ccf60fbb753815d006627fa673",
"text": "This paper presents LiteOS, a multi-threaded operating system that provides Unix-like abstractions for wireless sensor networks. Aiming to be an easy-to-use platform, LiteOS offers a number of novel features, including: (1) a hierarchical file system and a wireless shell interface for user interaction using UNIX-like commands; (2) kernel support for dynamic loading and native execution of multithreaded applications; and (3) online debugging, dynamic memory, and file system assisted communication stacks. LiteOS also supports software updates through a separation between the kernel and user applications, which are bridged through a suite of system calls. Besides the features that have been implemented, we also describe our perspective on LiteOS as an enabling platform. We evaluate the platform experimentally by measuring the performance of common tasks, and demonstrate its programmability through twenty-one example applications.",
"title": ""
},
{
"docid": "54bae3ac2087dbc7dcba553ce9f2ef2e",
"text": "The landscape of computing capabilities within the home has seen a recent shift from persistent desktops to mobile platforms, which has led to the use of the cloud as the primary computing platform implemented by developers today. Cloud computing platforms, such as Amazon EC2 and Google App Engine, are popular for many reasons including their reliable, always on, and robust nature. The capabilities that centralized computing platforms provide are inherent to their implementation, and unmatched by previous platforms (e.g., Desktop applications). Thus, third-party developers have come to rely on cloud computing platforms to provide high quality services to their end-users.",
"title": ""
},
{
"docid": "8448f57118fb3db90a4f793cbebc1bc8",
"text": "Motivated by increased concern over energy consumption in modern data centers, we propose a new, distributed computing platform called Nano Data Centers (NaDa). NaDa uses ISP-controlled home gateways to provide computing and storage services and adopts a managed peer-to-peer model to form a distributed data center infrastructure. To evaluate the potential for energy savings in NaDa platform we pick Video-on-Demand (VoD) services. We develop an energy consumption model for VoD in traditional and in NaDa data centers and evaluate this model using a large set of empirical VoD access data. We find that even under the most pessimistic scenarios, NaDa saves at least 20% to 30% of the energy compared to traditional data centers. These savings stem from energy-preserving properties inherent to NaDa such as the reuse of already committed baseline power on underutilized gateways, the avoidance of cooling costs, and the reduction of network energy consumption as a result of demand and service co-localization in NaDa.",
"title": ""
},
{
"docid": "1538ff59f18c6e6bc98acedb08ab5f78",
"text": "Radar theory and radar systems have developed greatly over the last 50 years or so. Recently, a new concept in array radar has been introduced by the multiple-input multiple-output (MIMO) radar, which has the potential to dramatically improve the performance of radars in parameter estimation. An earlier concept, synthetic impulse and aperture radar (SIAR), is a typical kind of MIMO radar and probes a channel by transmitting multiple signals separated both spectrally and spatially. To the best knowledge of the authors, almost all the analyses available are based on a simple linear array, while our SIAR system is based on a circular array. This paper first introduces the recent research and development in and the features of MIMO radars, then discusses our SIAR system as a specific example of a MIMO system, and finally lists the unique advantages of SIAR",
"title": ""
},
{
"docid": "9aa1e7c351129fa4a6adb3a8899e518f",
"text": "Thousands of unique non-coding RNA (ncRNA) sequences exist within cells. Work from the past decade has altered our perception of ncRNAs from 'junk' transcriptional products to functional regulatory molecules that mediate cellular processes including chromatin remodelling, transcription, post-transcriptional modifications and signal transduction. The networks in which ncRNAs engage can influence numerous molecular targets to drive specific cell biological responses and fates. Consequently, ncRNAs act as key regulators of physiological programmes in developmental and disease contexts. Particularly relevant in cancer, ncRNAs have been identified as oncogenic drivers and tumour suppressors in every major cancer type. Thus, a deeper understanding of the complex networks of interactions that ncRNAs coordinate would provide a unique opportunity to design better therapeutic interventions.",
"title": ""
},
{
"docid": "04f4058d37a33245abf8ed9acd0af35d",
"text": "After being introduced in 2009, the first fully homomorphic encryption (FHE) scheme has created significant excitement in academia and industry. Despite rapid advances in the last 6 years, FHE schemes are still not ready for deployment due to an efficiency bottleneck. Here we introduce a custom hardware accelerator optimized for a class of reconfigurable logic to bring LTV based somewhat homomorphic encryption (SWHE) schemes one step closer to deployment in real-life applications. The accelerator we present is connected via a fast PCIe interface to a CPU platform to provide homomorphic evaluation services to any application that needs to support blinded computations. Specifically we introduce a number theoretical transform based multiplier architecture capable of efficiently handling very large polynomials. When synthesized for the Xilinx Virtex 7 family the presented architecture can compute the product of large polynomials in under 6.25 msec making it the fastest multiplier design of its kind currently available in the literature and is more than 102 times faster than a software implementation. Using this multiplier we can compute a relinearization operation in 526 msec. When used as an accelerator, for instance, to evaluate the AES block cipher, we estimate a per block homomorphic evaluation performance of 442 msec yielding performance gains of 28.5 and 17 times over similar CPU and GPU implementations, respectively.",
"title": ""
},
{
"docid": "833095fbc8c06c5698521420e1aa6a3b",
"text": "In the last two decades, Computer Aided Detection (CAD) systems were developed to help radiologists analyse screening mammograms, however benefits of current CAD technologies appear to be contradictory, therefore they should be improved to be ultimately considered useful. Since 2012, deep convolutional neural networks (CNN) have been a tremendous success in image recognition, reaching human performance. These methods have greatly surpassed the traditional approaches, which are similar to currently used CAD solutions. Deep CNN-s have the potential to revolutionize medical image analysis. We propose a CAD system based on one of the most successful object detection frameworks, Faster R-CNN. The system detects and classifies malignant or benign lesions on a mammogram without any human intervention. The proposed method sets the state of the art classification performance on the public INbreast database, AUC = 0.95. The approach described here has achieved 2nd place in the Digital Mammography DREAM Challenge with AUC = 0.85. When used as a detector, the system reaches high sensitivity with very few false positive marks per image on the INbreast dataset. Source code, the trained model and an OsiriX plugin are published online at https://github.com/riblidezso/frcnn_cad.",
"title": ""
},
{
"docid": "cbb5d9269067ad2bbdb2c9823338d752",
"text": "This paper presents information about deep neural networks (DNNs) and the concept of deep learning in the field of natural language processing, i.e., machine translation. Nowadays, DNNs play a major role in machine learning techniques. The recursive recurrent neural network (R2NN) is an effective technique for machine learning; it is the combination of a recurrent neural network and a recursive neural network (such as a recursive autoencoder). This paper presents how to train the recurrent neural network for source-to-target reordering using semi-supervised learning methods. The word2vec tool is required to generate word vectors of the source language, and the autoencoder helps in reconstructing the vectors for the target language in a tree structure. The results of word2vec play an important role in the word alignment of the input vectors. The RNN structure is very complicated, and training a large data file with word2vec is also a time-consuming task. Hence, powerful hardware support (a GPU) is required. The GPU improves system performance by decreasing the training time.",
"title": ""
},
{
"docid": "ba0d63c3e6b8807e1a13b36bc30d5d72",
"text": "Weighted median, in the form of either solver or filter, has been employed in a wide range of computer vision solutions for its beneficial properties in sparsity representation. But it is hard to be accelerated due to the spatially varying weight and the median property. We propose a few efficient schemes to reduce computation complexity from O(r2) to O(r) where r is the kernel size. Our contribution is on a new joint-histogram representation, median tracking, and a new data structure that enables fast data access. The effectiveness of these schemes is demonstrated on optical flow estimation, stereo matching, structure-texture separation, image filtering, to name a few. The running time is largely shortened from several minutes to less than 1 second. The source code is provided in the project website.",
"title": ""
},
{
"docid": "fa320a8347093bca4817da2ed7c54e61",
"text": "Gases for electrical insulation are essential for the operation of electric power equipment. This Review gives a brief history of gaseous insulation that involved the emergence of the most potent industrial greenhouse gas known today, namely sulfur hexafluoride. SF6 paved the way to space-saving equipment for the transmission and distribution of electrical energy. Its ever-rising usage in the electrical grid also played a decisive role in the continuous increase of atmospheric SF6 abundance over the last decades. This Review broadly covers the environmental concerns related to SF6 emissions and assesses the latest generation of eco-friendly replacement gases. They offer great potential for reducing greenhouse gas emissions from electrical equipment but at the same time involve technical trade-offs. The rumors of one or the other being superior seem premature, in particular because of the lack of dielectric, environmental, and chemical information for these relatively novel compounds and their dissociation products during operation.",
"title": ""
},
{
"docid": "7e8976250bd67e07fb71c6dd8b5be414",
"text": "With the rapid growth of product review forums, discussion groups, and blogs, it is almost impossible for a customer to make an informed purchase decision. Different and possibly contradictory opinions written by different reviewers can make customers even more confused. In the last few years, mining customer reviews (opinion mining) has emerged as an interesting new research direction to address this need. One of the interesting problems in opinion mining is Opinion Question Answering (Opinion QA). While traditional QA can only answer factual questions, opinion QA aims to find the authors' sentimental opinions on a specific target. Current opinion QA systems suffer from several weaknesses. The main cause of these weaknesses is that these methods can only answer a question if they find content similar to the given question in the given documents. As a result, they cannot answer majority questions such as \"What is the best digital camera?\", nor comparative questions, e.g., \"Does SamsungY work better than CanonX?\". In this paper we address the problem of opinion question answering to answer opinion questions about products by using reviewers' opinions. Our proposed method, called Aspect-based Opinion Question Answering (AQA), supports answering opinion-based questions while addressing the weaknesses of current techniques. AQA contains five phases: question analysis, question expansion, high quality review retrieval, subjective sentence extraction, and answer grouping. AQA adopts an opinion mining technique in the preprocessing phase to identify target aspects and estimate their quality. Target aspects are attributes or components of the target product that have been commented on in the review, e.g. 'zoom' and 'battery life' for a digital camera. We conduct experiments on a real life dataset, Epinions.com, demonstrating the improved effectiveness of AQA in terms of the accuracy of the retrieved answers.",
"title": ""
}
] | scidocsrr |
24ace342e14da55eed4eaf17c8b148a7 | Kinect v2 Sensor-Based Mobile Terrestrial Laser Scanner for Agricultural Outdoor Applications | [
{
"docid": "5cd68b483657180231786dc5a3407c85",
"text": "The ability of robotic systems to autonomously understand and/or navigate in uncertain environments is critically dependent on fairly accurate strategies, which are not always optimally achieved due to effectiveness, computational cost, and parameter settings. In this paper, we propose a novel and simple adaptive strategy to increase the efficiency and drastically reduce the computational effort in particle filters (PFs). The purpose of the adaptive approach (dispersion-based adaptive particle filter - DAPF) is to provide a higher number of particles during the initial searching state (when the localization presents greater uncertainty) and fewer particles during the subsequent state (when the localization exhibits less uncertainty). With the aim of studying the dynamical PF behavior regarding others and putting the proposed algorithm into practice, we designed a methodology based on different target applications and a Kinect sensor. The various experiments conducted for both color tracking and mobile robot localization problems served to demonstrate that the DAPF algorithm can be further generalized. As a result, the DAPF approach significantly improved the computational performance over two well-known filtering strategies: 1) the classical PF with fixed particle set sizes, and 2) the adaptive technique named Kullback-Leibler distance.",
"title": ""
}
] | [
{
"docid": "f0d3a2b2f3ca6223cab0e222da21fb54",
"text": "We present a comprehensive study of evaluation methods for unsupervised embedding techniques that obtain meaningful representations of words from text. Different evaluations result in different orderings of embedding methods, calling into question the common assumption that there is one single optimal vector representation. We present new evaluation techniques that directly compare embeddings with respect to specific queries. These methods reduce bias, provide greater insight, and allow us to solicit data-driven relevance judgments rapidly and accurately through crowdsourcing.",
"title": ""
},
{
"docid": "c3cc032538a10ab2f58ff45acb6d16d0",
"text": "How does scientific research affect the world around us? Being able to answer this question is of great importance in order to appropriately channel efforts and resources in science. The impact by scientists in academia is currently measured by citation based metrics such as h-index, i-index and citation counts. These academic metrics aim to represent the dissemination of knowledge among scientists rather than the impact of the research on the wider world. In this work we are interested in measuring scientific impact beyond academia, on the economy, society, health and legislation (comprehensive impact). Indeed scientists are asked to demonstrate evidence of such comprehensive impact by authoring case studies in the context of the Research Excellence Framework (REF). We first investigate the extent to which existing citation based metrics can be indicative of comprehensive impact. We have collected all recent REF impact case studies from 2014 and we have linked these to papers in citation networks that we constructed and derived from CiteSeerX, arXiv and PubMed Central using a number of text processing and information retrieval techniques. We have demonstrated that existing citation-based metrics for impact measurement do not correlate well with REF impact results. We also consider metrics of online attention surrounding scientific works, such as those provided by the Altmetric API. We argue that in order to be able to evaluate wider non-academic impact we need to mine information from a much wider set of resources, including social media posts, press releases, news articles and political debates stemming from academic work. We also provide our data as a free and reusable collection for further analysis, including the PubMed citation network and the correspondence between REF case studies, grant applications and the academic literature.",
"title": ""
},
{
"docid": "a38ccb15c9fed692ca72c162a5205694",
"text": "In this paper, we utilize tags in Twitter (the hashtags) as an indicator of events. We first study the properties of hashtags for event detection. Based on several observations, we proposed three attributes of hashtags, including (1) instability for temporal analysis, (2) Twitter meme possibility to distinguish social events from virtual topics or memes, and (3) authorship entropy for mining the most contributed authors. Based on these attributes, breaking events are discovered with hashtags, which cover a wide range of social events among different languages in the real world.",
"title": ""
},
{
"docid": "b269bb721ca2a75fd6291295493b7af8",
"text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles.",
"title": ""
},
{
"docid": "09eb96a9be1c8ee56503881e0fd936d5",
"text": "Essential oils are volatile, natural, complex mixtures of compounds characterized by a strong odour and formed by aromatic plants as secondary metabolites. The chemical composition of the essential oil obtained by hydrodistillation from the whole plant of Pulicaria inuloides grown in Yemen and collected at the full flowering stage was analyzed by gas chromatography-mass spectrometry (GC-MS). Several oil components were identified based upon comparison of their mass spectral data with those of reference compounds. The main components identified in the oil were 47.34% of 2-Cyclohexen-1-one, 2-methyl-5-(1-methyl with Hexadecanoic acid (CAS) (12.82%) and Ethane, 1,2-diethoxy(9.613%). In this study, the mineral contents of the whole plant of P. inuloides were determined by atomic absorption spectroscopy. The highest levels of K, Mg, Na, Fe and Ca (159.5, 29.5, 14.2, 13.875 and 5.225 mg/100 g, respectively) were found in P. inuloides.",
"title": ""
},
{
"docid": "7b82678399bf90fd3b08e85f5a3fc39d",
"text": "Language and vision provide complementary information. Integrating both modalities in a single multimodal representation is an unsolved problem with wide-reaching applications to both natural language processing and computer vision. In this paper, we present a simple and effective method that learns a language-to-vision mapping and uses its output visual predictions to build multimodal representations. In this sense, our method provides a cognitively plausible way of building representations, consistent with the inherently reconstructive and associative nature of human memory. Using seven benchmark concept similarity tests we show that the mapped (or imagined) vectors not only help to fuse multimodal information, but also outperform strong unimodal baselines and state-of-the-art multimodal methods, thus exhibiting more human-like judgments. Ultimately, the present work sheds light on fundamental questions of natural language understanding concerning the fusion of vision and language such as the plausibility of more associative and reconstructive approaches.",
"title": ""
},
{
"docid": "350c899dbd0d9ded745b70b6f5e97d19",
"text": "We propose an approach to include contextual features for labeling images, in which each pixel is assigned to one of a finite set of labels. The features are incorporated into a probabilistic framework, which combines the outputs of several components. Components differ in the information they encode. Some focus on the image-label mapping, while others focus solely on patterns within the label field. Components also differ in their scale, as some focus on fine-resolution patterns while others on coarser, more global structure. A supervised version of the contrastive divergence algorithm is applied to learn these features from labeled image data. We demonstrate performance on two real-world image databases and compare it to a classifier and a Markov random field.",
"title": ""
},
{
"docid": "55054ba2753651c2f7fc93d1448e0cfe",
"text": "There is an industry need for wideband baluns to operate across several decades of bandwidth covering the HF, VHF, and UHF spectrum. For readers unfamiliar with the term \"balun,\" it is a compound word that combines the terms balanced and unbalanced. This is in reference to the conversion between a balanced source and an unbalanced load, often requiring an impedance transformation of some type. It's common in literature to see the terms \"balanced\" and \"unbalanced\" used interchangeably with the terms \"differential\" and \"single-ended,\" and this article will also share this naming convention. These devices are particularly useful in network matching applications and can be constructed at low cost and a relatively small bill of materials. Wideband baluns first found widespread use converting the balanced load of a dipole antenna to the unbalanced output of a single-ended amplifier. These devices can also be found in solid-state differential circuits such as amplifiers and mixers where network matching is required to achieve the maximum power transfer to the load. In the design of RF power amplifiers, wideband baluns play a critical role in an amplifier's performance, including its input and output impedances, gain flatness, linearity, power efficiency, and many other performance characteristics.This article describes the theory of operation, design procedure, and measured results of the winning wideband balun presented at the 2013 IEEE Microwave Theory and Techniques Society (MTT-S) International Microwave Symposium (IMS2013), sponsored by the MTT-17 Technical Coordinating Committee on HF-VHF-UHF technology. The wideband balun was designed to deliver a 4:1 impedance transformation, converting a balanced 100 Ω source to an unbalanced 25 Ω load. It was constructed using a multiaperture ferrite core and a pair of bifilar wires with four parallel turns.",
"title": ""
},
{
"docid": "cda00f4a71564c5dc1ebb99a26d41dbb",
"text": "A new therapeutic approach to the rehabilitation of movement after stroke, termed constraint-induced (CI) movement therapy, has been derived from basic research with monkeys given somatosensory deafferentation. CI movement therapy consists of a family of therapies; their common element is that they induce stroke patients to greatly increase the use of an affected upper extremity for many hours a day over a period of 10 to 14 consecutive days. The signature intervention involves motor restriction of the contralateral upper extremity in a sling and training of the affected arm. The therapies result in large changes in amount of use of the affected arm in the activities of daily living outside of the clinic that have persisted for the 2 years measured to date. Patients who will benefit from Cl therapy can be identified before the beginning of treatment.",
"title": ""
},
{
"docid": "dc33d2edcfb124af607bcb817589f6e9",
"text": "In this letter, a novel coaxial line to substrate integrated waveguide (SIW) broadband transition is presented. The transition is designed by connecting the inner conductor of a coaxial line to an open-circuited SIW. The configuration directly transforms the TEM mode of a coaxial line to the fundamental TE10 mode of the SIW. A prototype back-to-back transition is fabricated for X-band operation using a 0.508 mm thick RO 4003C substrate with dielectric constant 3.55. Comparison with other reported transitions shows that the present structure provides lower passband insertion loss, wider bandwidth and most compact. The area of each transition is 0.08λg2 where λg is the guided wavelength at passband center frequency of f0 = 10.5 GHz. Measured 15 dB and 20 dB matching bandwidths are over 48% and 20%, respectively, at f0.",
"title": ""
},
{
"docid": "dd40063dd10027f827a65976261c8683",
"text": "Many software process methods and tools presuppose the existence of a formal model of a process. Unfortunately, developing a formal model for an on-going, complex process can be difficult, costly, and error prone. This presents a practical barrier to the adoption of process technologies, which would be lowered by automated assistance in creating formal models. To this end, we have developed a data analysis technique that we term process discovery. Under this technique, data describing process events are first captured from an on-going process and then used to generate a formal model of the behavior of that process. In this article we describe a Markov method that we developed specifically for process discovery, as well as describe two additional methods that we adopted from other domains and augmented for our purposes. The three methods range from the purely algorithmic to the purely statistical. We compare the methods and discuss their application in an industrial case study.",
"title": ""
},
{
"docid": "22348f1441faa116cce4b05c45848380",
"text": "In this paper we propose a method for matching the scales of 3D point clouds. 3D point sets of the same scene obtained by 3D reconstruction techniques usually differ in scale. To match scales, we estimate the ratio of scales of two given 3D point clouds. By performing PCA of spin images over different scales of two point clouds, two sets of cumulative contribution rate curves are generated. Such sets of curves can be considered to characterize the scale of the given 3D point clouds. To find the scale ratio of two point clouds, we register the two sets of curves by using a variant of ICP that estimates the ratio of scales. Simulations with the Stanford bunny and experimental results with 3D reconstructions of artificial and real scenes demonstrate that the ratio of any 3D point clouds can be effectively used for scale matching.",
"title": ""
},
{
"docid": "70a94ef8bf6750cdb4603b34f0f1f005",
"text": "What does this paper demonstrate. We show that a very simple 2D architecture (in the sense that it does not make any assumption or reasoning about the 3D information of the object) generally used for object classification, if properly adapted to the specific task, can provide top performance also for pose estimation. More specifically, we demonstrate how a 1-vs-all classification framework based on a Fisher Vector (FV) [1] pyramid or convolutional neural network (CNN) based features [2] can be used for pose estimation. In addition, suppressing neighboring viewpoints during training seems key to get good results.",
"title": ""
},
{
"docid": "cd98932832d8821a98032ae6bbef2576",
"text": "An open-loop stereophonic acoustic echo suppression (SAES) method without preprocessing is presented for teleconferencing systems, where the Wiener filter in the short-time Fourier transform (STFT) domain is employed. Instead of identifying the echo path impulse responses with adaptive filters, the proposed algorithm estimates the echo spectra from the stereo signals using two weighting functions. The spectral modification technique originally proposed for noise reduction is adopted to remove the echo from the microphone signal. Moreover, a priori signal-to-echo ratio (SER) based Wiener filter is used as the gain function to achieve a trade-off between musical noise reduction and computational load for real-time operations. Computer simulation shows the effectiveness and the robustness of the proposed method in several different scenarios.",
"title": ""
},
{
"docid": "4f059822d0da0ada039b11c1d65c7c32",
"text": "Lead time reduction is a key concern of many industrial buyers of capital facilities given current economic conditions. Supply chain initiatives in manufacturing settings have led owners to expect that dramatic reductions in lead time are possible in all phases of their business, including the delivery of capital materials. Further, narrowing product delivery windows and increasing pressure to be first-tomarket create significant external pressure to reduce lead time. In this paper, a case study is presented in which an owner entered the construction supply chain to procure and position key long-lead materials. The materials were held at a position in the supply chain selected to allow some flexibility for continued customization, but dramatic reduction in the time-to-site. Simulation was used as a tool to consider time-to-site tradeoffs for multiple inventory locations so as to better match the needs of the construction effort.",
"title": ""
},
{
"docid": "156b2c39337f4fe0847b49fa86dc094b",
"text": "The paper attempts to describe the space of possible mind designs by first equating all minds to software. Next it proves some properties of the mind design space such as infinitude of minds, size and representation complexity of minds. A survey of mind design taxonomies is followed by a proposal for a new field of investigation devoted to study of minds, intellectology.",
"title": ""
},
{
"docid": "2d774ec62cdac08997cb8b86e73fe015",
"text": "This paper focuses on modeling resolving and simulations of the inverse kinematics of an anthropomorphic redundant robotic structure with seven degrees of freedom and a workspace similar to human arm. Also the kinematical model and the kinematics equations of the robotic arm are presented. A method of resolving the redundancy of seven degrees of freedom robotic arm is presented using Fuzzy Logic toolbox from MATLAB®.",
"title": ""
},
{
"docid": "5c96222feacb0454d353dcaa1f70fb83",
"text": "Geographically dispersed teams are rarely 100% dispersed. However, by focusing on teams that are either fully dispersed or fully co-located, team research to date has lived on the ends of a spectrum at which relatively few teams may actually work. In this paper, we develop a more robust view of geographic dispersion in teams. Specifically, we focus on the spatialtemporal distances among team members and the configuration of team members across sites (independent of the spatial and temporal distances separating those sites). To better understand the nature of dispersion, we develop a series of five new measures and explore their relationships with communication frequency data from a sample of 182 teams (of varying degrees of dispersion) from a Fortune 500 telecommunications firm. We conclude with recommendations regarding the use of different measures and important questions that they could help address. Geographic Dispersion in Teams 1",
"title": ""
},
{
"docid": "7c6d2ede54f0445e852b8f9da95fca32",
"text": "In this paper we apply Conformal Prediction (CP) to the k -Nearest Neighbours Regression (k -NNR) algorithm and propose ways of extending the typical nonconformity measure used for regression so far. Unlike traditional regression methods which produce point predictions, Conformal Predictors output predictive regions that satisfy a given confidence level. The regions produced by any Conformal Predictor are automatically valid, however their tightness and therefore usefulness depends on the nonconformity measure used by each CP. In effect a nonconformity measure evaluates how strange a given example is compared to a set of other examples based on some traditional machine learning algorithm. We define six novel nonconformity measures based on the k -Nearest Neighbours Regression algorithm and develop the corresponding CPs following both the original (transductive) and the inductive CP approaches. A comparison of the predictive regions produced by our measures with those of the typical regression measure suggests that a major improvement in terms of predictive region tightness is achieved by the new measures.",
"title": ""
},
{
"docid": "006793685095c0772a1fe795d3ddbd76",
"text": "Legislators, designers of legal information systems, as well as citizens face often problems due to the interdependence of the laws and the growing number of references needed to interpret them. In this paper, we introduce the ”Legislation Network” as a novel approach to address several quite challenging issues for identifying and quantifying the complexity inside the Legal Domain. We have collected an extensive data set of a more than 60-year old legislation corpus, as published in the Official Journal of the European Union, and we further analysed it as a complex network, thus gaining insight into its topological structure. Among other issues, we have performed a temporal analysis of the evolution of the Legislation Network, as well as a robust resilience test to assess its vulnerability under specific cases that may lead to possible breakdowns. Results are quite promising, showing that our approach can lead towards an enhanced explanation in respect to the structure and evolution of legislation properties.",
"title": ""
}
] | scidocsrr |
0a013908ff4b03b4a5a3c690be904efe | Sensing and coverage for a network of heterogeneous robots | [
{
"docid": "45d496fe8762fa52bbf6430eda2b7cfd",
"text": "This paper presents deployment algorithms for multiple mobile robots with line-of-sight sensing and communication capabilities in a simple nonconvex polygonal environment. The objective of the proposed algorithms is to achieve full visibility of the environment. We solve the problem by constructing a novel data structure called the vertex-induced tree and designing schemes to deploy over the nodes of this tree by means of distributed algorithms. The agents are assumed to have access to a local memory and their operation is partially asynchronous",
"title": ""
}
] | [
{
"docid": "f0285873e91d0470e8fbd8ce4430742f",
"text": "Copying an element from a photo and pasting it into a painting is a challenging task. Applying photo compositing techniques in this context yields subpar results that look like a collage — and existing painterly stylization algorithms, which are global, perform poorly when applied locally. We address these issues with a dedicated algorithm that carefully determines the local statistics to be transferred. We ensure both spatial and inter-scale statistical consistency and demonstrate that both aspects are key to generating quality results. To cope with the diversity of abstraction levels and types of paintings, we introduce a technique to adjust the parameters of the transfer depending on the painting. We show that our algorithm produces significantly better results than photo compositing or global stylization techniques and that it enables creative painterly edits that would be otherwise difficult to achieve. CCS Concepts •Computing methodologies → Image processing;",
"title": ""
},
{
"docid": "21ec8a3ea14829c0c21b4caaad08d508",
"text": "OBJECTIVE\nWe investigated the effect of low-fat (2.5%) dahi containing probiotic Lactobacillus acidophilus and Lactobacillus casei on progression of high fructose-induced type 2 diabetes in rats.\n\n\nMETHODS\nDiabetes was induced in male albino Wistar rats by feeding 21% fructose in water. The body weight, food and water intakes, fasting blood glucose, glycosylated hemoglobin, oral glucose tolerance test, plasma insulin, liver glycogen content, and blood lipid profile were recorded. The oxidative status in terms of thiobarbituric acid-reactive substances and reduced glutathione contents in liver and pancreatic tissues were also measured.\n\n\nRESULTS\nValues for blood glucose, glycosylated hemoglobin, glucose intolerance, plasma insulin, liver glycogen, plasma total cholesterol, triacylglycerol, low-density lipoprotein cholesterol, very low-density lipoprotein cholesterol, and blood free fatty acids were increased significantly after 8 wk of high fructose feeding; however, the dahi-supplemented diet restricted the elevation of these parameters in comparison with the high fructose-fed control group. In contrast, high-density lipoprotein cholesterol decreased slightly and was retained in the dahi-fed group. The dahi-fed group also exhibited lower values of thiobarbituric acid-reactive substances and higher values of reduced glutathione in liver and pancreatic tissues compared with the high fructose-fed control group.\n\n\nCONCLUSION\nThe probiotic dahi-supplemented diet significantly delayed the onset of glucose intolerance, hyperglycemia, hyperinsulinemia, dyslipidemia, and oxidative stress in high fructose-induced diabetic rats, indicating a lower risk of diabetes and its complications.",
"title": ""
},
{
"docid": "df02dafb455e2b68035cf8c150e28a0a",
"text": "Blueberry, raspberry and strawberry may have evolved strategies for survival due to the different soil conditions available in their natural environment. Since this might be reflected in their response to rhizosphere pH and N form supplied, investigations were carried out in order to compare effects of nitrate and ammonium nutrition (the latter at two different pH regimes) on growth, CO2 gas exchange, and on the activity of key enzymes of the nitrogen metabolism of these plant species. Highbush blueberry (Vaccinium corymbosum L. cv. 13–16–A), raspberry (Rubus idaeus L. cv. Zeva II) and strawberry (Fragaria × ananassa Duch. cv. Senga Sengana) were grown in 10 L black polyethylene pots in quartz sand with and without 1% CaCO3 (w: v), respectively. Nutrient solutions supplied contained nitrate (6 mM) or ammonium (6 mM) as the sole nitrogen source. Compared with strawberries fed with nitrate nitrogen, supply of ammonium nitrogen caused a decrease in net photosynthesis and dry matter production when plants were grown in quartz sand without added CaCO3. In contrast, net photosynthesis and dry matter production increased in blueberries fed with ammonium nitrogen, while dry matter production of raspberries was not affected by the N form supplied. In quartz sand with CaCO3, ammonium nutrition caused less deleterious effects on strawberries, and net photosynthesis in raspberries increased as compared to plants grown in quartz sand without CaCO3 addition. Activity of nitrate reductase (NR) was low in blueberries and could only be detected in the roots of plants supplied with nitrate nitrogen. In contrast, NR activity was high in leaves, but low in roots of raspberry and strawberry plants. Ammonium nutrition caused a decrease in NR level in leaves. Activity of glutamine synthetase (GS) was high in leaves but lower in roots of blueberry, raspberry and strawberry plants. The GS level was not significantly affected by the nitrogen source supplied. 
The effects of nitrate or ammonium nitrogen on net photosynthesis, growth, and activity of enzymes in blueberry, raspberry and strawberry cultivars appear to reflect their different adaptability to soil pH and N form due to the conditions of their natural environment.",
"title": ""
},
{
"docid": "0418d5ce9f15a91aeaacd65c683f529d",
"text": "We propose a novel cancelable biometric approach, known as PalmHashing, to solve the non-revocable biometric proposed method hashes palmprint templates with a set of pseudo-random keys to obtain a unique code called palmhash. The palmhash code can be stored in portable devices such tokens and smartcards for verification. Multiple sets of palmha can be maintained in multiple applications. Thus the privacy and security of the applications can be greatly enhance compromised, revocation can also be achieved via direct replacement of a new set of palmhash code. In addition, PalmHashin offers several advantages over contemporary biometric approaches such as clear separation of the genuine-imposter and zero EER occurrences. In this paper, we outline the implementation details of this method and also highlight its p in security-critical applications. 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "3e5312f6d3c02d8df2903ea80c1bbae5",
"text": "Stroke has now become the leading cause of severe disability. Rehabilitation robots are gradually becoming popular for stroke rehabilitation to improve motor recovery, as robotic technology can assist, enhance, and further quantify rehabilitation training for stroke patients. However, most of the available rehabilitation robots are complex and involve multiple degrees-of-freedom (DOFs) causing it to be very expensive and huge in size. Rehabilitation robots should be useful but also need to be affordable and portable enabling more patients to afford and train independently at home. This paper presents a development of an affordable, portable and compact rehabilitation robot that implements different rehabilitation strategies for stroke patient to train forearm and wrist movement in an enhanced virtual reality environment with haptic feedback.",
"title": ""
},
{
"docid": "03d1ffa6be8d26dc03a95fc89ea61943",
"text": "Recent years have witnessed an increasing interest in image-based question-answering (QA) tasks. However, due to data limitations, there has been much less work on video-based QA. In this paper, we present TVQA, a largescale video QA dataset based on 6 popular TV shows. TVQA consists of 152,545 QA pairs from 21,793 clips, spanning over 460 hours of video. Questions are designed to be compositional in nature, requiring systems to jointly localize relevant moments within a clip, comprehend subtitle-based dialogue, and recognize relevant visual concepts. We provide analyses of this new dataset as well as several baselines and a multi-stream end-to-end trainable neural network framework for the TVQA task. The dataset is publicly available at http://tvqa.cs.unc.edu.",
"title": ""
},
{
"docid": "298df39e9b415bc1eed95ed56d3f32df",
"text": "In this work, we present a true 3D 128 Gb 2 bit/cell vertical-NAND (V-NAND) Flash product for the first time. The use of barrier-engineered materials and gate all-around structure in the 3D V-NAND cell exhibits advantages over 1 × nm planar NAND, such as small Vth shift due to small cell coupling and narrow natural Vth distribution. Also, a negative counter-pulse scheme realizes a tightly programmed cell distribution. In order to reduce the effect of a large WL coupling, a glitch-canceling discharge scheme and a pre-offset control scheme is implemented. Furthermore, an external high-voltage supply scheme along with the proper protection scheme for a high-voltage failure is used to achieve low power consumption. The chip accomplishes 50 MB/s write throughput with 3 K endurance for typical embedded applications. Also, extended endurance of 35 K is achieved with 36 MB/s of write throughput for data center and enterprise SSD applications.",
"title": ""
},
{
"docid": "ed9b027bafedfa9305d11dca49ecc930",
"text": "This paper announces and discusses the experimental results from the Noisy Iris Challenge Evaluation (NICE), an iris biometric evaluation initiative that received worldwide participation and whose main innovation is the use of heavily degraded data acquired in the visible wavelength and uncontrolled setups, with subjects moving and at widely varying distances. The NICE contest included two separate phases: 1) the NICE.I evaluated iris segmentation and noise detection techniques and 2) the NICE:II evaluated encoding and matching strategies for biometric signatures. Further, we give the performance values observed when fusing recognition methods at the score level, which was observed to outperform any isolated recognition strategy. These results provide an objective estimate of the potential of such recognition systems and should be regarded as reference values for further improvements of this technology, which-if successful-may significantly broaden the applicability of iris biometric systems to domains where the subjects cannot be expected to cooperate.",
"title": ""
},
{
"docid": "6a94bd02742b43102c25f874ba309bc9",
"text": "Reward models have become an important method for specifying performability models for many types of systems. Many methods have been proposed for solving reward models, but no method has proven itself to be applicable over all system classes and sizes. Furthermore, speci cation of reward models has usually been done at the state level, which can be extremely cumbersome for realistic models. We describe a method to specify reward models as stochastic activity networks (SANs) with impulse and rate rewards, and a method by which to solve these models via uniformization. The method is an extension of one proposed by de Souza e Silva and Gail in which impulse and rate rewards are speci ed at the SAN level, and solved in a single model. Furthermore, we propose a new technique for discarding paths in the uniformized process whose contribution to the reward variable is minimal, which greatly reduces the time and space required for a solution. A bound is calculated on the error introduced by this discarding, and its e ectiveness is illustrated through the study of the performability and availability of a degradable multi-processor system.",
"title": ""
},
{
"docid": "bc9fcd462ad5c0519731380a2729c0b6",
"text": "We extend the reach of functional encryption schemes that are provably secure under simple assumptions against unbounded collusion to include function-hiding inner product schemes. Our scheme is a private key functional encryption scheme, where ciphertexts correspond to vectors ~x, secret keys correspond to vectors ~y, and a decryptor learns 〈~x, ~y〉. Our scheme employs asymmetric bilinear maps and relies only on the SXDH assumption to satisfy a natural indistinguishability-based security notion where arbitrarily many key and ciphertext vectors can be simultaneously changed as long as the key-ciphertext dot product relationships are all preserved.",
"title": ""
},
{
"docid": "13bd6515467934ba7855f981fd4f1efd",
"text": "The flourishing synergy arising between organized crimes and the Internet has increased the insecurity of the digital world. How hackers frame their actions? What factors encourage and energize their behavior? These are very important but highly underresearched questions. We draw upon literatures on psychology, economics, international relation and warfare to propose a framework that addresses these questions. We found that countries across the world differ in terms of regulative, normative and cognitive legitimacy to different types of web attacks. Cyber wars and crimes are also functions of the stocks of hacking skills relative to the availability of economic opportunities. An attacking unit’s selection criteria for the target network include symbolic significance and criticalness, degree of digitization of values and weakness in defense mechanisms. Managerial and policy implications are discussed and directions for future research are suggested.",
"title": ""
},
{
"docid": "26032527ca18ef5a8cdeff7988c6389c",
"text": "This paper aims to develop a load forecasting method for short-term load forecasting, based on an adaptive two-stage hybrid network with self-organized map (SOM) and support vector machine (SVM). In the first stage, a SOM network is applied to cluster the input data set into several subsets in an unsupervised manner. Then, groups of 24 SVMs for the next day's load profile are used to fit the training data of each subset in the second stage in a supervised way. The proposed structure is robust with different data types and can deal well with the nonstationarity of load series. In particular, our method has the ability to adapt to different models automatically for the regular days and anomalous days at the same time. With the trained network, we can straightforwardly predict the next-day hourly electricity load. To confirm the effectiveness, the proposed model has been trained and tested on the data of the historical energy load from New York Independent System Operator.",
"title": ""
},
{
"docid": "dae9d92671b2379837a9bcd16bb57098",
"text": "Natural locomotion in room-scale virtual reality (VR) is constrained by the user's immediate physical space. To overcome this obstacle, researchers have established the use of the impossible space design mechanic. This game illustrates the applied use of impossible spaces for enhancing the aesthetics of, and presence within, a room-scale VR game. This is done by creating impossible spaces with a gaming narrative intent. First, locomotion and impossible spaces in VR are surveyed; second, a VR game called Ares is put forth as a prototype; and third, a user study is briefly explored.",
"title": ""
},
{
"docid": "40a87654ac33c46f948204fd5c7ef4c1",
"text": "We introduce a novel scheme to train binary convolutional neural networks (CNNs) – CNNs with weights and activations constrained to {-1,+1} at run-time. It has been known that using binary weights and activations drastically reduce memory size and accesses, and can replace arithmetic operations with more efficient bitwise operations, leading to much faster test-time inference and lower power consumption. However, previous works on binarizing CNNs usually result in severe prediction accuracy degradation. In this paper, we address this issue with two major innovations: (1) approximating full-precision weights with the linear combination of multiple binary weight bases; (2) employing multiple binary activations to alleviate information loss. The implementation of the resulting binary CNN, denoted as ABC-Net, is shown to achieve much closer performance to its full-precision counterpart, and even reach the comparable prediction accuracy on ImageNet and forest trail datasets, given adequate binary weight bases and activations.",
"title": ""
},
{
"docid": "58873aa177cc69d13afa70c413af9efa",
"text": "In vitro drug metabolism studies, which are inexpensive and readily carried out, serve as an adequate screening mechanism to characterize drug metabolites, elucidate their pathways, and make suggestions for further in vivo testing. This publication is a sequel to part I in a series and aims at providing a general framework to guide designs and protocols of the in vitro drug metabolism studies considered good practice in an efficient manner such that it would help researchers avoid common pitfalls and misleading results. The in vitro models include hepatic and non-hepatic microsomes, cDNA-expressed recombinant human CYPs expressed in insect cells or human B lymphoblastoid, chemical P450 inhibitors, S9 fraction, hepatocytes and liver slices. Important conditions for conducting the in vitro drug metabolism studies using these models are stated, including relevant concentrations of enzymes, co-factors, inhibitors and test drugs; time of incubation and sampling in order to establish kinetics of reactions; appropriate control settings, buffer selection and method validation. Separate in vitro data should be logically integrated to explain results from animal and human studies and to provide insights into the nature and consequences of in vivo drug metabolism. This article offers technical information and data and addresses scientific rationales and practical skills related to in vitro evaluation of drug metabolism to meet regulatory requirements for drug development.",
"title": ""
},
{
"docid": "861c78c3886af55657cc21cb9dc8d8f7",
"text": "According the universal serial cyclic redundancy check (CRC) technology, one of the new CRC algorithm based on matrix is referred, which describe an new parallel CRC coding circuit structure with r matrix transformation and pipeline technology. According to the method of parallel CRC coding in high-speed data transmitting, it requires a lot of artificial calculation. Due to the large amount of calculation, it is easy to produce some calculation error. According to the traditional thought of the serial CRC, the algorithm of parallel CRC based on the thought of matrix transformation and iterative has been deduced and expressed. The improved algorithm by pipeline technology has been applied in other systems which require high timing requirements of problem, The design has been implemented through Verilog hardware description language in FPGA device, which has achieved a good validation. It has become a very good method for high-speed CRC coding and decoding.",
"title": ""
},
{
"docid": "70a293a975ec358f48c1b2fda1dfa3eb",
"text": "This paper presents a novel approach for inducing lexical taxonomies automatically from text. We recast the learning problem as that of inferring a hierarchy from a graph whose nodes represent taxonomic terms and edges their degree of relatedness. Our model takes this graph representation as input and fits a taxonomy to it via combination of a maximum likelihood approach with a Monte Carlo Sampling algorithm. Essentially, the method works by sampling hierarchical structures with probability proportional to the likelihood with which they produce the input graph. We use our model to infer a taxonomy over 541 nouns and show that it outperforms popular flat and hierarchical clustering algorithms.",
"title": ""
},
{
"docid": "98f76e0ea0f028a1423e1838bdebdccb",
"text": "An operational-transconductance-amplifier (OTA) design for ultra-low voltage ultra-low power applications is proposed. The input stage of the proposed OTA utilizes a bulk-driven pseudo-differential pair to allow minimum supply voltage while achieving a rail-to-rail input range. All the transistors in the proposed OTA operate in the subthreshold region. Using a novel self-biasing technique to bias the OTA obviates the need for extra biasing circuitry and enhances the performance of the OTA. The proposed technique ensures the OTA robustness to process variations and increases design feasibility under ultra-low-voltage conditions. Moreover, the proposed biasing technique significantly improves the common-mode and power-supply rejection of the OTA. To further enhance the bandwidth and allow the use of smaller compensation capacitors, a compensation network based on a damping-factor control circuit is exploited. The OTA is fabricated in a 65 nm CMOS technology. Measurement results show that the OTA provides a low-frequency gain of 46 dB and rail-to-rail input common-mode range with a supply voltage as low as 0.5 V. The dc gain of the OTA is greater than 42 dB for supply voltage as low as 0.35 V. The power dissipation is 182 μW at VDD=0.5 V and 17 μW at VDD=0.35 V.",
"title": ""
},
{
"docid": "bd18a2a92781344dc9821f98559a9c69",
"text": "The increasing complexity of Database Management Systems (DBMSs) and the dearth of their experienced administrators make an urgent call for an Autonomic DBMS that is capable of managing and maintaining itself. In this paper, we examine the characteristics that a DBMS should have in order to be considered autonomic and assess the position of today’s commercial DBMSs such as DB2, SQL Server, and Oracle.",
"title": ""
},
{
"docid": "1bdd050958754ef19dd35f53dd055b5a",
"text": "We present a method for isotropic remeshing of arbitrary genus surfaces. The method is based on a mesh adaptation process, namely, a sequence of local modifications performed on a copy of the original mesh, while referring to the original mesh geometry. The algorithm has three stages. In the first stage the required number or vertices are generated by iterative simplification or refinement. The second stage performs an initial vertex partition using an area-based relaxation method. The third stage achieves precise isotropic vertex sampling prescribed by a given density function on the mesh. We use a modification of Lloyd’s relaxation method to construct a weighted centroidal Voronoi tessellation of the mesh. We apply these iterations locally on small patches of the mesh that are parameterized into the 2D plane. This allows us to handle arbitrary complex meshes with any genus and any number of boundaries. The efficiency and the accuracy of the remeshing process is achieved using a patch-wise parameterization technique. Key-words: Surface mesh generation, isotropic triangle meshing, centroidal Voronoi tessellation, local parameterization. ∗ Technion, Haifa, Israel † INRIA Sophia-Antipolis ‡ Technion, Haifa, Israel Remaillage isotrope de surfaces utilisant une paramétrisation locale Résumé : Cet article décrit une méthode de remaillage isotrope de surfaces triangulées. L’approche repose sur une technique d’adaptation locale du maillage. L’idée consiste à opérer une séquence d’opérations élémentaires sur une copie du maillage original, tout en faisant référence au maillage original pour la géométrie. L’algorithme comporte trois étapes. La première étape ramène la complexité du maillage au nombre de sommets désiré par raffinement ou décimation itérative. La seconde étape opère une première répartition des sommets via une technique de relaxation optimisant un équilibrage local des aires sur les triangles. 
La troisième étape opère un placement isotrope des sommets via une relaxation de Lloyd pour construire une tessellation de Voronoi centrée. Les itérations de relaxation de Lloyd sont appliquées localement dans un espace paramétrique 2D calculé à la volée sur un sous-ensemble de la triangulation originale de telle que sorte que les triangulations de complexité et de genre arbitraire puissent être efficacement remaillées. Mots-clés : Maillage de surfaces, maillage triangulaire isotrope, diagrammes de Voronoi centrés, paramétrisation locale. Isotropic Remeshing of Surfaces",
"title": ""
}
] | scidocsrr |
fa29448fa3f997481548cc9c99abf421 | Similarity by Composition | [
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
}
] | [
{
"docid": "6a8a849bc8272a7b73259e732e3be81b",
"text": "Northrop Grumman is developing an atom-based magnetometer technology that has the potential for providing a global position reference independent of GPS. The NAV-CAM sensor is a direct outgrowth of the Nuclear Magnetic Resonance Gyro under development by the same technical team. It is capable of providing simultaneous measurements of all 3 orthogonal axes of magnetic vector field components using a single compact vapor cell. The vector sum determination of the whole-field scalar measurement achieves similar precision to the individual vector components. By using a single sensitive element (vapor cell) this approach eliminates many of the problems encountered when using physically separate sensors or sensing elements.",
"title": ""
},
{
"docid": "b5a5c48f998f77a56821d03c7f8ad64e",
"text": "A microwave sensor having features useful for the noninvasive determination of blood glucose levels is described. The sensor output is an amplitude only measurement of the standing wave versus frequency sampled at a fixed point on an open-terminated spiral-shaped microstrip line. Test subjects press their thumb against the line and apply contact pressure sufficient to fall within a narrow pressure range. Data are reported for test subjects whose blood glucose is independently measured using a commercial glucometer.",
"title": ""
},
{
"docid": "cebd2d1ae41ea1179256b885cbd13d3d",
"text": "The unconstrained acquisition of facial data in real-world conditions may result in face images with significant pose variations, illumination changes, and occlusions, affecting the performance of facial landmark localization and recognition methods. In this paper, a novel method, robust to pose, illumination variations, and occlusions is proposed for joint face frontalization and landmark localization. Unlike the state-of-the-art methods for landmark localization and pose correction, where large amount of manually annotated images or 3D facial models are required, the proposed method relies on a small set of frontal images only. By observing that the frontal facial image of both humans and animals, is the one having the minimum rank of all different poses, a model which is able to jointly recover the frontalized version of the face as well as the facial landmarks is devised. To this end, a suitable optimization problem is solved, concerning minimization of the nuclear norm (convex surrogate of the rank function) and the matrix $$\\ell _1$$ ℓ 1 norm accounting for occlusions. The proposed method is assessed in frontal view reconstruction of human and animal faces, landmark localization, pose-invariant face recognition, face verification in unconstrained conditions, and video inpainting by conducting experiment on 9 databases. The experimental results demonstrate the effectiveness of the proposed method in comparison to the state-of-the-art methods for the target problems.",
"title": ""
},
{
"docid": "842740ba02fd8d4a515dad3a4acc0c55",
"text": "In this paper we present a multivariate analysis of evoked hemodynamic responses and their spatiotemporal dynamics as measured with fast fMRI. This analysis uses standard multivariate statistics (MANCOVA) and the general linear model to make inferences about effects of interest and canonical variates analysis (CVA) to describe the important features of these effects. We have used these techniques to characterize the form of hemodynamic transients that are evoked during a cognitive or sensorimotor task. In particular we do not assume that the neural or hemodynamic response reaches some \"steady state\" but acknowledge that these physiological changes could show profound task-dependent adaptation and time-dependent changes during the task. To address this issue we have modeled hemodynamic responses using appropriate temporal basis functions and estimated their exact form within the general linear model using MANCOVA. We do not propose that this analysis is a particularly powerful way to make inferences about functional specialization (or more generally functional anatomy) because it only provides statistical inferences about the distributed (whole brain) responses evoked by different conditions. However, its application to characterizing the temporal aspects of evoked hemodynamic responses reveals some compelling and somewhat unexpected perspectives on transient but stereotyped responses to changes in cognitive or sensorimotor processing. The most remarkable observation is that these responses can be biphasic and show profound differences in their form depending on the extant task or condition. Furthermore these differences can be seen in the absence of changes in mean signal.",
"title": ""
},
{
"docid": "22e7479c10d7b963e9dd2cd3aeee6706",
"text": "In this paper, we present an onboard monocular vision system for autonomous takeoff, hovering and landing of a Micro Aerial Vehicle (MAV). Since pose information with metric scale is critical for autonomous flight of a MAV, we present a novel solution to six degrees of freedom (DOF) pose estimation. It is based on a single image of a typical landing pad which consists of the letter “H” surrounded by a circle. A vision algorithm for robust and real-time landing pad recognition is implemented. Then the 5 DOF pose is estimated from the elliptic projection of the circle by using projective geometry. The remaining geometric ambiguity is resolved by incorporating the gravity vector estimated by the inertial measurement unit (IMU). The last degree of freedom pose, yaw angle of the MAV, is estimated from the ellipse fitted from the letter “H”. The efficiency of the presented vision system is demonstrated comprehensively by comparing it to ground truth data provided by a tracking system and by using its pose estimates as control inputs to autonomous flights of a quadrotor.",
"title": ""
},
{
"docid": "0452e261fcd1a18b49e037493abda496",
"text": "Joint torque sensory feedback is an effective technique for achieving high-performance robot force and motion control. However, most robots are not equipped with joint torque sensors, and it is difficult to add them without changing the joint's mechanical structure. A method for estimating joint torque that exploits the existing structural elasticity of robotic joints with harmonic drive transmission is proposed in this paper. In the presented joint torque estimation method, motor-side and link-side position measurements along with a proposed harmonic drive compliance model, are used to realize stiff and sensitive joint torque estimation, without the need for adding an additional elastic body and using strain gauges to measure the joint torque. The proposed method has been experimentally studied and its performance is compared with measurements of a commercial torque sensor. The results have attested the effectiveness of the proposed torque estimation method.",
"title": ""
},
{
"docid": "28c82ece7caa6e07bf31a143c2d3adbd",
"text": "We develop a novel method for training of GANs for unsupervised and class conditional generation of images, called Linear Discriminant GAN (LD-GAN). The discriminator of an LD-GAN is trained to maximize the linear separability between distributions of hidden representations of generated and targeted samples, while the generator is updated based on the decision hyper-planes computed by performing LDA over the hidden representations. LD-GAN provides a concrete metric of separation capacity for the discriminator, and we experimentally show that it is possible to stabilize the training of LD-GAN simply by calibrating the update frequencies between generators and discriminators in the unsupervised case, without employment of normalization methods and constraints on weights. In the class conditional generation tasks, the proposed method shows improved training stability together with better generalization performance compared to WGAN (Arjovsky et al. 2017) that employs an auxiliary classifier.",
"title": ""
},
{
"docid": "5e7acc47170cbe30d330096b8aa87956",
"text": "For years we have known that cortical neurons collectively have synchronous or oscillatory patterns of activity, the frequencies and temporal dynamics of which are associated with distinct behavioural states. Although the function of these oscillations has remained obscure, recent experimental and theoretical results indicate that correlated fluctuations might be important for cortical processes, such as attention, that control the flow of information in the brain.",
"title": ""
},
{
"docid": "c91c74a262669d0539a37fa7b51938aa",
"text": "BACKGROUND\nBioengineered hyaluronic acid derivatives are currently available that provide for safe and effective soft-tissue augmentation in the comprehensive approach to nonsurgical facial rejuvenation. Current hyaluronic acid fillers do not require preinjection skin testing and produce reproducible, longer-lasting, nonpermanent results compared with other fillers, such as collagen.\n\n\nMETHODS\nA review of the authors' extensive experience at the University of Texas Southwestern Medical Center was conducted to formulate the salient requirements for successful utilization of hyaluronic acid fillers. Indications, technical refinements, and key components for optimized product administration categorized by anatomical location are described. The efficacy and longevity of results are also discussed.\n\n\nRESULTS\nBioengineered hyaluronic acid fillers allow for safe and effective augmentation of selected anatomical regions of the face, when properly administered. Combined treatment with botulinum toxin type A can enhance the effects and longevity by as much as 50 percent. Key components to optimal filler administration include proper anatomical evaluation, changing or combining various fillers based on particle size, altering the depth of injection, using different injection techniques, and coadministration of botulinum toxin type A when indicated. Concomitant administration of hyaluronic acid fillers along with surgical methods of facial rejuvenation can serve as a powerful tool in maximizing a comprehensive treatment plan.\n\n\nCONCLUSIONS\nCurrent techniques in nonsurgical facial rejuvenation and shaping with hyaluronic acid fillers are safe, effective, and long-lasting. Combination regimens that include surgical facial rejuvenation techniques and/or coadministration of botulinum toxin type A further optimize results, leading to greater patient satisfaction.",
"title": ""
},
{
"docid": "35dd6675e287b5e364998ee138677032",
"text": "Focussed structured document retrieval employs the concept of best entry points (BEPs), which are intended to provide optimal starting-points from which users can browse to relevant document components. This paper describes two small-scale studies, using experimental data from the Shakespeare user study, which developed and evaluated different approaches to the problem of automatic identification of BEPs.",
"title": ""
},
{
"docid": "b6f0d75d0bd8c050c391e148367829a4",
"text": "Insufficient supply of animal protein is a major problem in developing countries including Nigeria. Rabbits are adjudged to be a convenient source of palatable and nutritious meat, high in protein, and contain low fat and cholesterol. A doe can produce more than 15 times her own weight in offspring in a year. However, its productivity may be limited by inadequate nutrition. The objective of this study was to determine the effect of probiotic (Saccharomyces cerevisiae) supplementation on growth performance and some hematological parameters of rabbit. The appropriate level of the probiotic inclusion for excellent health status and optimum productivity was also determined. A total of 40 male rabbits were randomly divided into four groups (A–D) of ten rabbits each. Each group was subdivided into two replicates of five rabbits each. They were fed pelleted grower mash ad libitum. The feed for groups A to C were supplemented with bioactive yeast (probiotic) at inclusion levels of 0.08, 0.12, and 0.16 g yeast/kg diet, respectively. Group D had no yeast (control). Daily feed intake was determined. The rabbits were weighed weekly. The packed cell volume (PCV), hemoglobin concentration, white blood cell total, and differential counts were determined at the 8th week, 16th week, and 22nd week following standard procedures. The three results which did not have any significant difference were pooled together. Group A which had 0.08 g yeast/kg of diet had a significantly lower (P ≤ 0.05) PCV than groups B (which had 0.12 g yeast/kg of diet) and C (which had 0.16 g yeast/kg of diet) as well as D (the control). Total WBC count for groups B and C (14.35 ± 0.100 × 103/μl and 14.65 ± 0.786 × 103/μl, respectively) were significantly higher (P ≤ 0.05) than groups A and D (6.33 ± 0.335 × 103/μl and 10.40 ± 0.296 × 103/μl, respectively). Also the absolute neutrophils and lymphocytes counts were significantly higher (P ≤ 0.05) in groups B and C than in groups A and D. 
Group B had significantly higher (P ≤ 0.05) weight gain (1.025 ± 0.006 kg/rabbit) followed by group A (0.950 ± 0.092 kg/rabbit). The control (group D) had the least weight gain of 0.623 ± 0.0.099 kg/rabbit. These results showed that like most probiotics, bioactive yeast at an appropriate level of inclusion had a significant beneficial effect on health status and growth rate of rabbit. Probiotic supplementation level of 0.12 g yeast/kg of diet was recommended for optimum rabbit production.",
"title": ""
},
{
"docid": "8240e0ebc13c75d774f7cc8576f78bfc",
"text": "We have built an anatomically correct testbed (ACT) hand with the purpose of understanding the intrinsic biomechanical and control features in human hands that are critical for achieving robust, versatile, and dexterous movements, as well as rich object and world exploration. By mimicking the underlying mechanics and controls of the human hand in a hardware platform, our goal is to achieve previously unmatched grasping and manipulation skills. In this paper, the novel constituting mechanisms, unique muscle to joint relationships, and movement demonstrations of the thumb, index finger, middle finger, and wrist of the ACT Hand are presented. The grasping and manipulation abilities of the ACT Hand are also illustrated. The fully functional ACT Hand platform allows for the possibility to design and experiment with novel control algorithms leading to a deeper understanding of human dexterity.",
"title": ""
},
{
"docid": "eac322eae08da165b436308336aac37a",
"text": "The potential of BIM is generally recognized in the construction industry, but the practical application of BIM for management purposes is, however, still limited among contractors. The objective of this study is to review the current scheduling process of construction in light of BIM-based scheduling, and to identify how it should be incorporated into current practice. The analysis of the current scheduling processes identifies significant discrepancies between the overall and the detailed levels of scheduling. The overall scheduling process is described as an individual endeavor with limited and unsystematic sharing of knowledge within and between projects. Thus, the reuse of scheduling data and experiences are inadequate, preventing continuous improvements of the overall schedules. Besides, the overall scheduling process suffers from lack of information, caused by uncoordinated and unsynchronized overlap of the design and construction processes. Consequently, the overall scheduling is primarily based on intuition and personal experiences, rather than well founded figures of the specific project. Finally, the overall schedule is comprehensive and complex, and consequently, difficult to overview and communicate. Scheduling on the detailed level, on the other hand, follows a stipulated approach to scheduling, i.e. the Last Planner System (LPS), which is characterized by involvement of all actors in the construction phase. Thus, the major challenge when implementing BIM-based scheduling is to improve overall scheduling, which in turn, can secure a better starting point of the LPS. The study points to the necessity of involving subcontractors and manufactures in the earliest phases of the project in order to create project specific information for the overall schedule. 
In addition, the design process should be prioritized and coordinated with each craft, a process library should be introduced to promote transfer of knowledge and continuous improvements, and information flow between design and scheduling processes must change from push to pull.",
"title": ""
},
{
"docid": "01835769f2dc9391051869374e200a6a",
"text": "Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (lscr 2) error term added to a sparsity-inducing (usually lscr1) regularizater. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian (i.e., separable in the unknowns) plus the original sparsity-inducing regularizer; our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. Under mild conditions (namely convexity of the regularizer), we prove convergence of the proposed iterative algorithm to a minimum of the objective function. In addition to solving the standard lscr2-lscr1 case, our framework yields efficient solution techniques for other regularizers, such as an lscrinfin norm and group-separable regularizers. It also generalizes immediately to the case in which the data is complex rather than real. Experiments with CS problems show that our approach is competitive with the fastest known methods for the standard lscr2-lscr1 problem, as well as being efficient on problems with other separable regularization terms.",
"title": ""
},
{
"docid": "fcd98a7540dd59e74ea71b589c255adb",
"text": "Current Domain Adaptation (DA) methods based on deep architectures assume that the source samples arise from a single distribution. However, in practice most datasets can be regarded as mixtures of multiple domains. In these cases exploiting single-source DA methods for learning target classifiers may lead to sub-optimal, if not poor, results. In addition, in many applications it is difficult to manually provide the domain labels for all source data points, i.e. latent domains should be automatically discovered. This paper introduces a novel Convolutional Neural Network (CNN) architecture which (i) automatically discovers latent domains in visual datasets and (ii) exploits this information to learn robust target classifiers. Our approach is based on the introduction of two main components, which can be embedded into any existing CNN architecture: (i) a side branch that automatically computes the assignment of a source sample to a latent domain and (ii) novel layers that exploit domain membership information to appropriately align the distribution of the CNN internal feature representations to a reference distribution. We test our approach on publicly-available datasets, showing that it outperforms state-of-the-art multi-source DA methods by a large margin.",
"title": ""
},
{
"docid": "292db0e308281a3c1c9be44f76eacc93",
"text": "This paper proposes steganalysis methods for extensions of least-significant bit (LSB) overwriting to both of the two lowest bit planes in digital images: there are two distinct embedding paradigms. The author investigates how detectors for standard LSB replacement can be adapted to such embedding, and how the methods of \"structural steganalysis\", which gives the most sensitive detectors for standard LSB replacement, may be extended and applied to make more sensitive purpose-built detectors for two bit plane steganography. The literature contains only one other detector specialized to detect replacement multiple bits, and those presented here are substantially more sensitive. The author also compares the detectability of standard LSB embedding with the two methods of embedding in the lower two bit planes: although the novel detectors have a high accuracy from the steganographer's point of view, the empirical results indicate that embedding in the two lowest bit planes is preferable (in some cases, highly preferable) to embedding in one",
"title": ""
},
{
"docid": "f9b56de3658ef90b611c78bdb787d85b",
"text": "Time series prediction techniques have been used in many real-world applications such as financial market prediction, electric utility load forecasting , weather and environmental state prediction, and reliability forecasting. The underlying system models and time series data generating processes are generally complex for these applications and the models for these systems are usually not known a priori. Accurate and unbiased estimation of the time series data produced by these systems cannot always be achieved using well known linear techniques, and thus the estimation process requires more advanced time series prediction algorithms. This paper provides a survey of time series prediction applications using a novel machine learning approach: support vector machines (SVM). The underlying motivation for using SVMs is the ability of this methodology to accurately forecast time series data when the underlying system processes are typically nonlinear, non-stationary and not defined a-priori. SVMs have also been proven to outperform other non-linear techniques including neural-network based non-linear prediction techniques such as multi-layer perceptrons.The ultimate goal is to provide the reader with insight into the applications using SVM for time series prediction, to give a brief tutorial on SVMs for time series prediction, to outline some of the advantages and challenges in using SVMs for time series prediction, and to provide a source for the reader to locate books, technical journals, and other online SVM research resources.",
"title": ""
},
{
"docid": "c79c4bdf28ca638161cb82ac9991d5e9",
"text": "This letter proposes a novel wideband circularly polarized magnetoelectric dipole antenna. In the proposed antenna, a pair of rotationally symmetric horizontal patches functions as an electric dipole, and two vertical patches with the ground act as an equivalent magnetic dipole. A Γ-shaped probe is used to excite the antenna, and a metallic cavity with two gaps is designed for wideband and good performance in radiation. A prototype was fabricated and measured. The experimental results show that the proposed antenna has an impedance bandwidth of 65% for SWR≤2 from 1.76 to 3.46 GHz, a 3-dB axial-ratio bandwidth of 71.5% from 1.68 to 3.55 GHz, and a stable gain of 8 ± 1 dBi. Good unidirectional radiation characteristic and low back-lobe level are achieved over the whole operating frequency band.",
"title": ""
},
{
"docid": "5b62ac3acefed74bf82f2c375b10c9e2",
"text": "P2P lending is a new form of lending where in the lenders and borrowers can meet at a common platform like Prosper and ZOPA and strike a best deal. While the borrower looks for a lender who offers the fund at a cheaper interest rate, the lender looks for a borrower whose probability of default is nil or minimal. The peer to peer lending sites can help the lenders judge the borrower by allowing the analysis of the social structures like friendship networks and group affiliations. A particular user can be judged based on his profile and on the information extracted from his social network like borrower's friend's profile and activities (like lending, borrowing and bidding activities). We are using classification algorithm to classify good and bad borrowers, where the input attributes consists of both core credit and social network information. Most of these algorithms only take a single table as input, whereas in the real world most data are stored in multiple tables and managed by relational database systems. Transferring data from multiple tables into a single table, especially merging the social network data causes problems like high redundancy. A simple classifier performs well on real single table data but when applied in a multi-relational (Multi table) setting; its accuracy suffers from the altered statistical information of individual attributes during “join”. Therefore we are using a multi relational Bayesian classification method to predict the default probabilities of borrowers.",
"title": ""
},
{
"docid": "2da9ad29e0b10a8dc8b01a8faf35bb1a",
"text": "Face recognition is challenge task which involves determining the identity of facial images. With availability of a massive amount of labeled facial images gathered from Internet, deep convolution neural networks(DCNNs) have achieved great success in face recognition tasks. Those images are gathered from unconstrain environment, which contain people with different ethnicity, age, gender and so on. However, in the actual application scenario, the target face database may be gathered under different conditions compered with source training dataset, e.g. different ethnicity, different age distribution, disparate shooting environment. These factors increase domain discrepancy between source training database and target application database which makes the learnt model degenerate in target database. Meanwhile, for the target database where labeled data are lacking or unavailable, directly using target data to fine-tune pre-learnt model becomes intractable and impractical. In this paper, we adopt unsupervised transfer learning methods to address this issue. To alleviate the discrepancy between source and target face database and ensure the generalization ability of the model, we constrain the maximum mean discrepancy (MMD) between source database and target database and utilize the massive amount of labeled facial images of source database to training the deep neural network at the same time. We evaluate our method on two face recognition benchmarks and significantly enhance the performance without utilizing the target label.",
"title": ""
}
] | scidocsrr |
50eb728b77c847c39dd859207dc6dcfe | Towards Music Imagery Information Retrieval: Introducing the OpenMIIR Dataset of EEG Recordings from Music Perception and Imagination | [
{
"docid": "b2032f8912fac19b18bc5a836c3536e9",
"text": "Electroencephalographic measurements are commonly used in medical and research areas. This review article presents an introduction into EEG measurement. Its purpose is to help with orientation in EEG field and with building basic knowledge for performing EEG recordings. The article is divided into two parts. In the first part, background of the subject, a brief historical overview, and some EEG related research areas are given. The second part explains EEG recording.",
"title": ""
}
] | [
{
"docid": "bedc7de2ede206905e89daf61828f868",
"text": "Spectral graph partitioning provides a powerful approach to image segmentation. We introduce an alternate idea that finds partitions with a small isoperimetric constant, requiring solution to a linear system rather than an eigenvector problem. This approach produces the high quality segmentations of spectral methods, but with improved speed and stability.",
"title": ""
},
{
"docid": "126b62a0ae62c76b43b4fb49f1bf05cd",
"text": "OBJECTIVE\nThe aim of the study was to evaluate efficacy of fractional CO2 vaginal laser treatment (Laser, L) and compare it to local estrogen therapy (Estriol, E) and the combination of both treatments (Laser + Estriol, LE) in the treatment of vulvovaginal atrophy (VVA).\n\n\nMETHODS\nA total of 45 postmenopausal women meeting inclusion criteria were randomized in L, E, or LE groups. Assessments at baseline, 8 and 20 weeks, were conducted using Vaginal Health Index (VHI), Visual Analog Scale for VVA symptoms (dyspareunia, dryness, and burning), Female Sexual Function Index, and maturation value (MV) of Meisels.\n\n\nRESULTS\nForty-five women were included and 3 women were lost to follow-up. VHI average score was significantly higher at weeks 8 and 20 in all study arms. At week 20, the LE arm also showed incremental improvement of VHI score (P = 0.01). L and LE groups showed a significant improvement of dyspareunia, burning, and dryness, and the E arm only of dryness (P < 0.001). LE group presented significant improvement of total Female Sexual Function Index (FSFI) score (P = 0.02) and individual domains of pain, desire, and lubrication. In contrast, the L group showed significant worsening of pain domain in FSFI (P = 0.04), but FSFI total scores were comparable in all treatment arms at week 20.\n\n\nCONCLUSIONS\nCO2 vaginal laser alone or in combination with topical estriol is a good treatment option for VVA symptoms. Sexual-related pain with vaginal laser treatment might be of concern.",
"title": ""
},
{
"docid": "5a81a087713e3fd530c646f10073de98",
"text": "This study explores the influence of wastewater feedstock composition on hydrothermal liquefaction (HTL) biocrude oil properties and physico-chemical characteristics. Spirulina algae, swine manure, and digested sludge were converted under HTL conditions (300°C, 10-12 MPa, and 30 min reaction time). Biocrude yields ranged from 9.4% (digested sludge) to 32.6% (Spirulina). Although similar higher heating values (32.0-34.7 MJ/kg) were estimated for all product oils, more detailed characterization revealed significant differences in biocrude chemistry. Feedstock composition influenced the individual compounds identified as well as the biocrude functional group chemistry. Molecular weights tracked with obdurate carbohydrate content and followed the order of Spirulina<swine manure<digested sludge. A similar trend was observed in boiling point distributions and the long branched aliphatic contents. These findings show the importance of HTL feedstock composition and highlight the need for better understanding of biocrude chemistries when considering bio-oil uses and upgrading requirements.",
"title": ""
},
{
"docid": "601ab07a9169073032e713b0f5251c1b",
"text": "We discuss fast exponential time solutions for NP-complete problems. We survey known results and approaches, we provide pointers to the literature, and we discuss several open problems in this area. The list of discussed NP-complete problems includes the travelling salesman problem, scheduling under precedence constraints, satisfiability, knapsack, graph coloring, independent sets in graphs, bandwidth of a graph, and many more.",
"title": ""
},
{
"docid": "ef345b834b801a36b88d3f462f7c2a0e",
"text": "At the global level of the Big Five, Extraversion and Neuroticism are the strongest predictors of life satisfaction. However, Extraversion and Neuroticism are multifaceted constructs that combine more specific traits. This article examined the contribution of facets of Extraversion and Neuroticism to life satisfaction in four studies. The depression facet of Neuroticism and the positive emotions/cheerfulness facet of Extraversion were the strongest and most consistent predictors of life satisfaction. These two facets often accounted for more variance in life satisfaction than Neuroticism and Extraversion. The findings suggest that measures of depression and positive emotions/cheerfulness are necessary and sufficient to predict life satisfaction from personality traits. The results also lead to a more refined understanding of the specific personality traits that influence life satisfaction: Depression is more important than anxiety or anger and a cheerful temperament is more important than being active or sociable.",
"title": ""
},
{
"docid": "d922dbcdd2fb86e7582a4fb78990990e",
"text": "This paper presents a novel system to estimate body pose configuration from a single depth map. It combines both pose detection and pose refinement. The input depth map is matched with a set of pre-captured motion exemplars to generate a body configuration estimation, as well as semantic labeling of the input point cloud. The initial estimation is then refined by directly fitting the body configuration with the observation (e.g., the input depth). In addition to the new system architecture, our other contributions include modifying a point cloud smoothing technique to deal with very noisy input depth maps, a point cloud alignment and pose search algorithm that is view-independent and efficient. Experiments on a public dataset show that our approach achieves significantly higher accuracy than previous state-of-art methods.",
"title": ""
},
{
"docid": "b10074ccf133a3c18a2029a5fe52f7ff",
"text": "Maneuvering vessel detection and tracking (VDT), incorporated with state estimation and trajectory prediction, are important tasks for vessel navigational systems (VNSs), as well as vessel traffic monitoring and information systems (VTMISs) to improve maritime safety and security in ocean navigation. Although conventional VNSs and VTMISs are equipped with maritime surveillance systems for the same purpose, intelligent capabilities for vessel detection, tracking, state estimation, and navigational trajectory prediction are underdeveloped. Therefore, the integration of intelligent features into VTMISs is proposed in this paper. The first part of this paper is focused on detecting and tracking of a multiple-vessel situation. An artificial neural network (ANN) is proposed as the mechanism for detecting and tracking multiple vessels. In the second part of this paper, vessel state estimation and navigational trajectory prediction of a single-vessel situation are considered. An extended Kalman filter (EKF) is proposed for the estimation of vessel states and further used for the prediction of vessel trajectories. Finally, the proposed VTMIS is simulated, and successful simulation results are presented in this paper.",
"title": ""
},
{
"docid": "5d2eabccd2e9873b00de3d21903f8ba7",
"text": "In prior work we have demonstrated the noise robustness of a novel microphone solution, the PARAT earplug communication terminal. Here we extend that work with results for the ETSI Advanced Front-End and segmental cepstral mean and variance normalization (CMVN). We also propose a method for doing CMVN in the model domain. This removes the need to train models on normalized features, which may significantly extend the applicability of CMVN. The recognition results are comparable to those of the traditional approach.",
"title": ""
},
{
"docid": "c095de72c7cffc19f3b4302c2045525c",
"text": "Reinforcement learning schemes perform direct on-line search in control space. This makes them appropriate for modifying control rules to obtain improvements in the performance of a system. The effectiveness of a reinforcement learning strategy is studied here through the training of a learning classifier system (LCS) that controls the movement of an autonomous vehicle in simulated paths including left and right turns. The LCS comprises a set of condition-action rules (classifiers) that compete to control the system and evolve by means of a genetic algorithm (GA). Evolution and operation of classifiers depend upon an appropriate credit assignment mechanism based on reinforcement learning. Different design options and the role of various parameters have been investigated experimentally. The performance of vehicle movement under the proposed evolutionary approach is superior compared with that of other (neural) approaches based on reinforcement learning that have been applied previously to the same benchmark problem.",
"title": ""
},
{
"docid": "038f34588540683674f7ec44325b510a",
"text": "We propose a framework for automatic modeling, detection, and tracking of 3D objects with a Kinect. The detection part is mainly based on the recent template-based LINEMOD approach [1] for object detection. We show how to build the templates automatically from 3D models, and how to estimate the 6 degrees-of-freedom pose accurately and in real-time. The pose estimation and the color information allow us to check the detection hypotheses and improves the correct detection rate by 13% with respect to the original LINEMOD. These many improvements make our framework suitable for object manipulation in Robotics applications. Moreover we propose a new dataset made of 15 registered, 1100+ frame video sequences of 15 various objects for the evaluation of future competing methods. Fig. 1. 15 different texture-less 3D objects are simultaneously detected with our approach under different poses on heavy cluttered background with partial occlusion. Each detected object is augmented with its 3D model. We also show the corresponding coordinate systems.",
"title": ""
},
{
"docid": "9ce08ed9e7e34ef1f5f12bfbe54e50ea",
"text": "GPU-based clusters are increasingly being deployed in HPC environments to accelerate a variety of scientific applications. Despite their growing popularity, the GPU devices themselves are under-utilized even for many computationally-intensive jobs. This stems from the fact that the typical GPU usage model is one in which a host processor periodically offloads computationally intensive portions of an application to the coprocessor. Since some portions of code cannot be offloaded to the GPU (for example, code performing network communication in MPI applications), this usage model results in periods of time when the GPU is idle. GPUs could be time-shared across jobs to \"fill\" these idle periods, but unlike CPU resources such as the cache, the effects of sharing the GPU are not well understood. Specifically, two jobs that time-share a single GPU will experience resource contention and interfere with each other. The resulting slow-down could lead to missed job deadlines. Current cluster managers do not support GPU-sharing, but instead dedicate GPUs to a job for the job's lifetime.\n In this paper, we present a framework to predict and handle interference when two or more jobs time-share GPUs in HPC clusters. Our framework consists of an analysis model, and a dynamic interference detection and response mechanism to detect excessive interference and restart the interfering jobs on different nodes. We implement our framework in Torque, an open-source cluster manager, and using real workloads on an HPC cluster, show that interference-aware two-job colocation (although our method is applicable to colocating more than two jobs) improves GPU utilization by 25%, reduces a job's waiting time in the queue by 39% and improves job latencies by around 20%.",
"title": ""
},
{
"docid": "85ba4fa537c8486ff0f8bb39ac2553b2",
"text": "Sign language, which is a medium of communication for deaf people, uses manual communication and body language to convey meaning, as opposed to using sound. This paper presents a prototype Malayalam text to sign language translation system. The proposed system takes Malayalam text as input and generates corresponding Sign Language. Output animation is rendered using a computer generated model. This system will help to disseminate information to the deaf people in public utility places like railways, banks, hospitals etc. This will also act as an educational tool in learning Sign Language.",
"title": ""
},
{
"docid": "49f2f870496d34fe379c0b077197bde3",
"text": "Ultra wideband components have been developed using SIW technology. The various components including a GCPW transition with less than 0.4dB insertion loss are developed. In addition to, T and Y-junctions are optimized with relatively wide bandwidth of greater than 63% and 40% respectively that have less than 0.6 dB insertion loss. The developed transition was utilized to design an X-band 8 way power divider that demonstrated excellent performance over a 5 GHz bandwidth with less than ±4º and ±0.9 dB phase and amplitude imbalance, respectively. The developed SIW power divider has a low profile and is particularly suitable for circuits' integration.",
"title": ""
},
{
"docid": "cbc6986bf415292292b7008ae4d13351",
"text": "In this work we present a method to improve the pruning step of the current state-of-the-art methodology to compress neural networks. The novelty of the proposed pruning technique is in its differentiability, which allows pruning to be performed during the backpropagation phase of the network training. This enables end-to-end learning and strongly reduces the training time. The technique is based on a family of differentiable pruning functions and a new regularizer specifically designed to enforce pruning. The experimental results show that the joint optimization of both the thresholds and the network weights permits reaching a higher compression rate, reducing the number of weights of the pruned network by a further 14% to 33% compared to the current state-of-the-art. Furthermore, we believe that this is the first study where the generalization capabilities in transfer learning tasks of the features extracted by a pruned network are analyzed. To achieve this goal, we show that the representations learned using the proposed pruning methodology maintain the same effectiveness and generality of those learned by the corresponding non-compressed network on a set of different recognition tasks.",
"title": ""
},
{
"docid": "ebd7f55f11d6fe8e4f439358b8a65eb4",
"text": "This article investigates the problem of Simultaneous Localization and Mapping (SLAM) from the perspective of linear estimation theory. The problem is first formulated in terms of graph embedding: a graph describing robot poses at subsequent instants of time needs be embedded in a three-dimensional space, assuring that the estimated configuration maximizes measurement likelihood. Combining tools belonging to linear estimation and graph theory, a closed-form approximation to the full SLAM problem is proposed, under the assumption that the relative position and the relative orientation measurements are independent. The approach needs no initial guess for optimization and is formally proven to admit solution under the SLAM setup. The resulting estimate can be used as an approximation of the actual nonlinear solution or can be further refined by using it as an initial guess for nonlinear optimization techniques. Finally, the experimental analysis demonstrates that such refinement is often unnecessary, since the linear estimate is already accurate.",
"title": ""
},
{
"docid": "21756eeb425854184ba2ea722a935928",
"text": "Collaborative filtering aims at learning predictive models of user preferences, interests or behavior from community data, that is, a database of available user preferences. In this article, we describe a new family of model-based algorithms designed for this task. These algorithms rely on a statistical modelling technique that introduces latent class variables in a mixture model setting to discover user communities and prototypical interest profiles. We investigate several variations to deal with discrete and continuous response variables as well as with different objective functions. The main advantages of this technique over standard memory-based methods are higher accuracy, constant time prediction, and an explicit and compact model representation. The latter can also be used to mine for user communities. The experimental evaluation shows that substantial improvements in accuracy over existing methods and published results can be obtained.",
"title": ""
},
{
"docid": "815e0ad06fdc450aa9ba3f56ab19ab05",
"text": "A member of the Liliaceae family, garlic ( Allium sativum) is highly regarded throughout the world for both its medicinal and culinary value. Early men of medicine such as Hippocrates, Pliny and Aristotle encouraged a number of therapeutic uses for this botanical. Today, it is commonly used in many cultures as a seasoning or spice. Garlic also stands as the second most utilized supplement. With its sulfur containing compounds, high trace mineral content, and enzymes, garlic has shown anti-viral, anti-bacterial, anti-fungal and antioxidant abilities. Diseases that may be helped or prevented by garlic’s medicinal actions include Alzheimer’s Disease, cancer, cardiovascular disease (including atherosclerosis, strokes, hypertension, thrombosis and hyperlipidemias) children’s conditions, dermatologic applications, stress, and infections. Some research points to possible benefits in diabetes, drug toxicity, and osteoporosis.",
"title": ""
},
{
"docid": "4c50dd5905ce7e1f772e69673abe1094",
"text": "The wireless industry has been experiencing an explosion of data traffic usage in recent years and is now facing an even bigger challenge, an astounding 1000-fold data traffic increase in a decade. The required traffic increase is in bits per second per square kilometer, which is equivalent to bits per second per Hertz per cell × Hertz × cell per square kilometer. The innovations through higher utilization of the spectrum (bits per second per Hertz per cell) and utilization of more bandwidth (Hertz) are quite limited: spectral efficiency of a point-to-point link is very close to the theoretical limits, and utilization of more bandwidth is a very costly solution in general. Hyper-dense deployment of heterogeneous and small cell networks (HetSNets) that increase cells per square kilometer by deploying more cells in a given area is a very promising technique as it would provide a huge capacity gain by bringing small base stations closer to mobile devices. This article presents a holistic view on hyperdense HetSNets, which include fundamental preference in future wireless systems, and technical challenges and recent technological breakthroughs made in such networks. Advancements in modeling and analysis tools for hyper-dense HetSNets are also introduced with some additional interference mitigation and higher spectrum utilization techniques. This article ends with a promising view on the hyper-dense HetSNets to meet the upcoming 1000× data challenge.",
"title": ""
},
{
"docid": "14a90781132fa3932d41b21b382ba362",
"text": "In this paper, a prevalent type of zero-voltage-transition bidirectional converters is analyzed with the inclusion of the reverse recovery effect of the diodes. The main drawback of this type is missing the soft-switching condition of the main switches at operating duty cycles smaller than 0.5. As a result, soft-switching condition would be lost in one of the bidirectional converter operating modes (forward or reverse modes) since the duty cycles of the forward and reverse modes are complement of each other. Analysis shows that the rectifying diode reverse recovery would assist in providing the soft-switching condition for the duty cycles below 0.5, which is done by a proper design of the snubber capacitor and with no limitation on the rectifying diode current rate at turn-off. Hence, the problems associated with the soft-switching range and the reverse recovery of the rectifying diode are solved simultaneously, and soft-switching condition for both operating modes of the bidirectional converter is achieved with no extra auxiliary components and no complex control. The theoretical analysis for a bidirectional buck and boost converter is presented in detail, and the validity of the theoretical analysis is justified using the experimental results of a 250-W 135- to 200-V prototype converter.",
"title": ""
}
] | scidocsrr |
4b2afadf68808bec3edbb2144ea1b547 | AGIL: Learning Attention from Human for Visuomotor Tasks | [
{
"docid": "825b567c1a08d769aa334b707176f607",
"text": "A critical function in both machine vision and biological vision systems is attentional selection of scene regions worthy of further analysis by higher-level processes such as object recognition. Here we present the first model of spatial attention that (1) can be applied to arbitrary static and dynamic image sequences with interactive tasks and (2) combines a general computational implementation of both bottom-up (BU) saliency and dynamic top-down (TD) task relevance; the claimed novelty lies in the combination of these elements and in the fully computational nature of the model. The BU component computes a saliency map from 12 low-level multi-scale visual features. The TD component computes a low-level signature of the entire image, and learns to associate different classes of signatures with the different gaze patterns recorded from human subjects performing a task of interest. We measured the ability of this model to predict the eye movements of people playing contemporary video games. We found that the TD model alone predicts where humans look about twice as well as does the BU model alone; in addition, a combined BU*TD model performs significantly better than either individual component. Qualitatively, the combined model predicts some easy-to-describe but hard-to-compute aspects of attentional selection, such as shifting attention leftward when approaching a left turn along a racing track. Thus, our study demonstrates the advantages of integrating BU factors derived from a saliency map and TD factors learned from image and task contexts in predicting where humans look while performing complex visually-guided behavior.",
"title": ""
},
{
"docid": "24880289ca2b6c31810d28c8363473b3",
"text": "Deep reinforcement learning (RL) has achieved several high profile successes in difficult decision-making problems. However, these algorithms typically require a huge amount of data before they reach reasonable performance. In fact, their performance during learning can be extremely poor. This may be acceptable for a simulator, but it severely limits the applicability of deep RL to many real-world tasks, where the agent must learn in the real environment. In this paper we study a setting where the agent may access data from previous control of the system. We present an algorithm, Deep Q-learning from Demonstrations (DQfD), that leverages small sets of demonstration data to massively accelerate the learning process even from relatively small amounts of demonstration data and is able to automatically assess the necessary ratio of demonstration data while learning thanks to a prioritized replay mechanism. DQfD works by combining temporal difference updates with supervised classification of the demonstrator’s actions. We show that DQfD has better initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN) as it starts with better scores on the first million steps on 41 of 42 games and on average it takes PDD DQN 83 million steps to catch up to DQfD’s performance. DQfD learns to out-perform the best demonstration given in 14 of 42 games. In addition, DQfD leverages human demonstrations to achieve state-of-the-art results for 11 games. Finally, we show that DQfD performs better than three related algorithms for incorporating demonstration data into DQN.",
"title": ""
}
] | [
{
"docid": "715d63ebb1316f7c35fd98871297b7d9",
"text": "1. Associate Professor of Oncology of the State University of Ceará; Clinical Director of the Cancer Hospital of Ceará 2. Resident in Urology of Urology Department of the Federal University of Ceará 3. Associate Professor of Urology of the State University of Ceará; Assistant of the Division of Uro-Oncology, Cancer Hospital of Ceará 4. Professor of Urology Department of the Federal University of Ceará; Chief of Division of Uro-Oncology, Cancer Hospital of Ceará",
"title": ""
},
{
"docid": "771611dc99e22b054b936fce49aea7fc",
"text": "Count-based exploration algorithms are known to perform near-optimally when used in conjunction with tabular reinforcement learning (RL) methods for solving small discrete Markov decision processes (MDPs). It is generally thought that count-based methods cannot be applied in high-dimensional state spaces, since most states will only occur once. Recent deep RL exploration strategies are able to deal with high-dimensional continuous state spaces through complex heuristics, often relying on optimism in the face of uncertainty or intrinsic motivation. In this work, we describe a surprising finding: a simple generalization of the classic count-based approach can reach near state-of-the-art performance on various high-dimensional and/or continuous deep RL benchmarks. States are mapped to hash codes, which allows counting their occurrences with a hash table. These counts are then used to compute a reward bonus according to the classic count-based exploration theory. We find that simple hash functions can achieve surprisingly good results on many challenging tasks. Furthermore, we show that a domain-dependent learned hash code may further improve these results. Detailed analysis reveals important aspects of a good hash function: 1) having appropriate granularity and 2) encoding information relevant to solving the MDP. This exploration strategy achieves near state-of-the-art performance on both continuous control tasks and Atari 2600 games, hence providing a simple yet powerful baseline for solving MDPs that require considerable exploration.",
"title": ""
},
{
"docid": "3e66d3e2674bdaa00787259ac99c3f68",
"text": "Dempster-Shafer theory offers an alternative to traditional probabilistic theory for the mathematical representation of uncertainty. The significant innovation of this framework is that it allows for the allocation of a probability mass to sets or intervals. Dempster-Shafer theory does not require an assumption regarding the probability of the individual constituents of the set or interval. This is a potentially valuable tool for the evaluation of risk and reliability in engineering applications when it is not possible to obtain a precise measurement from experiments, or when knowledge is obtained from expert elicitation. An important aspect of this theory is the combination of evidence obtained from multiple sources and the modeling of conflict between them. This report surveys a number of possible combination rules for Dempster-Shafer structures and provides examples of the implementation of these rules for discrete and interval-valued data.",
"title": ""
},
{
"docid": "b6f9d5015fddbf92ab44ae6ce2f7d613",
"text": "Emojis are small images that are commonly included in social media text messages. The combination of visual and textual content in the same message builds up a modern way of communication, that automatic systems are not used to deal with. In this paper we extend recent advances in emoji prediction by putting forward a multimodal approach that is able to predict emojis in Instagram posts. Instagram posts are composed of pictures together with texts which sometimes include emojis. We show that these emojis can be predicted by using the text, but also using the picture. Our main finding is that incorporating the two synergistic modalities, in a combined model, improves accuracy in an emoji prediction task. This result demonstrates that these two modalities (text and images) encode different information on the use of emojis and therefore can complement each other.",
"title": ""
},
{
"docid": "c2195ae053d1bbf712c96a442a911e31",
"text": "This paper introduces a new method to solve the cross-domain recognition problem. Different from the traditional domain adaption methods which rely on a global domain shift for all classes between the source and target domains, the proposed method is more flexible to capture individual class variations across domains. By adopting a natural and widely used assumption that the data samples from the same class should lay on an intrinsic low-dimensional subspace, even if they come from different domains, the proposed method circumvents the limitation of the global domain shift, and solves the cross-domain recognition by finding the joint subspaces of the source and target domains. Specifically, given labeled samples in the source domain, we construct a subspace for each of the classes. Then we construct subspaces in the target domain, called anchor subspaces, by collecting unlabeled samples that are close to each other and are highly likely to belong to the same class. The corresponding class label is then assigned by minimizing a cost function which reflects the overlap and topological structure consistency between subspaces across the source and target domains, and within the anchor subspaces, respectively. We further combine the anchor subspaces to the corresponding source subspaces to construct the joint subspaces. Subsequently, one-versus-rest support vector machine classifiers are trained using the data samples belonging to the same joint subspaces and applied to unlabeled data in the target domain. We evaluate the proposed method on two widely used datasets: 1) object recognition dataset for computer vision tasks and 2) sentiment classification dataset for natural language processing tasks. Comparison results demonstrate that the proposed method outperforms the comparison methods on both datasets.",
"title": ""
},
{
"docid": "a158bd5aaf6c1ea9ac2fcf5a77b24627",
"text": "Adult knowledge of a language involves correctly balancing lexically-based and more language-general patterns. For example, verb argument structures may sometimes readily generalize to new verbs, yet with particular verbs may resist generalization. From the perspective of acquisition, this creates significant learnability problems, with some researchers claiming a crucial role for verb semantics in the determination of when generalization may and may not occur. Similarly, there has been debate regarding how verb-specific and more generalized constraints interact in sentence processing and on the role of semantics in this process. The current work explores these issues using artificial language learning. In three experiments using languages without semantic cues to verb distribution, we demonstrate that learners can acquire both verb-specific and verb-general patterns, based on distributional information in the linguistic input regarding each of the verbs as well as across the language as a whole. As with natural languages, these factors are shown to affect production, judgments and real-time processing. We demonstrate that learners apply a rational procedure in determining their usage of these different input statistics and conclude by suggesting that a Bayesian perspective on statistical learning may be an appropriate framework for capturing our findings.",
"title": ""
},
{
"docid": "42c0f8504f26d46a4cc92d3c19eb900d",
"text": "Research into suicide prevention has been hampered by methodological limitations such as low sample size and recall bias. Recently, Natural Language Processing (NLP) strategies have been used with Electronic Health Records to increase information extraction from free text notes as well as structured fields concerning suicidality and this allows access to much larger cohorts than previously possible. This paper presents two novel NLP approaches – a rule-based approach to classify the presence of suicide ideation and a hybrid machine learning and rule-based approach to identify suicide attempts in a psychiatric clinical database. Good performance of the two classifiers in the evaluation study suggest they can be used to accurately detect mentions of suicide ideation and attempt within free-text documents in this psychiatric database. The novelty of the two approaches lies in the malleability of each classifier if a need to refine performance, or meet alternate classification requirements arises. The algorithms can also be adapted to fit infrastructures of other clinical datasets given sufficient clinical recording practice knowledge, without dependency on medical codes or additional data extraction of known risk factors to predict suicidal behaviour.",
"title": ""
},
{
"docid": "7440101e3a6ff726c5c7a40f83d25816",
"text": "The polar format algorithm (PFA) for spotlight synthetic aperture radar (SAR) is based on a linear approximation for the differential range to a scatterer. We derive a second-order Taylor series approximation of the differential range. We provide a simple and concise derivation of both the far-field linear approximation of the differential range, which forms the basis of the PFA, and the corresponding approximation limits based on the second-order terms of the approximation.",
"title": ""
},
{
"docid": "3d4afb9ed09fbb6200175e2440b56755",
"text": "A brief account is given of the discovery of abscisic acid (ABA) in roots and root caps of higher plants as well as the techniques by which ABA may be demonstrated in these tissues. The remainder of the review is concerned with examining the rôle of ABA in the regulation of root growth. In this regard, it is well established that when ABA is supplied to roots their elongation is usually inhibited, although at low external concentrations a stimulation of growth may also be found. Fewer observations have been directed at exploring the connection between root growth and the level of naturally occurring, endogenous ABA. Nevertheless, the evidence here also suggests that ABA is an inhibitory regulator of root growth. Moreover, ABA appears to be involved in the differential growth that arises in response to a gravitational stimulus. Recent reports that deny a rôle for ABA in root gravitropism are considered inconclusive. The response of roots to osmotic stress and the changes in ABA levels which ensue, are summarised; so are the interrelations between ABA and other hormones, particularly auxin (e.g. indoleacetic acid); both are considered in the context of the root growth and development. Quantitative changes in auxin and ABA levels may together provide the root with a flexible means of regulating its growth.",
"title": ""
},
{
"docid": "4d0b04f546ab5c0d79bb066b1431ff51",
"text": "In this paper, we present an extraction and characterization methodology which allows for the determination, from S-parameter measurements, of the threshold voltage, the gain factor, and the mobility degradation factor, neither requiring data regressions involving multiple devices nor DC measurements. This methodology takes into account the substrate effects occurring in MOSFETs built in bulk technology so that physically meaningful parameters can be obtained. Furthermore, an analysis of the substrate impedance is presented, showing that this parasitic component not only degrades the performance of a microwave MOSFET, but may also lead to determining unrealistic values for the model parameters when not considered during a high-frequency characterization process. Measurements were made on transistors of different lengths, the shortest being 80 nm, in the 10 MHz to 40 GHz frequency range. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "2fea6378ac23711ffa492a4b9c7dac06",
"text": "This paper proposes an acceleration-based robust controller for the motion control problem, i.e., position and force control problems, of a novel series elastic actuator (SEA). A variable stiffness SEA is designed by using soft and hard springs in series so as to relax the fundamental performance limitation of conventional SEAs. Although the proposed SEA intrinsically has several superiorities in force control, its motion control problem, especially the position control problem, is harder than that of conventional stiff actuators and SEAs due to its special mechanical structure. It is shown that the performance of the novel SEA is limited when conventional motion control methods are used. The performance of the steady-state response is significantly improved by using disturbance observer (DOb), i.e., improving the robustness; however, it degrades the transient response by increasing the vibration at the tip point. The vibration of the novel SEA and external disturbances are suppressed by using resonance ratio control (RRC) and arm DOb, respectively. The proposed method can be used in the motion control problem of conventional SEAs as well. The intrinsically safe mechanical structure and high-performance motion control system provide several benefits in industrial applications, e.g., robots can perform dexterous and versatile industrial tasks alongside people in a factory setting. The experimental results show the viability of the proposals.",
"title": ""
},
{
"docid": "c95f7046c21eb185c2582a571ed7d6d4",
"text": "In some people, problematic cell phone use can lead to situations in which they lose control, similar to those observed in other cases of addiction. Although different scales have been developed to assess its severity, we lack an instrument that is able to determine the desire or craving associated with it. Thus, with the objective of evaluating craving for cell phone use, in this study, we develop and present the Mobile Phone Addiction Craving Scale (MPACS). It consists of eight Likert-style items, with 10 response options, referring to possible situations in which the interviewee is asked to evaluate the degree of restlessness that he or she feels if the cell phone is unavailable at the moment. It can be self-administered or integrated in an interview when abuse or problems are suspected. With the existence of a single dimension, reflected in the exploratory factor analysis (EFA), the scale presents adequate reliability and internal consistency (α = 0.919). Simultaneously, we are able to show significantly increased correlations (r = 0.785, p = 0.000) with the Mobile Phone Problematic Use Scale (MPPUS) and state anxiety (r = 0.330, p = 0.000). We are also able to find associations with impulsivity, measured using the urgency, premeditation, perseverance, and sensation seeking scale, particularly in the dimensions of negative urgency (r = 0.303, p = 0.000) and positive urgency (r = 0.290, p = 0.000), which confirms its construct validity. The analysis of these results conveys important discriminant validity among the MPPUS user categories that are obtained using the criteria by Chow et al. (1). The MPACS demonstrates higher levels of craving in persons up to 35 years of age, reversing with age. In contrast, we do not find significant differences among the sexes. 
Finally, a receiver operating characteristic (ROC) analysis allows us to establish the scores from which we are able to determine the different levels of craving, from the absence of craving to that referred to as addiction. Based on these results, we can conclude that this scale is a reliable tool that complements ongoing studies on problematic cell phone use.",
"title": ""
},
{
"docid": "b8d8785968023a38d742abc15c01ee28",
"text": "Cryptocurrencies (or digital tokens, digital currencies, e.g., BTC, ETH, XRP, NEO) have been rapidly gaining ground in use, value, and understanding among the public, bringing astonishing profits to investors. Unlike other money and banking systems, most digital tokens do not require central authorities. Being decentralized poses significant challenges for credit rating. Most ICOs are currently not subject to government regulations, which makes a reliable credit rating system for ICO projects necessary and urgent. In this paper, we introduce ICORATING, the first learning-based cryptocurrency rating system. We exploit natural-language processing techniques to analyze various aspects of 2,251 digital currencies to date, such as white paper content, founding teams, Github repositories, websites, etc. Supervised learning models are used to correlate the life span and the price change of cryptocurrencies with these features. For the best setting, the proposed system is able to identify scam ICO projects with 0.83 precision. We hope this work will help investors identify scam ICOs and attract more efforts in automatically evaluating and analyzing ICO projects.",
"title": ""
},
{
"docid": "4b3d890a8891cd8c84713b1167383f6f",
"text": "The present research tested the hypothesis that concepts of gratitude are prototypically organized and explored whether lay concepts of gratitude are broader than researchers' concepts of gratitude. In five studies, evidence was found that concepts of gratitude are indeed prototypically organized. In Study 1, participants listed features of gratitude. In Study 2, participants reliably rated the centrality of these features. In Studies 3a and 3b, participants perceived that a hypothetical other was experiencing more gratitude when they read a narrative containing central as opposed to peripheral features. In Study 4, participants remembered more central than peripheral features in gratitude narratives. In Study 5a, participants generated more central than peripheral features when they wrote narratives about a gratitude incident, and in Studies 5a and 5b, participants generated both more specific and more generalized types of gratitude in similar narratives. Throughout, evidence showed that lay conceptions of gratitude are broader than current research definitions.",
"title": ""
},
{
"docid": "7a62e5e29b9450280391a95145216877",
"text": "We propose a deep feed-forward neural network architecture for pixel-wise semantic scene labeling. It uses a novel recursive neural network architecture for context propagation, referred to as rCPN. It first maps the local visual features into a semantic space followed by a bottom-up aggregation of local information into a global representation of the entire image. Then a top-down propagation of the aggregated information takes place that enhances the contextual information of each local feature. Therefore, the information from every location in the image is propagated to every other location. Experimental results on Stanford background and SIFT Flow datasets show that the proposed method outperforms previous approaches. It is also orders of magnitude faster than previous methods and takes only 0.07 seconds on a GPU for pixel-wise labeling of a 256 x 256 image starting from raw RGB pixel values, given the super-pixel mask that takes an additional 0.3 seconds using an off-the-shelf implementation.",
"title": ""
},
{
"docid": "4dc9360837b5793a7c322f5b549fdeb1",
"text": "Today, event logs contain vast amounts of data that can easily overwhelm a human. Therefore, mining patterns from event logs is an important system management task. This paper presents a novel clustering algorithm for log file data sets which helps one to detect frequent patterns from log files, to build log file profiles, and to identify anomalous log file lines. Keywords—system monitoring, data mining, data clustering",
"title": ""
},
{
"docid": "40d8c7f1d24ef74fa34be7e557dca920",
"text": "The rapidly changing Internet environment has formed a competitive business setting, which provides opportunities for conducting businesses online. The availability of online transaction systems enables users to buy and make payment for products and services using the Internet platform. Thus, customers’ involvement in online purchasing has become an important trend. However, since the market is comprised of many different people and cultures, with diverse viewpoints, e-commerce businesses are being challenged by the reality of complex consumer behavior. Therefore, it is vital to identify the factors that affect consumers’ purchasing decisions through e-commerce in respective cultures and societies. In response to this claim, the purpose of this study is to explore the factors affecting customers’ purchasing decisions through e-commerce (online shopping). Several factors such as trust, satisfaction, return policy, cash on delivery, after-sale service, cash-back warranty, business reputation, and social and individual attitude are considered. At this stage, the factors mentioned above, which are commonly considered in the literature to influence purchasing decisions through online shopping, are hypothesized to measure the causal relationship within the framework.",
"title": ""
},
{
"docid": "0048b244bd55a724f9bcf4dbf5e551a8",
"text": "In the research reported here, we investigated the debiasing effect of mindfulness meditation on the sunk-cost bias. We conducted four studies (one correlational and three experimental); the results suggest that increased mindfulness reduces the tendency to allow unrecoverable prior costs to influence current decisions. Study 1 served as an initial correlational demonstration of the positive relationship between trait mindfulness and resistance to the sunk-cost bias. Studies 2a and 2b were laboratory experiments examining the effect of a mindfulness-meditation induction on increased resistance to the sunk-cost bias. In Study 3, we examined the mediating mechanisms of temporal focus and negative affect, and we found that the sunk-cost bias was attenuated by drawing one's temporal focus away from the future and past and by reducing state negative affect, both of which were accomplished through mindfulness meditation.",
"title": ""
},
{
"docid": "eb7582d78766ce274ba899ad2219931f",
"text": "BACKGROUND\nPrecise determination of breast volume facilitates reconstructive procedures and helps in the planning of tissue removal for breast reduction surgery. Various methods currently used to measure breast size are limited by technical drawbacks and unreliable volume determinations. The purpose of this study was to develop a formula to predict breast volume based on straightforward anthropomorphic measurements.\n\n\nMETHODS\nOne hundred one women participated in this study. Eleven anthropomorphic measurements were obtained on 202 breasts. Breast volumes were determined using a water displacement technique. Multiple stepwise linear regression was used to determine predictive variables and a unifying formula.\n\n\nRESULTS\nMean patient age was 37.7 years, with a mean body mass index of 31.8. Mean breast volumes on the right and left sides were 1328 and 1305 cc, respectively (range, 330 to 2600 cc). The final regression model incorporated the variables of breast base circumference in a standing position and a vertical measurement from the inframammary fold to a point representing the projection of the fold onto the anterior surface of the breast. The derived formula showed an adjusted R of 0.89, indicating that almost 90 percent of the variation in breast size was explained by the model.\n\n\nCONCLUSION\nSurgeons may find this formula a practical and relatively accurate method of determining breast volume.",
"title": ""
},
{
"docid": "16cae1a2fe1c42b150b9bca8fd1a3289",
"text": "Monte Carlo Tree Search (MCTS) has produced many recent breakthroughs in game AI research, particularly in computer Go. In this paper we consider how MCTS can be applied to create engaging AI for a popular commercial mobile phone game: Spades by AI Factory, which has been downloaded more than 2.5 million times. In particular, we show how MCTS can be integrated with knowledge-based methods to create an interesting, fun and strong player which makes far fewer plays that could be perceived by human observers as blunders than MCTS without the injection of knowledge. These blunders are particularly noticeable for Spades, where a human player must co-operate with an AI partner. MCTS gives objectively stronger play than the knowledge-based approach used in previous versions of the game and offers the flexibility to customise behaviour whilst maintaining a reusable core, with a reduced development cycle compared to purely knowledge-based techniques. Monte Carlo Tree Search (MCTS) is a family of game tree search algorithms that have advanced the state-of-theart in AI for a variety of challenging games, as surveyed in (Browne et al. 2012). Of particular note is the success of MCTS in the Chinese board game Go (Lee, Müller, and Teytaud 2010). MCTS has many appealing properties for decision making in games. It is an anytime algorithm that can effectively use whatever computation time is available. It also often performs well without any special knowledge or tuning for a particular game, although knowledge can be injected if desired to improve the AI’s strength or modify its playing style. These properties are attractive to a developer of a commercial game, where an AI that is perceived as high quality by players can be developed with significantly less effort than using purely knowledge-based AI methods. 
This paper presents findings from a collaboration between academic researchers and an independent game development company to integrate MCTS into a highly successful commercial version of the card game Spades for mobile devices running the Android operating system. Most previous work on MCTS uses win rate against a fixed AI opponent as the key metric of success. This is appropriate when the aim is to win tournaments or to demonstrate MCTS's ability to approximate optimal play. However for a commercial game, actual win rate is less important than how engaging the AI is for the players. For example if the AI is generally strong but occasionally makes moves that appear weak to a competent player, then the player's enjoyment of the game is diminished. This is particularly important for games such as Spades where the player must cooperate with an AI partner whose apparent errors result in losses for the human player. In this paper we combine MCTS with knowledge-based approaches with the goal of creating an AI player that is not only strong in objective terms but is also perceived as strong by players. AI Factory is an independent UK-based company, incorporated in April 2003. AI Factory has developed a successful implementation of the popular card game Spades, which to date has been downloaded more than 2.5 million times and has an average review score of 4.5/5 from more than 78 000 reviews on the Google Play store. The knowledge-based AI used in previous versions plays competitively and has been well reviewed by users. This AI was developed using expert knowledge of the game and contains a large number of heuristics developed and tested over a period of 10 years. 
Much of the decision making is governed by these heuristics which are used to decide bids, infer what cards other players may hold, predict what cards other players may be likely to play and to decide what card to play. In AI Factory Spades, players interact with two AI opponents and one AI partner. Players can select their partners and opponents from a number of AI characters, each with a strength rating from 1 to 5 stars. Gameplay data shows that relatively few players choose intermediate level opponents: occasional or beginning players tend to choose 1-star opponents, whereas those players who play the game most frequently play almost exclusively against 5-star opponents. Presumably these are experienced card game players seeking a challenge. However some have expressed disappointment with the 5-star AI: although strong overall, it occasionally makes apparently bad moves. Our work provides strong evidence for a belief commonly held amongst game developers: the objective measures of strength (such as win rate) often used in the academic study of AI do not necessarily provide a good metric for quality from a commercial AI perspective. The moves chosen by the AI may or may not be suboptimal in a game theoretic sense, but it is clear from player feedback that humans apply some intuition about which moves are good or bad. It is an unsatisfying experience when the AI makes moves which violate this intuition, except possibly where violating this intuition is a correct play, but even then this appears to lead to player dissatisfaction. The primary motivation for this work is to improve the strongest levels of AI play to satisfy experienced players, both in terms of the objective strength of the AI and in how convincing the chosen moves appear. Previous work has adapted MCTS to games which, like Spades, involve hidden information. 
This has led to the development of the Information Set Monte Carlo Tree Search (ISMCTS) family of algorithms (Cowling, Powley, and Whitehouse 2012). ISMCTS achieves a higher win rate than a knowledge-based AI developed by AI Factory for the Chinese card game Dou Di Zhu, and also performs well in other domains. ISMCTS uses determinizations, randomisations of the current game state which correspond to guessing hidden information. Each determinization is a game state that could conceivably be the actual current state, given the AI player’s observations so far. In Spades, a determinization is generated by randomly distributing the unseen cards amongst the other players. Each ISMCTS iteration is restricted to a newly generated determinization, resulting in a single tree that collects statistics from many determinizations. We demonstrate that the ISMCTS algorithm provides strong levels of play for Spades. However, previous work on ISMCTS has not dealt with the requirements for a commercially viable AI. Consequently, further research and development was needed in order to ensure the AI is perceived to be high quality by users. However, the effort required to inject knowledge into MCTS was small compared to the work needed to develop a heuristic-based AI from scratch. MCTS therefore shows great promise as a reusable basis for AI in commercial games. The ISMCTS player described in this paper is used in the currently available version of AI Factory Spades for the 4and 5-star AI levels, and AI Factory have already begun using the same code and techniques in products under development. This paper is structured as follows. We begin by outlining the rules of Spades and describing the knowledge-based approach used in AI Factory Spades. We then discuss some of the issues encountered in integrating MCTS with an existing mature codebase, and in running MCTS on mobile platforms with limited processor power and memory. 
We assess our MCTS player in terms of both raw playing strength and player engagement. We conclude with some thoughts on the promise of MCTS for future commercial games.",
"title": ""
}
] | scidocsrr |
a857e42a4a0e2239a01c6dbf6af91f14 | Multi-task, Multi-Kernel Learning for Estimating Individual Wellbeing | [
{
"docid": "c8b1a0d5956ced6deaefe603efc523ba",
"text": "What can wearable sensors and usage of smart phones tell us about academic performance, self-reported sleep quality, stress and mental health condition? To answer this question, we collected extensive subjective and objective data using mobile phones, surveys, and wearable sensors worn day and night from 66 participants, for 30 days each, totaling 1,980 days of data. We analyzed daily and monthly behavioral and physiological patterns and identified factors that affect academic performance (GPA), Pittsburg Sleep Quality Index (PSQI) score, perceived stress scale (PSS), and mental health composite score (MCS) from SF-12, using these month-long data. We also examined how accurately the collected data classified the participants into groups of high/low GPA, good/poor sleep quality, high/low self-reported stress, high/low MCS using feature selection and machine learning techniques. We found associations among PSQI, PSS, MCS, and GPA and personality types. Classification accuracies using the objective data from wearable sensors and mobile phones ranged from 67-92%.",
"title": ""
},
{
"docid": "3bb905351ce1ea2150f37059ed256a90",
"text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"title": ""
}
] | [
{
"docid": "c77d76834c3aa8ace82cb15b6f882365",
"text": "A multidatabase system provides integrated access to heterogeneous, autonomous local databases in a distributed system. An important problem in current multidatabase systems is identification of semantically similar data in different local databases. The Summary Schemas Model (SSM) is proposed as an extension to multidatabase systems to aid in semantic identification. The SSM uses a global data structure to abstract the information available in a multidatabase system. This abstracted form allows users to use their own terms (imprecise queries) when accessing data rather than being forced to use system-specified terms. The system uses the global data structure to match the user's terms to the semantically closest available system terms. A simulation of the SSM is presented to compare imprecise-query processing with corresponding query-processing costs in a standard multidatabase system. The costs and benefits of the SSM are discussed, and future research directions are presented.",
"title": ""
},
{
"docid": "7021db9b0e77b2df2576f0cc5eda8d7d",
"text": "Provides an abstract of the tutorial presentation and a brief professional biography of the presenter. The complete presentation was not made available for publication as part of the conference proceedings.",
"title": ""
},
{
"docid": "8ad57ca3fa0063033fae25e4bad0a90e",
"text": "The neural network, using an unsupervised generalized Hebbian algorithm (GHA), is adopted to find the principal eigenvectors of a covariance matrix in different kinds of seismograms. We have shown that the extensive computer results of the principal components analysis (PCA) using the neural net of GHA can extract the information of seismic reflection layers and uniform neighboring traces. The analyzed seismic data are the seismic traces with 20-, 25-, and 30-Hz Ricker wavelets, the fault, the reflection and diffraction patterns after normal moveout (NMO) correction, the bright spot pattern, and the real seismogram at Mississippi Canyon. The properties of high amplitude, low frequency, and polarity reversal can be shown from the projections on the principal eigenvectors. For PCA, a theorem is proposed, which states that adding an extra point along the direction of the existing eigenvector can enhance that eigenvector. The theorem is applied to the interpretation of a fault seismogram and the uniform property of other seismograms. The PCA also provides a significant seismic data compression.",
"title": ""
},
{
"docid": "e8f3dd4d2758da22d54114ec021b56dd",
"text": "Social networks allow rapid spread of ideas and innovations while negative information can also propagate widely. When cascades with different opinions reach the same user, the cascade arriving first is the most likely to be taken by the user. Therefore, once misinformation or rumor is detected, a natural containment method is to introduce a positive cascade competing against the rumor. Given a budget k, the rumor blocking problem asks for k seed users to trigger the spread of the positive cascade such that the number of the users who are not influenced by rumor can be maximized. Prior works have shown that the rumor blocking problem can be approximated within a factor of (1 − 1/e − δ) by a classic greedy algorithm combined with Monte Carlo simulation with the running time of O(k^3 mn ln n/δ^2), where n and m are the number of users and edges, respectively. Unfortunately, the Monte-Carlo-simulation-based methods are extremely time consuming and the existing algorithms either trade performance guarantees for practical efficiency or vice versa. In this paper, we present a randomized algorithm which runs in O(km ln n/δ^2) expected time and provides a (1 − 1/e − δ)-approximation with a high probability. The experimental results on both real-world and synthetic social networks have shown that the proposed randomized rumor blocking algorithm is much more efficient than the state-of-the-art method and it is able to find seed nodes which are effective in limiting the spread of rumor.",
"title": ""
},
{
"docid": "6b6f82399472a6f019c506a549f5ffe6",
"text": "T. Ribot's (1881) law of retrograde amnesia states that brain damage impairs recently formed memories to a greater extent than older memories, which is generally taken to imply that memories need time to consolidate. A. Jost's (1897) law of forgetting states that if 2 memories are of the same strength but different ages, the older will decay more slowly than the younger. The main theoretical implication of this venerable law has never been worked out, but it may be the same as that implied by Ribot's law. A consolidation interpretation of Jost's law implies an interference theory of forgetting that is altogether different from the cue-overload view that has dominated thinking in the field of psychology for decades.",
"title": ""
},
{
"docid": "3ccc5fd5bbf570a361b40afca37cec92",
"text": "Face detection techniques have been developed for decades, and one of the remaining open challenges is detecting small faces in unconstrained conditions. The reason is that tiny faces often lack detailed information and are blurred. In this paper, we propose an algorithm to directly generate a clear high-resolution face from a blurry small one by adopting a generative adversarial network (GAN). Toward this end, the basic GAN formulation achieves it by super-resolving and refining sequentially (e.g. SR-GAN and cycle-GAN). However, we design a novel network to address the problem of super-resolving and refining jointly. We also introduce new training losses to guide the generator network to recover fine details and to promote the discriminator network to distinguish real vs. fake and face vs. non-face simultaneously. Extensive experiments on the challenging dataset WIDER FACE demonstrate the effectiveness of our proposed method in restoring a clear high-resolution face from a blurry small one, and show that the detection performance outperforms other state-of-the-art methods.",
"title": ""
},
{
"docid": "d311bfc22c30e860c529b2aeb16b6d40",
"text": "We study the emergence of communication in multiagent adversarial settings inspired by the classic Imitation game. A class of three player games is used to explore how agents based on sequence to sequence (Seq2Seq) models can learn to communicate information in adversarial settings. We propose a modeling approach, an initial set of experiments and use signaling theory to support our analysis. In addition, we describe how we operationalize the learning process of actor-critic Seq2Seq based agents in these communicational games.",
"title": ""
},
{
"docid": "8d7cb4e8fd243f3cd091c1866a18fc5c",
"text": "We develop graphene-based devices fabricated by alternating current dielectrophoresis (ac-DEP) for highly sensitive nitric oxide (NO) gas detection. The novel device comprises the sensitive channels of palladium-decorated reduced graphene oxide (Pd-RGO) and the electrodes covered with chemical vapor deposition (CVD)-grown graphene. The highly sensitive, recoverable, and reliable detection of NO gas ranging from 2 to 420 ppb with response time of several hundred seconds has been achieved at room temperature. The facile and scalable route for high performance suggests a promising application of graphene devices toward the human exhaled NO and environmental pollutant detections.",
"title": ""
},
{
"docid": "99381ce7535bb8e654b276c0a4e06432",
"text": "Steganography, coming from the Greek words stegos, meaning roof or covered and graphia which means writing, is the art and science of hiding the fact that communication is taking place. Using steganography, you can embed a secret message inside a piece of unsuspicious information and send it without anyone knowing of the existence of the secret message. Steganography and cryptography are closely related. Cryptography scrambles messages so they cannot be understood. Steganography on the other hand, will hide the message so there is no knowledge of the existence of the message in the first place. In some situations, sending an encrypted message will arouse suspicion while an ”invisible” message wil not do so. Both sciences can be combined to produce better protection of the message. In this case, when the steganography fails and the message can be detected, it is still of no use as it is encrypted using cryptography techniques. Therefore, the principle defined once by Kerckhoffs for cryptography, also stands for steganography: the quality of a cryptographic system should only depend on a small part of information, namely the secret key. The same is valid for good steganographic systems: knowledge of the system that is used, should not give any information about the existence of hidden messages. Finding a message should only be possible with knowledge of the key that is required to uncover it.",
"title": ""
},
{
"docid": "080f76412f283fb236c28678bf9dada8",
"text": "We describe a new algorithm for robot localization, efficient both in terms of memory and processing time. It transforms a stream of laser range sensor data into a probabilistic calculation of the robot’s position, using a bidirectional Long Short-Term Memory (LSTM) recurrent neural network (RNN) to learn the structure of the environment and to answer queries such as: in which room is the robot? To achieve this, the RNN builds an implicit map of the environment.",
"title": ""
},
{
"docid": "d79688b7906c34e7b74a9e93ee3f639e",
"text": "We explore dierent approaches to integrating a simple convolutional neural network (CNN) with the Lucene search engine in a multi-stage ranking architecture. Our models are trained using the PyTorch deep learning toolkit, which is implemented in C/C++ with a Python frontend. One obvious integration strategy is to expose the neural network directly as a service. For this, we use Apache ri, a soware framework for building scalable cross-language services. In exploring alternative architectures, we observe that once trained, the feedforward evaluation of neural networks is quite straightforward. erefore, we can extract the parameters of a trained CNN from PyTorch and import the model into Java, taking advantage of the Java Deeplearning4J library for feedforward evaluation. is has the advantage that the entire end-to-end system can be implemented in Java. As a third approach, we can extract the neural network from PyTorch and “compile” it into a C++ program that exposes a ri service. We evaluate these alternatives in terms of performance (latency and throughput) as well as ease of integration. Experiments show that feedforward evaluation of the convolutional neural network is signicantly slower in Java, while the performance of the compiled C++ network does not consistently beat the PyTorch implementation.",
"title": ""
},
{
"docid": "fb0fa5f3b6d2391495eb1a6a7c63b0fc",
"text": "The demographic change towards an ageing population is introducing significant impact and drastic challenge to our society. We therefore need to find ways to assist older people to stay independently and prevent social isolation of these population. Information and Communication Technologies (ICT) can provide various solutions to help older adults to improve their quality of life, stay healthier, and live independently for longer time. The term of Ambient Assist Living (AAL) becomes a field to investigate innovative technologies to provide assistance as well as healthcare and rehabilitation to senior people with impairment. The paper provides a review of research background and technologies of AAL.",
"title": ""
},
{
"docid": "472605bc322f1fd2c90ad50baf19fffb",
"text": "Wireless sensor networks (WSNs) use the unlicensed industrial, scientific, and medical (ISM) band for transmissions. However, with the increasing usage and demand of these networks, the currently available ISM band does not suffice for their transmissions. This spectrum insufficiency problem has been overcome by incorporating the opportunistic spectrum access capability of cognitive radio (CR) into the existing WSN, thus giving birth to CR sensor networks (CRSNs). The sensor nodes in CRSNs depend on power sources that have limited power supply capabilities. Therefore, advanced and intelligent radio resource allocation schemes are very essential to perform dynamic and efficient spectrum allocation among sensor nodes and to optimize the energy consumption of each individual node in the network. Radio resource allocation schemes aim to ensure QoS guarantee, maximize the network lifetime, reduce the internode and internetwork interferences, etc. In this paper, we present a survey of the recent advances in radio resource allocation in CRSNs. Radio resource allocation schemes in CRSNs are classified into three major categories, i.e., centralized, cluster-based, and distributed. The schemes are further divided into several classes on the basis of performance optimization criteria that include energy efficiency, throughput maximization, QoS assurance, interference avoidance, fairness and priority consideration, and hand-off reduction. An insight into the related issues and challenges is provided, and future research directions are clearly identified.",
"title": ""
},
{
"docid": "6e67329e4f678ae9dc04395ae0a5b832",
"text": "This review covers recent developments in the social influence literature, focusing primarily on compliance and conformity research published between 1997 and 2002. The principles and processes underlying a target's susceptibility to outside influences are considered in light of three goals fundamental to rewarding human functioning. Specifically, targets are motivated to form accurate perceptions of reality and react accordingly, to develop and preserve meaningful social relationships, and to maintain a favorable self-concept. Consistent with the current movement in compliance and conformity research, this review emphasizes the ways in which these goals interact with external forces to engender social influence processes that are subtle, indirect, and outside of awareness.",
"title": ""
},
{
"docid": "63cc929e358746526b157ded5ff4b2c8",
"text": "This paper asks how internet use, citizen satisfaction with e-government and citizen trust in government are interrelated. Prior research has found that agencies stress information and service provision on the Web (oneway e-government strategy), but have generally ignore applications that would enhance citizen-government interaction (two-way e-government strategy). Based on a review of the literature, we develop hypotheses about how two facets of e-democracy – transparency and interactivity – may affect citizen trust in government. Using data obtained from the Council on Excellence in Government, we apply a two stage multiple equation model. Findings indicate that internet use is positively associated with transparency satisfaction but negatively associated with interactivity satisfaction, and that both interactivity and transparency are positively associated with citizen trust in government. We conclude that the one-way e-transparency strategy may be insufficient, and that in the future agencies should make and effort to enhance e-interactivity.",
"title": ""
},
{
"docid": "a1c859b44c46ebf4d2d413f4303cb4f7",
"text": "We study the parsing complexity of Combinatory Categorial Grammar (CCG) in the formalism of Vijay-Shanker and Weir (1994). As our main result, we prove that any parsing algorithm for this formalism will take in the worst case exponential time when the size of the grammar, and not only the length of the input sentence, is included in the analysis. This sets the formalism of Vijay-Shanker andWeir (1994) apart from weakly equivalent formalisms such as Tree-Adjoining Grammar (TAG), for which parsing can be performed in time polynomial in the combined size of grammar and input sentence. Our results contribute to a refined understanding of the class of mildly context-sensitive grammars, and inform the search for new, mildly context-sensitive versions of CCG.",
"title": ""
},
{
"docid": "8fb10190ba586026ff5235432c438c47",
"text": "This paper presents the various crop yield prediction methods using data mining techniques. Agricultural system is very complex since it deals with large data situation which comes from a number of factors. Crop yield prediction has been a topic of interest for producers, consultants, and agricultural related organizations. In this paper our focus is on the applications of data mining techniques in agricultural field. Different Data Mining techniques such as K-Means, K-Nearest Neighbor(KNN), Artificial Neural Networks(ANN) and Support Vector Machines(SVM) for very recent applications of data mining techniques in agriculture field. Data mining technology has received a great progress with the rapid development of computer science, artificial intelligence. Data Mining is an emerging research field in agriculture crop yield analysis. Data Mining is the process of identifying the hidden patterns from large amount of data. Yield prediction is a very important agricultural problem that remains to be solved based on the available data. The problem of yield prediction can be solved by employing data mining techniques.",
"title": ""
},
{
"docid": "69de2f8098a0618c75baeb259cb94ca1",
"text": "Medicine may stand at the cusp of a mobile transformation. Mobile health, or “mHealth,” is the use of portable devices such as smartphones and tablets for medical purposes, including diagnosis, treatment, or support of general health and well-being. Users can interface with mobile devices through software applications (“apps”) that typically gather input from interactive questionnaires, separate medical devices connected to the mobile device, or functionalities of the device itself, such as its camera, motion sensor, or microphone. Apps may even process these data with the use of medical algorithms or calculators to generate customized diagnoses and treatment recommendations. Mobile devices make it possible to collect more granular patient data than can be collected from devices that are typically used in hospitals or physicians’ offices. The experiences of a single patient can then be measured against large data sets to provide timely recommendations about managing both acute symptoms and chronic conditions.1,2 To give but a few examples: One app allows users who have diabetes to plug glucometers into their iPhones as it tracks insulin doses and sends alerts for abnormally high or low blood sugar levels.3,4 Another app allows patients to use their smartphones to record electrocardiograms,5 using a single lead that snaps to the back of the phone. Users can hold the phone against their chests, record cardiac events, and transmit results to their cardiologists.6 An imaging app allows users to analyze diagnostic images in multiple modalities, including positronemission tomography, computed tomography, magnetic resonance imaging, and ultrasonography.7 An even greater number of mHealth products perform health-management functions, such as medication reminders and symptom checkers, or administrative functions, such as patient scheduling and billing. The volume and variety of mHealth products are already immense and defy any strict taxonomy. 
More than 97,000 mHealth apps were available as of March 2013, according to one estimate.8 The number of mHealth apps, downloads, and users almost doubles every year.9 Some observers predict that by 2018 there could be 1.7 billion mHealth users worldwide.8 Thus, mHealth technologies could have a profound effect on patient care. However, mHealth has also become a challenge for the Food and Drug Administration (FDA), the regulator responsible for ensuring that medical devices are safe and effective. The FDA’s oversight of mHealth devices has been controversial to members of Congress and industry,10 who worry that “applying a complex regulatory framework could inhibit future growth and innovation in this promising market.”11 But such oversight has become increasingly important. A bewildering array of mHealth products can make it difficult for individual patients or physicians to evaluate their quality or utility. In recent years, a number of bills have been proposed in Congress to change FDA jurisdiction over mHealth products, and in April 2014, a key federal advisory committee laid out its recommendations for regulating mHealth and other health-information technologies.12 With momentum toward legislation building, this article focuses on the public health benefits and risks of mHealth devices under FDA jurisdiction and considers how to best use the FDA’s authority.",
"title": ""
},
{
"docid": "bb8ca605a714d71be903d46bf6e1fa40",
"text": "Several methods have been proposed for automatic and objective monitoring of food intake, but their performance suffers in the presence of speech and motion artifacts. This paper presents a novel sensor system and algorithms for detection and characterization of chewing bouts from a piezoelectric strain sensor placed on the temporalis muscle. The proposed data acquisition device was incorporated into the temple of eyeglasses. The system was tested by ten participants in two part experiments, one under controlled laboratory conditions and the other in unrestricted free-living. The proposed food intake recognition method first performed an energy-based segmentation to isolate candidate chewing segments (instead of using epochs of fixed duration commonly reported in research literature), with the subsequent classification of the segments by linear support vector machine models. On participant level (combining data from both laboratory and free-living experiments), with ten-fold leave-one-out cross-validation, chewing were recognized with average F-score of 96.28% and the resultant area under the curve was 0.97, which are higher than any of the previously reported results. A multivariate regression model was used to estimate chew counts from segments classified as chewing with an average mean absolute error of 3.83% on participant level. These results suggest that the proposed system is able to identify chewing segments in the presence of speech and motion artifacts, as well as automatically and accurately quantify chewing behavior, both under controlled laboratory conditions and unrestricted free-living.",
"title": ""
},
{
"docid": "acafc9d077d48511ea351ded56527df9",
"text": "The problem of testing programs without test oracles is well known. A commonly used approach is to use special values in testing but this is often insufficient to ensure program correctness. This paper demonstrates the use of metamorphic testing to uncover faults in programs, which could not be detected by special test values. Metamorphic testing can be used as a complementary test method to special value testing. In this paper, the sine function and a search function are used as examples to demonstrate the usefulness of metamorphic testing. This paper also examines metamorphic relationships and the extent of their usefulness in program testing.",
"title": ""
}
] | scidocsrr |
8890d941123da99a28bbdfe2b12638ca | QoE and power efficiency tradeoff for fog computing networks with fog node cooperation | [
{
"docid": "37be9e992a6a99af165f7c6ddbbed36d",
"text": "The past 15 years have seen the rise of the Cloud, along with rapid increase in Internet backbone traffic and more sophisticated cellular core networks. There are three different types of “Clouds:” (1) data center, (2) backbone IP network and (3) cellular core network, responsible for computation, storage, communication and network management. Now the functions of these three types of Clouds are “descending” to be among or near the end users, i.e., to the edge of networks, as “Fog.”",
"title": ""
},
{
"docid": "ae19bd4334434cfb8c5ac015dc8d3bd4",
"text": "Soldiers and front-line personnel operating in tactical environments increasingly make use of handheld devices to help with tasks such as face recognition, language translation, decision-making, and mission planning. These resource constrained edge environments are characterized by dynamic context, limited computing resources, high levels of stress, and intermittent network connectivity. Cyber-foraging is the leverage of external resource-rich surrogates to augment the capabilities of resource-limited devices. In cloudlet-based cyber-foraging, resource-intensive computation and data is offloaded to cloudlets. Forward-deployed, discoverable, virtual-machine-based tactical cloudlets can be hosted on vehicles or other platforms to provide infrastructure to offload computation, provide forward data staging for a mission, perform data filtering to remove unnecessary data from streams intended for dismounted users, and serve as collection points for data heading for enterprise repositories. This paper describes tactical cloudlets and presents experimentation results for five different cloudlet provisioning mechanisms. The goal is to demonstrate that cyber-foraging in tactical environments is possible by moving cloud computing concepts and technologies closer to the edge so that tactical cloudlets, even if disconnected from the enterprise, can provide capabilities that can lead to enhanced situational awareness and decision making at the edge.",
"title": ""
},
{
"docid": "9e4417a0ea21de3ffffb9017f0bad705",
"text": "Distributed optimization algorithms are highly attractive for solving big data problems. In particular, many machine learning problems can be formulated as the global consensus optimization problem, which can then be solved in a distributed manner by the alternating direction method of multipliers (ADMM) algorithm. However, this suffers from the straggler problem as its updates have to be synchronized. In this paper, we propose an asynchronous ADMM algorithm by using two conditions to control the asynchrony: partial barrier and bounded delay. The proposed algorithm has a simple structure and good convergence guarantees (its convergence rate can be reduced to that of its synchronous counterpart). Experiments on different distributed ADMM applications show that asynchrony reduces the time on network waiting, and achieves faster convergence than its synchronous counterpart in terms of the wall clock time.",
"title": ""
}
] | [
{
"docid": "0a7558a172509707b33fcdfaafe0b732",
"text": "Cloud computing has established itself as an alternative IT infrastructure and service model. However, as with all logically centralized resource and service provisioning infrastructures, cloud does not handle well local issues involving a large number of networked elements (IoTs) and it is not responsive enough for many applications that require immediate attention of a local controller. Fog computing preserves many benefits of cloud computing and it is also in a good position to address these local and performance issues because its resources and specific services are virtualized and located at the edge of the customer premise. However, data security is a critical challenge in fog computing especially when fog nodes and their data move frequently in its environment. This paper addresses the data protection and the performance issues by 1) proposing a Region-Based Trust-Aware (RBTA) model for trust translation among fog nodes of regions, 2) introducing a Fog-based Privacy-aware Role Based Access Control (FPRBAC) for access control at fog nodes, and 3) developing a mobility management service to handle changes of users and fog devices' locations. The implementation results demonstrate the feasibility and the efficiency of our proposed framework.",
"title": ""
},
{
"docid": "4bd161b3e91dea05b728a72ade72e106",
"text": "Julio Rodriguez∗ Faculté des Sciences et Techniques de l’Ingénieur (STI), Institut de Microtechnique (IMT), Laboratoire de Production Microtechnique (LPM), Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland and Fakultät für Physik, Universität Bielefeld, D-33501 Bielefeld, Germany ∗Corresponding author. Email: [email protected] and [email protected]",
"title": ""
},
{
"docid": "84d2cb7c4b8e0f835dab1cd3971b60c5",
"text": "Ambient intelligence (AmI) deals with a new world of ubiquitous computing devices, where physical environments interact intelligently and unobtrusively with people. These environments should be aware of people's needs, customizing requirements and forecasting behaviors. AmI environments can be diverse, such as homes, offices, meeting rooms, schools, hospitals, control centers, vehicles, tourist attractions, stores, sports facilities, and music devices. Artificial intelligence research aims to include more intelligence in AmI environments, allowing better support for humans and access to the essential knowledge for making better decisions when interacting with these environments. This article, which introduces a special issue on AmI, views the area from an artificial intelligence perspective.",
"title": ""
},
{
"docid": "88128ec1201e2202f13f2c09da0f07f2",
"text": "A new mechanism is proposed for exciting the magnetic state of a ferromagnet. Assuming ballistic conditions and using WKB wave functions, we predict that a transfer of vectorial spin accompanies an electric current flowing perpendicular to two parallel magnetic films connected by a normal metallic spacer. This spin transfer drives motions of the two magnetization vectors within their instantaneously common plane. Consequent new mesoscopic precession and switching phenomena with potential applications are predicted. PACS: 75.50.Rr; 75.70.Cn A magnetic multilayer (MML) is composed of alternating ferromagnetic and paramagnetic sublayers whose thicknesses usually range between 1 and l0 nm. The discovery in 1988 of gian t magne tore s i s tance (GMR) in such multilayers stimulates much current research [1]. Although the initial reports dealt with currents flowing in the layer planes (CIP), the magnetoresistive phenomenon is known to be even stronger for currents flowing perpendicular to the plane (CPP) [2]. We predict here that the spinpolarized nature of such a perpendicular current generally creates a mutual transference of spin angular momentum between the magnetic sublayers which is manifested in their dynamic response. This response, which occurs only for CPP geometry, we propose to characterize as spin transfer . It can dominate the Larmor response to the magnetic field induced by * Fax: + 1-914-945-3291; email: [email protected]. the current when the magnetic sublayer thickness is about 1 nm and the smaller of its other two dimensions is less than 10= to 10 3 r im. On this mesoscopic scale, two new phenomena become possible: a steady precession driven by a constant current, and alternatively a novel form of switching driven by a pulsed current. Other forms of current-driven magnetic response without the use of any electromagnetically induced magnetic field are already known. 
Reports of both theory and experiments show how the exchange effect of external current flowing through a ferromagnetic domain wall causes it to move [3]. Even closer to the present subject is the magnetic response to tunneling current in the case of the sandwich structure f e r r o m a g n e t / i n s u l a t o r / f e r r o m a g n e t ( F / I / F ) predicted previously [4]. Unfortunately, theoretical relations indicated that the dissipation of energy, and therefore temperature rise, needed to produce more than barely observable spin-transfer through a tunneling barrier is prohibitively large. 0304-8853/96/$15.00 Copyright © 1996 Elsevier Science B.V. All rights reserved. PH S0304-8853(96)00062-5 12 ,/.C, Slo,cgewski / Journal of Magnetism and Magnetic Materials 159 (1996) L/ L7 However. the advent of multilayers incorporating very thin paramagnetic metallic spacers, rather than a barrier, places the realization of spin transfer in a different light. In the first place, the metallic spacer implies a low resistance and therefore low Ohmic dissipation for a given current, to which spin-transfer effects are proportional. Secondly, numerous experiments [5] and theories [6] show that the fundamental interlayer exchange coupling of RKKY type diminishes in strength and varies in sign as spacer thickness increases. Indeed, there exist experimental spacers which are thick enough (e.g. 4 nm) for the exchange coupling to be negligible even though spin relaxation is too weak to significantly diminish the GMR effect which relies on preservation of spin direction during electron transit across the spacer. Moreover, the same fact of long spin relaxation time in magnetic multilayers is illustrated on an even larger distance scale, an order of magnitude greater than the circa 10 nm electron mean free path, by spin injection experiments [7]. 
It follows, as we show below, that interesting current-driven spin-transfer effects are expected under laboratory conditions involving very small distance scales. We begin with simple arguments to explain current-driven spin transfer and establish its physical scale. We then sketch a detailed treatment and summarize its results. Finally, we predict two spin-transfer phenomena: steady magnetic precession driven by a constant current and a novel form of magnetic switching. We consider the five metallic regions represented schematically in Fig. 1. Layers A, B, and C are paramagnetic, whilst F I and F2 are ferromagnetic. The instantaneous macroscopic vectors hS~ and kS 2 forming the included angle 0 represent the respective total spin momenta per unit area of the ferromagnets. Now consider a flow of electrons moving rightward through the sandwich. The works on spin injection [7] show that if the thickness of spacer B is less than the spin-diffusion length, usually at least 100 nm, then some degree of spin polarization along the instantaneous axis parallel to the vector S~ of local ferromagnetic polarization in FI will be present in the electrons impinging on F2. This leads us to consider a three-layer (B, F2, C in Fig. 1) model in which an electron with initial spin state along the direction Sj is incident from S i ~ i S2 ~, EF=0J. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .",
"title": ""
},
{
"docid": "7161122eaa9c9766e9914ba0f2ee66ef",
"text": "Cross-linguistically consistent annotation is necessary for sound comparative evaluation and cross-lingual learning experiments. It is also useful for multilingual system development and comparative linguistic studies. Universal Dependencies is an open community effort to create cross-linguistically consistent treebank annotation for many languages within a dependency-based lexicalist framework. In this paper, we describe v1 of the universal guidelines, the underlying design principles, and the currently available treebanks for 33 languages.",
"title": ""
},
{
"docid": "b741698d7e4d15cb7f4e203f2ddbce1d",
"text": "This study examined the process of how socioeconomic status, specifically parents' education and income, indirectly relates to children's academic achievement through parents' beliefs and behaviors. Data from a national, cross-sectional study of children were used for this study. The subjects were 868 8-12-year-olds, divided approximately equally across gender (436 females, 433 males). This sample was 49% non-Hispanic European American and 47% African American. Using structural equation modeling techniques, the author found that the socioeconomic factors were related indirectly to children's academic achievement through parents' beliefs and behaviors but that the process of these relations was different by racial group. Parents' years of schooling also was found to be an important socioeconomic factor to take into consideration in both policy and research when looking at school-age children.",
"title": ""
},
{
"docid": "f35007fdca9c35b4c243cb58bd6ede7a",
"text": "Photovoltaic Thermal Collector (PVT) is a hybrid generator which converts solar radiation into useful electric and thermal energies simultaneously. This paper gathers all PVT sub-models in order to form a unique dynamic model that reveals PVT parameters interactions. As PVT is a multi-input/output/output system, a state space model based on energy balance equations is developed in order to analyze and assess the parameters behaviors and correlations of PVT constituents. The model simulation is performed using LabVIEW Software. The simulation shows the impact of the fluid flow rate variation on the collector efficiencies (thermal and electrical).",
"title": ""
},
{
"docid": "957170b015e5acd4ab7ce076f5a4c900",
"text": "On many social networking web sites such as Facebook and Twitter, resharing or reposting functionality allows users to share others' content with their own friends or followers. As content is reshared from user to user, large cascades of reshares can form. While a growing body of research has focused on analyzing and characterizing such cascades, a recent, parallel line of work has argued that the future trajectory of a cascade may be inherently unpredictable. In this work, we develop a framework for addressing cascade prediction problems. On a large sample of photo reshare cascades on Facebook, we find strong performance in predicting whether a cascade will continue to grow in the future. We find that the relative growth of a cascade becomes more predictable as we observe more of its reshares, that temporal and structural features are key predictors of cascade size, and that initially, breadth, rather than depth in a cascade is a better indicator of larger cascades. This prediction performance is robust in the sense that multiple distinct classes of features all achieve similar performance. We also discover that temporal features are predictive of a cascade's eventual shape. Observing independent cascades of the same content, we find that while these cascades differ greatly in size, we are still able to predict which ends up the largest.",
"title": ""
},
{
"docid": "d30343a3a888139eb239c6605ccb0f41",
"text": "Low-power wireless networks are quickly becoming a critical part of our everyday infrastructure. Power consumption is a critical concern, but power measurement and estimation is a challenge. We present Powertrace, which to the best of our knowledge is the first system for network-level power profiling of low-power wireless systems. Powertrace uses power state tracking to estimate system power consumption and a structure called energy capsules to attribute energy consumption to activities such as packet transmissions and receptions. With Powertrace, the power consumption of a system can be broken down into individual activities which allows us to answer questions such as “How much energy is spent forwarding packets for node X?”, “How much energy is spent on control traffic and how much on critical data?”, and “How much energy does application X account for?”. Experiments show that Powertrace is accurate to 94% of the energy consumption of a device. To demonstrate the usefulness of Powertrace, we use it to experimentally analyze the power behavior of the proposed IETF standard IPv6 RPL routing protocol and a sensor network data collection protocol. Through using Powertrace, we find the highest power consumers and are able to reduce the power consumption of data collection with 24%. It is our hope that Powertrace will help the community to make empirical energy evaluation a widely used tool in the low-power wireless research community toolbox.",
"title": ""
},
{
"docid": "70b325c1767e9977ac27894cfa051fab",
"text": "BACKGROUND\nDecreased systolic function is central to the pathogenesis of heart failure in millions of patients worldwide, but mechanism-related adverse effects restrict existing inotropic treatments. This study tested the hypothesis that omecamtiv mecarbil, a selective cardiac myosin activator, will augment cardiac function in human beings.\n\n\nMETHODS\nIn this dose-escalating, crossover study, 34 healthy men received a 6-h double-blind intravenous infusion of omecamtiv mecarbil or placebo once a week for 4 weeks. Each sequence consisted of three ascending omecamtiv mecarbil doses (ranging from 0·005 to 1·0 mg/kg per h) with a placebo infusion randomised into the sequence. Vital signs, blood samples, electrocardiographs (ECGs), and echocardiograms were obtained before, during, and after each infusion. The primary aim was to establish maximum tolerated dose (the highest infusion rate tolerated by at least eight participants) and plasma concentrations of omecamtiv mecarbil; secondary aims were evaluation of pharmacodynamic and pharmacokinetic characteristics, safety, and tolerability. This study is registered at ClinicalTrials.gov, number NCT01380223.\n\n\nFINDINGS\nThe maximum tolerated dose of omecamtiv mecarbil was 0·5 mg/kg per h. Omecamtiv mecarbil infusion resulted in dose-related and concentration-related increases in systolic ejection time (mean increase from baseline at maximum tolerated dose, 85 [SD 5] ms), the most sensitive indicator of drug effect (r(2)=0·99 by dose), associated with increases in stroke volume (15 [2] mL), fractional shortening (8% [1]), and ejection fraction (7% [1]; all p<0·0001). Omecamtiv mecarbil increased atrial contractile function, and there were no clinically relevant changes in diastolic function. There were no clinically significant dose-related adverse effects on vital signs, serum chemistries, ECGs, or adverse events up to a dose of 0·625 mg/kg per h. The dose-limiting toxic effect was myocardial ischaemia due to excessive prolongation of systolic ejection time.\n\n\nINTERPRETATION\nThese first-in-man data show highly dose-dependent augmentation of left ventricular systolic function in response to omecamtiv mecarbil and support potential clinical use of the drug in patients with heart failure.\n\n\nFUNDING\nCytokinetics Inc.",
"title": ""
},
{
"docid": "b5ecd3e4e14cae137b88de8bd4c92c5d",
"text": "Design and analysis of ultrahigh-frequency (UHF) micropower rectifiers based on a diode-connected dynamic threshold MOSFET (DTMOST) is discussed. An analytical design model for DTMOST rectifiers is derived based on curve-fitted diode equation parameters. Several DTMOST six-stage charge-pump rectifiers were designed and fabricated using a CMOS 0.18-mum process with deep n-well isolation. Measured results verified the design model with average accuracy of 10.85% for an input power level between -4 and 0 dBm. At the same time, three other rectifiers based on various types of transistors were fabricated on the same chip. The measured results are compared with a Schottky diode solution.",
"title": ""
},
{
"docid": "bde70da078bba2a63899cc7eb2a9aaf9",
"text": "In the past few years, cloud computing develops very quickly. A large amount of data are uploaded and stored in remote public cloud servers which cannot fully be trusted by users. Especially, more and more enterprises would like to manage their data by the aid of the cloud servers. However, when the data outsourced in the cloud are sensitive, the challenges of security and privacy becomes urgent for wide deployment of the cloud systems. This paper proposes a secure data sharing scheme to ensure the privacy of data owner and the security of the outsourced cloud data. The proposed scheme provides flexible utility of data while solving the privacy and security challenges for data sharing. The security and efficiency analysis demonstrate that the designed scheme is feasible and efficient. At last, we discuss its application in electronic health record.",
"title": ""
},
{
"docid": "6883add239f58223ef1941d5044d4aa8",
"text": "A novel jitter equalization circuit is presented that addresses crosstalk-induced jitter in high-speed serial links. A simple model of electromagnetic coupling demonstrates the generation of crosstalk-induced jitter. The analysis highlights unique aspects of crosstalk-induced jitter that differ from far-end crosstalk. The model is used to predict the crosstalk-induced jitter in 2-PAM and 4-PAM, which is compared to measurement. Furthermore, the model suggests an equalizer that compensates for the data-induced electromagnetic coupling between adjacent links and is suitable for pre- or post-emphasis schemes. The circuits are implemented using 130-nm MOSFETs and operate at 5-10 Gb/s. The results demonstrate reduced deterministic jitter and lower bit-error rate (BER). At 10 Gb/s, the crosstalk-induced jitter equalizer opens the eye at 10/sup -12/ BER from 17 to 45 ps and lowers the rms jitter from 8.7 to 6.3 ps.",
"title": ""
},
{
"docid": "ba9030da218e0ba5d4369758d80be5b9",
"text": "Generative adversarial networks (GANs) can implicitly learn rich distributions over images, audio, and data which are hard to model with an explicit likelihood. We present a practical Bayesian formulation for unsupervised and semi-supervised learning with GANs, in conjunction with stochastic gradient Hamiltonian Monte Carlo to marginalize the weights of the generator and discriminator networks. The resulting approach is straightforward and obtains good performance without any standard interventions such as feature matching, or mini-batch discrimination. By exploring an expressive posterior over the parameters of the generator, the Bayesian GAN avoids mode-collapse, produces interpretable candidate samples with notable variability, and in particular provides state-of-the-art quantitative results for semi-supervised learning on benchmarks including SVHN, CelebA, and CIFAR-10, outperforming DCGAN, Wasserstein GANs, and DCGAN ensembles.",
"title": ""
},
{
"docid": "5cfef434d0d33ac5859bcdb77227d7b7",
"text": "The prevalence of mobile phones, the internet-of-things technology, and networks of sensors has led to an enormous and ever increasing amount of data that are now more commonly available in a streaming fashion [1]-[5]. Often, it is assumed - either implicitly or explicitly - that the process generating such a stream of data is stationary, that is, the data are drawn from a fixed, albeit unknown probability distribution. In many real-world scenarios, however, such an assumption is simply not true, and the underlying process generating the data stream is characterized by an intrinsic nonstationary (or evolving or drifting) phenomenon. The nonstationarity can be due, for example, to seasonality or periodicity effects, changes in the users' habits or preferences, hardware or software faults affecting a cyber-physical system, thermal drifts or aging effects in sensors. In such nonstationary environments, where the probabilistic properties of the data change over time, a non-adaptive model trained under the false stationarity assumption is bound to become obsolete in time, and perform sub-optimally at best, or fail catastrophically at worst.",
"title": ""
},
{
"docid": "16546193b0096392d4f5ebf6ad7d35a8",
"text": "According to the ways to see the real environments, mirror metaphor augmented reality systems can be classified into video see-through virtual mirror displays and reflective half-mirror displays. The two systems have distinctive characteristics and application fields with different types of complexity. In this paper, we introduce a system configuration to implement a prototype of a reflective half-mirror display-based augmented reality system. We also present a two-phase calibration method using an extra camera for the system. Finally, we describe three error sources in the proposed system and show the result of analysis of these errors with several experiments.",
"title": ""
},
{
"docid": "bbea93884f1f0189be1061939783a1c0",
"text": "Severe adolescent female stress urinary incontinence (SAFSUI) can be defined as female adolescents between the ages of 12 and 17 years complaining of involuntary loss of urine multiple times each day during normal activities or sneezing or coughing rather than during sporting activities. An updated review of its likely prevalence, etiology, and management is required. The case of a 15-year-old female adolescent presenting with a 7-year history of SUI resistant to antimuscarinic medications and 18 months of intensive physiotherapy prompted this review. Issues of performing physical and urodynamic assessment at this young age were overcome in order to achieve the diagnosis of urodynamic stress incontinence (USI). Failed use of tampons was followed by the insertion of (retropubic) suburethral synthetic tape (SUST) under assisted local anesthetic into tissues deemed softer than the equivalent for an adult female. Whereas occasional urinary incontinence can occur in between 6 % and 45 % nulliparous adolescents, the prevalence of non‐neurogenic SAFSUI is uncertain but more likely rare. Risk factors for the occurrence of more severe AFSUI include obesity, athletic activities or high-impact training, and lung diseases such as cystic fibrosis (CF). This first reported use of a SUST in a patient with SAFSUI proved safe and completely curative. Artificial urinary sphincters, periurethral injectables and pubovaginal slings have been tried previously in equivalent patients. SAFSUI is a relatively rare but physically and emotionally disabling presentation. Multiple conservative options may fail, necessitating surgical management; SUST can prove safe and effective.",
"title": ""
},
{
"docid": "cac556bfbdf64e655766da2404cb24c2",
"text": "How can we learn a classifier that is “fair” for a protected or sensitive group, when we do not know if the input to the classifier belongs to the protected group? How can we train such a classifier when data on the protected group is difficult to attain? In many settings, finding out the sensitive input attribute can be prohibitively expensive even during model training, and sometimes impossible during model serving. For example, in recommender systems, if we want to predict if a user will click on a given recommendation, we often do not know many attributes of the user, e.g., race or age, and many attributes of the content are hard to determine, e.g., the language or topic. Thus, it is not feasible to use a different classifier calibrated based on knowledge of the sensitive attribute. Here, we use an adversarial training procedure to remove information about the sensitive attribute from the latent representation learned by a neural network. In particular, we study how the choice of data for the adversarial training affects the resulting fairness properties. We find two interesting results: a small amount of data is needed to train these adversarial models, and the data distribution empirically drives the adversary’s notion of fairness. ACM Reference format: Alex Beutel, Jilin Chen, Zhe Zhao, Ed H. Chi. 2017. Data Decisions and Theoretical Implications when Adversarially Learning Fair Representations. In Proceedings of 2017 Workshop on Fairness, Accountability, and Transparency in Machine Learning, Halifax, Canada, August 2017 (FAT/ML ’17), 5 pages.",
"title": ""
}
] | scidocsrr |
87f1dfeed6c0a652ff01913779db2d48 | RECENT ADVANCES IN PERSONAL RECOMMENDER SYSTEMS | [
{
"docid": "21756eeb425854184ba2ea722a935928",
"text": "Collaborative filtering aims at learning predictive models of user preferences, interests or behavior from community data, that is, a database of available user preferences. In this article, we describe a new family of model-based algorithms designed for this task. These algorithms rely on a statistical modelling technique that introduces latent class variables in a mixture model setting to discover user communities and prototypical interest profiles. We investigate several variations to deal with discrete and continuous response variables as well as with different objective functions. The main advantages of this technique over standard memory-based methods are higher accuracy, constant time prediction, and an explicit and compact model representation. The latter can also be used to mine for user communitites. The experimental evaluation shows that substantial improvements in accucracy over existing methods and published results can be obtained.",
"title": ""
}
] | [
{
"docid": "5857805620b43cafa7a18461dfb74363",
"text": "In this paper, we give an overview for the shared task at the 5th CCF Conference on Natural Language Processing & Chinese Computing (NLPCC 2016): Chinese word segmentation for micro-blog texts. Different with the popular used newswire datasets, the dataset of this shared task consists of the relatively informal micro-texts. Besides, we also use a new psychometric-inspired evaluation metric for Chinese word segmentation, which addresses to balance the very skewed word distribution at different levels of difficulty. The data and evaluation codes can be downloaded from https://github.com/FudanNLP/ NLPCC-WordSeg-Weibo.",
"title": ""
},
{
"docid": "f0958d2c952c7140c998fa13a2bf4374",
"text": "OBJECTIVE\nThe objective of this study is to outline explicit criteria for assessing the contribution of qualitative empirical studies in health and medicine, leading to a hierarchy of evidence specific to qualitative methods.\n\n\nSTUDY DESIGN AND SETTING\nThis paper arose from a series of critical appraisal exercises based on recent qualitative research studies in the health literature. We focused on the central methodological procedures of qualitative method (defining a research framework, sampling and data collection, data analysis, and drawing research conclusions) to devise a hierarchy of qualitative research designs, reflecting the reliability of study conclusions for decisions made in health practice and policy.\n\n\nRESULTS\nWe describe four levels of a qualitative hierarchy of evidence-for-practice. The least likely studies to produce good evidence-for-practice are single case studies, followed by descriptive studies that may provide helpful lists of quotations but do not offer detailed analysis. More weight is given to conceptual studies that analyze all data according to conceptual themes but may be limited by a lack of diversity in the sample. Generalizable studies using conceptual frameworks to derive an appropriately diversified sample with analysis accounting for all data are considered to provide the best evidence-for-practice. Explicit criteria and illustrative examples are described for each level.\n\n\nCONCLUSION\nA hierarchy of evidence-for-practice specific to qualitative methods provides a useful guide for the critical appraisal of papers using these methods and for defining the strength of evidence as a basis for decision making and policy generation.",
"title": ""
},
{
"docid": "1af7a41e5cac72ed9245b435c463b366",
"text": "We present a novel method for key term extraction from text documents. In our method, document is modeled as a graph of semantic relationships between terms of that document. We exploit the following remarkable feature of the graph: the terms related to the main topics of the document tend to bunch up into densely interconnected subgraphs or communities, while non-important terms fall into weakly interconnected communities, or even become isolated vertices. We apply graph community detection techniques to partition the graph into thematically cohesive groups of terms. We introduce a criterion function to select groups that contain key terms discarding groups with unimportant terms. To weight terms and determine semantic relatedness between them we exploit information extracted from Wikipedia.\n Using such an approach gives us the following two advantages. First, it allows effectively processing multi-theme documents. Second, it is good at filtering out noise information in the document, such as, for example, navigational bars or headers in web pages.\n Evaluations of the method show that it outperforms existing methods producing key terms with higher precision and recall. Additional experiments on web pages prove that our method appears to be substantially more effective on noisy and multi-theme documents than existing methods.",
"title": ""
},
{
"docid": "89357509bc9b4937f85ed1c1b028cc00",
"text": "Rotator cuff disorders are considered to be among the most common causes of shoulder pain and disability encountered in both primary and secondary care. The general pathology of subacromial impingement generally relates to a chronic repetitive process in which the conjoint tendon of the rotator cuff undergoes repetitive compression and micro trauma as it passes under the coracoacromial arch. However, acute traumatic injuries may also lead to this condition. Diagnosis remains a clinical one, however advances in imaging modalities have enabled clinicians to have an increased understanding of the pathological process. Ultrasound scanning appears to be a justifiable and cost-effective assessment tool following plain radiographs in the assessment of shoulder impingement, with MRI scans being reserved for more complex cases. A period of observed conservative management including the use of NSAIDs, physiotherapy with or without the use of subacromial steroid injections is a well-established and accepted practice. However, in young patients or following any traumatic injury to the rotator cuff, surgery should be considered early. If surgery is to be performed this should be done arthroscopically and in the case of complete rotator cuff rupture the tendon should be repaired where possible.",
"title": ""
},
{
"docid": "e33b3ebfc46c371253cf7f68adbbe074",
"text": "Although backward folding of the epiglottis is one of the signal events of the mammalian adult swallow, the epiglottis does not fold during the infant swallow. How this functional change occurs is unknown, but we hypothesize that a change in swallow mechanism occurs with maturation, prior to weaning. Using videofluoroscopy, we found three characteristic patterns of swallowing movement at different ages in the pig: an infant swallow, a transitional swallow and a post-weaning (juvenile or adult) swallow. In animals of all ages, the dorsal region of the epiglottis and larynx was held in an intranarial position by a muscular sphincter formed by the palatopharyngeal arch. In the infant swallow, increasing pressure in the oropharynx forced a liquid bolus through the piriform recesses on either side of a relatively stationary epiglottis into the esophagus. As the infant matured, the palatopharyngeal arch and the soft palate elevated at the beginning of the swallow, so exposing a larger area of the epiglottis to bolus pressure. In transitional swallows, the epiglottis was tilted backward relatively slowly by a combination of bolus pressure and squeezing of the epiglottis by closure of the palatopharyngeal sphincter. The bolus, however, traveled alongside but never over the tip of the epiglottis. In the juvenile swallow, the bolus always passed over the tip of the epiglottis. The tilting of the epiglottis resulted from several factors, including the action of the palatopharyngeal sphincter, higher bolus pressure exerted on the epiglottis and the allometry of increased size. In both transitional and juvenile swallows, the subsequent relaxation of the palatopharyngeal sphincter released the epiglottis, which sprang back to its original intranarial position.",
"title": ""
},
{
"docid": "ee4dbe3dc0352a60c61ec8d36ebda56d",
"text": "This paper proposes a two-axis-decoupled solar tracker based on parallel mechanism. Utilizing Grassmann line geometry, the type design of the two-axis solar tracker is investigated. Then, singularity is studied to obtain the workspace without singularities. By using the virtual work principle, the inverse dynamics is derived to find out the driving torque. Taking Beijing as a sample city where the solar tracker is placed, the motion trajectory of the tracker is planned to collect the maximum solar energy. The position of the mass center of the solar mirror on the platform is optimized to minimize the driving torque. The driving torque of the proposed tracker is compared with that of a conventional serial tracker, which shows that the proposed tracker can greatly reduce the driving torque and the reducers with large reduction ratio are not necessary. Thus, the complexity and power dissipation of the system can be reduced.",
"title": ""
},
{
"docid": "f7e4c0300f1483883956be3cb5ccc174",
"text": "Despite of the fact that graph-based methods are gaining more and more popularity in different scientific areas, it has to be considered that the choice of an appropriate algorithm for a given application is still the most crucial task. The lack of a large database of graphs makes the task of comparing the performance of different graph matching algorithms difficult, and often the selection of an algorithm is made on the basis of a few experimental results available. In this paper we present an experimental comparative evaluation of the performance of four graph matching algorithms. In order to perform this comparison, we have built and made available a large database of graphs, which is also described in detail in this article. 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "bdf3afc900c92867c2af9fccabe27451",
"text": "In conventional HTTP-based adaptive streaming (HAS), a video source is encoded at multiple levels of constant bitrate representations, and a client makes its representation selections according to the measured network bandwidth. While greatly simplifying adaptation to the varying network conditions, this strategy is not the best for optimizing the video quality experienced by end users. Quality fluctuation can be reduced if the natural variability of video content is taken into consideration. In this work, we study the design of a client rate adaptation algorithm to yield consistent video quality. We assume that clients have visibility into incoming video within a finite horizon. We also take advantage of the client-side video buffer, by using it as a breathing room for not only network bandwidth variability, but also video bitrate variability. The challenge, however, lies in how to balance these two variabilities to yield consistent video quality without risking a buffer underrun. We propose an optimization solution that uses an online algorithm to adapt the video bitrate step-by-step, while applying dynamic programming at each step. We incorporate our solution into PANDA -- a practical rate adaptation algorithm designed for HAS deployment at scale.",
"title": ""
},
{
"docid": "02d8a6c039c3ab37e78160c7a9831714",
"text": "In this paper we present the design, fabrication and demonstration of an X-band phased array capable of wide-angle scanning. A new non-symmetric element for wideband tightly coupled dipole arrays is integrated with a low-profile microstrip balun printed on the array ground plane. The feed connects to the array aperture with vertical twin-wire transmission lines that concurrently perform impedance matching. The proposed element arms are identical near the center feed portion but dissimilar towards the ends, forming a ball-and-cup. A 64 element array prototype is verified experimentally and compared to numerical simulation. The array aperture is placed λ/7 (at 8 GHz) above a ground plane and shown to maintain a VSWR < 2 from 8–12.5 GHz while scanning up to 75° and 60° in E and H-plane, respectively.",
"title": ""
},
{
"docid": "72b3fbd8c7f03a4ad1e36ceb5418cba6",
"text": "The risk for multifactorial diseases is determined by risk factors that frequently apply across disorders (universal risk factors). To investigate unresolved issues on etiology of and individual’s susceptibility to multifactorial diseases, research focus should shift from single determinant-outcome relations to effect modification of universal risk factors. We present a model to investigate universal risk factors of multifactorial diseases, based on a single risk factor, a single outcome measure, and several effect modifiers. Outcome measures can be disease overriding, such as clustering of disease, frailty and quality of life. “Life course epidemiology” can be considered as a specific application of the proposed model, since risk factors and effect modifiers of multifactorial diseases typically have a chronic aspect. Risk factors are categorized into genetic, environmental, or complex factors, the latter resulting from interactions between (multiple) genetic and environmental factors (an example of a complex factor is overweight). The proposed research model of multifactorial diseases assumes that determinant-outcome relations differ between individuals because of modifiers, which can be divided into three categories. First, risk-factor modifiers that determine the effect of the determinant (such as factors that modify gene-expression in case of a genetic determinant). Second, outcome modifiers that determine the expression of the studied outcome (such as medication use). Third, generic modifiers that determine the susceptibility for multifactorial diseases (such as age). A study to assess disease risk during life requires phenotype and outcome measurements in multiple generations with a long-term follow up. Multiple generations will also enable to separate genetic and environmental factors. Traditionally, representative individuals (probands) and their first-degree relatives have been included in this type of research. We put forward that a three-generation design is the optimal approach to investigate multifactorial diseases. This design has statistical advantages (precision, multiple-informants, separation of non-genetic and genetic familial transmission, direct haplotype assessment, quantify genetic effects), enables unique possibilities to study social characteristics (socioeconomic mobility, partner preferences, between-generation similarities), and offers practical benefits (efficiency, lower non-response). LifeLines is a study based on these concepts. It will be carried out in a representative sample of 165,000 participants from the northern provinces of the Netherlands. LifeLines will contribute to the understanding of how universal risk factors are modified to influence the individual susceptibility to multifactorial diseases, not only at one stage of life but cumulatively over time: the lifeline.",
"title": ""
},
{
"docid": "e12ac0716b29f35fff1ec51b1abb6326",
"text": "In my commentary in response to the 3 articles (McKenzie & Lounsbery, 2013; Rink, 2013; Ward, 2013), I focus on 3 areas: (a) content knowledge, (b) a holistic approach to physical education, and (c) policy impact. I use the term quality teaching rather than \"teacher effectiveness.\" Quality teaching is a term with the potential to move our attention beyond a focus merely on issues of effectiveness relating to the achievement of prespecified objectives. I agree with Ward that teacher content knowledge is limited in physical education, and I argue that if the student does not have a connection to or relationship with the content, this will diminish their learning gains. I also argue for a more holistic approach to physical education coming from a broader conception. Physical educators who teach the whole child advocate for a plethora of physical activity, skills, knowledge, and positive attitudes that foster healthy and active playful lifestyles. Play is a valuable educational experience. I also endorse viewing assessment from different perspectives and discuss assessment through a social-critical political lens. The 3 articles also have implications for policy. Physical education is much broader than just physical activity, and we harm the future potential of our field if we adopt a narrow agenda. Looking to the future, I propose that we broaden the kinds of research that we value, support, and appreciate in our field.",
"title": ""
},
{
"docid": "77b4be1fb0b87eb1ee0399c073a7b78f",
"text": "In this work, we present an interactive system for visual analysis of urban traffic congestion based on GPS trajectories. For these trajectories we develop strategies to extract and derive traffic jam information. After cleaning the trajectories, they are matched to a road network. Subsequently, traffic speed on each road segment is computed and traffic jam events are automatically detected. Spatially and temporally related events are concatenated in, so-called, traffic jam propagation graphs. These graphs form a high-level description of a traffic jam and its propagation in time and space. Our system provides multiple views for visually exploring and analyzing the traffic condition of a large city as a whole, on the level of propagation graphs, and on road segment level. Case studies with 24 days of taxi GPS trajectories collected in Beijing demonstrate the effectiveness of our system.",
"title": ""
},
{
"docid": "85b72dedb0c874fcfbb71c1d6f9fce42",
"text": "In this paper, we present an optimization of the Odlyzko and Schönhage algorithm that efficiently computes the Zeta function at large height on the critical line, together with computation of zeros of the Riemann Zeta function thanks to an implementation of this technique. The first family of computations consists in the verification of the Riemann Hypothesis on all of the first 10^13 non-trivial zeros. The second family of computations consists in verifying the Riemann Hypothesis at very large height, for different heights, while collecting statistics in these zones. For example, we were able to compute two billion zeros from the 10^24-th zero of the Riemann Zeta function.",
"title": ""
},
{
"docid": "ff6b4840787027df75873f38fbb311b4",
"text": "Electronic healthcare (eHealth) systems have replaced paper-based medical systems due to the attractive features such as universal accessibility, high accuracy, and low cost. As a major component of eHealth systems, mobile healthcare (mHealth) applies mobile devices, such as smartphones and tablets, to enable patient-to-physician and patient-to-patient communications for better healthcare and quality of life (QoL). Unfortunately, patients' concerns on potential leakage of personal health records (PHRs) is the biggest stumbling block. In current eHealth/mHealth networks, patients' medical records are usually associated with a set of attributes like existing symptoms and undergoing treatments based on the information collected from portable devices. To guarantee the authenticity of those attributes, PHRs should be verifiable. However, due to the linkability between identities and PHRs, existing mHealth systems fail to preserve patient identity privacy while providing medical services. To solve this problem, we propose a decentralized system that leverages users' verifiable attributes to authenticate each other while preserving attribute and identity privacy. Moreover, we design authentication strategies with progressive privacy requirements in different interactions among participating entities. Finally, we have thoroughly evaluated the security and computational overheads for our proposed schemes via extensive simulations and experiments.",
"title": ""
},
{
"docid": "e5a18d6df921ab96da8e106cdb4eeac7",
"text": "This article extends psychological methods and concepts into a domain that is as profoundly consequential as it is poorly understood: intelligence analysis. We report findings from a geopolitical forecasting tournament that assessed the accuracy of more than 150,000 forecasts of 743 participants on 199 events occurring over 2 years. Participants were above average in intelligence and political knowledge relative to the general population. Individual differences in performance emerged, and forecasting skills were surprisingly consistent over time. Key predictors were (a) dispositional variables of cognitive ability, political knowledge, and open-mindedness; (b) situational variables of training in probabilistic reasoning and participation in collaborative teams that shared information and discussed rationales (Mellers, Ungar, et al., 2014); and (c) behavioral variables of deliberation time and frequency of belief updating. We developed a profile of the best forecasters; they were better at inductive reasoning, pattern detection, cognitive flexibility, and open-mindedness. They had greater understanding of geopolitics, training in probabilistic reasoning, and opportunities to succeed in cognitively enriched team environments. Last but not least, they viewed forecasting as a skill that required deliberate practice, sustained effort, and constant monitoring of current affairs.",
"title": ""
},
{
"docid": "c00e78121637ee9bcf1640c41204afd0",
"text": "In this paper we present a methodology for analyzing polyphonic musical passages comprised by notes that exhibit a harmonically fixed spectral profile (such as piano notes). Taking advantage of this unique note structure we can model the audio content of the musical passage by a linear basis transform and use non-negative matrix decomposition methods to estimate the spectral profile and the temporal information of every note. This approach results in a very simple and compact system that is not knowledge-based, but rather learns notes by observation.",
"title": ""
},
{
"docid": "efffd36e611546d2da975f8a182fb5a5",
"text": "Annona muricata is a member of the Annonaceae family and is a fruit tree with a long history of traditional use. A. muricata, also known as soursop, graviola and guanabana, is an evergreen plant that is mostly distributed in tropical and subtropical regions of the world. The fruits of A. muricata are extensively used to prepare syrups, candies, beverages, ice creams and shakes. A wide array of ethnomedicinal activities is contributed to different parts of A. muricata, and indigenous communities in Africa and South America extensively use this plant in their folk medicine. Numerous investigations have substantiated these activities, including anticancer, anticonvulsant, anti-arthritic, antiparasitic, antimalarial, hepatoprotective and antidiabetic activities. Phytochemical studies reveal that annonaceous acetogenins are the major constituents of A. muricata. More than 100 annonaceous acetogenins have been isolated from leaves, barks, seeds, roots and fruits of A. muricata. In view of the immense studies on A. muricata, this review strives to unite available information regarding its phytochemistry, traditional uses and biological activities.",
"title": ""
},
{
"docid": "f22375b6d29a83815aedd999cb945027",
"text": "INTRODUCTION\nNumerous methods for motor unit number estimation (MUNE) have been developed. The objective of this article is to summarize and compare the major methods and the available data regarding their reproducibility, validity, application, refinement, and utility.\n\n\nMETHODS\nUsing specified search criteria, a systematic review of the literature was performed. Reproducibility, normative data, application to specific diseases and conditions, technical refinements, and practicality were compiled into a comprehensive database and analyzed.\n\n\nRESULTS\nThe most commonly reported MUNE methods are the incremental, multiple-point stimulation, spike-triggered averaging, and statistical methods. All have established normative data sets and high reproducibility. MUNE provides quantitative assessments of motor neuron loss and has been applied successfully to the study of many clinical conditions, including amyotrophic lateral sclerosis and normal aging.\n\n\nCONCLUSIONS\nMUNE is an important research technique in human subjects, providing important data regarding motor unit populations and motor unit loss over time.",
"title": ""
},
{
"docid": "b7f21081cfd7c87cfce191978ecc218a",
"text": "In less than half a century, molecular markers have totally changed our view of nature, and in the process they have evolved themselves. However, all of the molecular methods developed over the years to detect variation do so in one of only three conceptually different classes of marker: protein variants (allozymes), DNA sequence polymorphism and DNA repeat variation. The latest techniques promise to provide cheap, high-throughput methods for genotyping existing markers, but might other traditional approaches offer better value for some applications?",
"title": ""
}
] | scidocsrr |
e87fa1711329d3b3f0a6b56ad4080445 | IR-UWB Radar Demonstrator for Ultra-Fine Movement Detection and Vital-Sign Monitoring | [
{
"docid": "45f27e9c768e6fa0a1f4aa63532827ff",
"text": "Antennas are mandatory system components for UWB communication systems. The paper presents a comprehensive approach for the characterization of UWB antenna concepts. Measurements of the transient responses of a LPDA and a Vivaldi antenna prove the effectivity of the presented model.",
"title": ""
}
] | [
{
"docid": "9af2a00a9a059a87a188d351f7de4904",
"text": "The cities of Paris, London, Chicago, and New York (among others) have recently launched large-scale bike-share systems to facilitate the use of bicycles for urban commuting. This paper estimates the relationship between aspects of bike-share system design and ridership. Specifically, we estimate the effects on ridership of station accessibility (how far the commuter must walk to reach a station) and of bike-availability (the likelihood of finding a bike at the station). Our analysis is based on a structural demand model that considers the random-utility maximizing choices of spatially distributed commuters, and it is estimated using highfrequency system-use data from the bike-share system in Paris. The role of station accessibility is identified using cross-sectional variation in station location and high -frequency changes in commuter choice sets; bike-availability effects are identified using longitudinal variation. Because the scale of our data, (in particular the high-frequency changes in choice sets) render traditional numerical estimation techniques infeasible, we develop a novel transformation of our estimation problem: from the time domain to the “station stockout state” domain. We find that a 10% reduction in distance traveled to access bike-share stations (about 13 meters) can increase system-use by 6.7% and that a 10% increase in bikeavailability can increase system-use by nearly 12%. Finally, we use our estimates to develop a calibrated counterfactual simulation demonstrating that the bike-share system in central Paris would have 29.41% more ridership if its station network design had incorporated our estimates of commuter preferences—with no additional spending on bikes or docking points.",
"title": ""
},
{
"docid": "e8b4f006d0d8bc1fb504ae4268d6f3ac",
"text": "Notes: (1) These questions require thought, but do not require long answers. Please be as concise as possible. (2) If you have a question about this homework, we encourage you to post your question on our Piazza forum, at https://piazza.com/stanford/fall2014/cs229. (3) If you missed the first lecture or are unfamiliar with the collaboration or honor code policy, please read the policy on Handout #1 (available from the course website) before starting work. (4) For problems that require programming, please include in your submission a printout of your code (with comments) and any figures that you are asked to plot. (5) If you are an on-campus (non-SCPD) student, please print, fill out, and include a copy of the cover sheet (enclosed as the final page of this document), and include the cover sheet as the first page of your submission. as a single PDF file under 20MB in size. If you have trouble submitting online, you can also email your submission to [email protected]. However, we strongly recommend using the website submission method as it will provide confirmation of submission, and also allow us to track and return your graded homework to you more easily. If you are scanning your document by cellphone, please check the Piazza forum for recommended cellphone scanning apps and best practices.",
"title": ""
},
{
"docid": "b74922324e4b0e67092b3303068c8794",
"text": "Data mining techniques are used to extract useful knowledge from raw data. The extracted knowledge is valuable and significantly affects the decision maker. Educational data mining (EDM) is a method for extracting useful information that could potentially affect an organization. The increase of technology use in educational systems has led to the storage of large amounts of student data, which makes it important to use EDM to improve teaching and learning processes. EDM is useful in many different areas including identifying at-risk students, identifying priority learning needs for different groups of students, increasing graduation rates, effectively assessing institutional performance, maximizing campus resources, and optimizing subject curriculum renewal. This paper surveys the relevant studies in the EDM field and includes the data and methodologies used in those studies.",
"title": ""
},
{
"docid": "43a4fe61a35c1c34335ac4d1f86ebea3",
"text": "The augmented Lagrangian method (ALM) is a benchmark for solving a convex minimization model with linear constraints. We consider the special case where the objective is the sum of m functions without coupled variables. For solving this separable convex minimization model, it is usually required to decompose the ALM subproblem at each iteration into m smaller subproblems, each of which only involves one function in the original objective. Easier subproblems capable of taking full advantage of the functions’ properties individually could thus be generated. In this paper, we focus on the case where full Jacobian decomposition is applied to ALM subproblems, i.e., all the decomposed ALM subproblems are eligible for parallel computation at each iteration. For the first time, we show by an example that the ALM with full Jacobian decomposition could be divergent. To guarantee the convergence, we suggest combining an under-relaxation step and the output of the ALM with full Jacobian decomposition. A novel analysis is presented to illustrate how to choose refined step sizes for this under-relaxation step. Accordingly, a new splitting version of the ALM with full Jacobian decomposition is proposed. We derive the worst-case O(1/k) convergence rate measured by the iteration complexity (where k represents the iteration counter) in both the ergodic and a nonergodic senses for the new algorithm. Finally, an assignment problem is tested to illustrate the efficiency of the new algorithm.",
"title": ""
},
{
"docid": "e181f73c36c1d8c9463ef34da29d9e03",
"text": "This paper examines prospects and limitations of citation studies in the humanities. We begin by presenting an overview of bibliometric analysis, noting several barriers to applying this method in the humanities. Following that, we present an experimental tool for extracting and classifying citation contexts in humanities journal articles. This tool reports the bibliographic information about each reference, as well as three features about its context(s): frequency, locationin-document, and polarity. We found that extraction was highly successful (above 85%) for three of the four journals, and statistics for the three citation figures were broadly consistent with previous research. We conclude by noting several limitations of the sentiment classifier and suggesting future areas for refinement. .................................................................................................................................................................................",
"title": ""
},
{
"docid": "77aea5cc0a74546f5c8fef1dd39770bc",
"text": "Road condition data are important in transportation management systems. Over the last decades, significant progress has been made and new approaches have been proposed for efficient collection of pavement condition data. However, the assessment of unpaved road conditions has been rarely addressed in transportation research. Unpaved roads constitute approximately 40% of the U.S. road network, and are the lifeline in rural areas. Thus, it is important for timely identification and rectification of deformation on such roads. This article introduces an innovative Unmanned Aerial Vehicle (UAV)-based digital imaging system focusing on efficient collection of surface condition data over rural roads. In contrast to other approaches, aerial assessment is proposed by exploring aerial imagery acquired from an unpiloted platform to derive a threedimensional (3D) surface model over a road distress area for distress measurement. The system consists of a lowcost model helicopter equipped with a digital camera, a Global Positioning System (GPS) receiver and an Inertial Navigation System (INS), and a geomagnetic sensor. A set of image processing algorithms has been developed for precise orientation of the acquired images, and generation of 3D road surface models and orthoimages, which allows for accurate measurement of the size and the dimension of the road surface distresses. The developed system has been tested over several test sites ∗To whom correspondence should be addressed. E-mail: chunsunz@ unimelb.edu.au. with roads of various surface distresses. The experiments show that the system is capable for providing 3D information of surface distresses for road condition assessment. Experiment results demonstrate that the system is very promising and provides high accuracy and reliable results. Evaluation of the system using 2D and 3D models with known dimensions shows that subcentimeter measurement accuracy is readily achieved. 
The comparison of the derived 3D information with the onsite manual measurements of the road distresses reveals differences of 0.50 cm, demonstrating the potential of the presented system for future practice.",
"title": ""
},
{
"docid": "92fab94ccaf9495fed86eb456602b3b4",
"text": "We present a transformation-grounded image generation network for novel 3D view synthesis from a single image. Our approach first explicitly infers the parts of the geometry visible both in the input and novel views and then casts the remaining synthesis problem as image completion. Specifically, we both predict a flow to move the pixels from the input to the novel view along with a novel visibility map that helps deal with occulsion/disocculsion. Next, conditioned on those intermediate results, we hallucinate (infer) parts of the object invisible in the input image. In addition to the new network structure, training with a combination of adversarial and perceptual loss results in a reduction in common artifacts of novel view synthesis such as distortions and holes, while successfully generating high frequency details and preserving visual aspects of the input image. We evaluate our approach on a wide range of synthetic and real examples. Both qualitative and quantitative results show our method achieves significantly better results compared to existing methods.",
"title": ""
},
{
"docid": "921b4ecaed69d7396285909bd53a3790",
"text": "Brain mapping transforms the brain cortical surface to canonical planar domains, which plays a fundamental role in morphological study. Most existing brain mapping methods are based on angle preserving maps, which may introduce large area distortions. This work proposes an area preserving brain mapping method based on Monge-Brenier theory. The brain mapping is intrinsic to the Riemannian metric, unique, and diffeomorphic. The computation is equivalent to convex energy minimization and power Voronoi diagram construction. Comparing to the existing approaches based on Monge-Kantorovich theory, the proposed one greatly reduces the complexity (from n2 unknowns to n ), and improves the simplicity and efficiency. Experimental results on caudate nucleus surface mapping and cortical surface mapping demonstrate the efficacy and efficiency of the proposed method. Conventional methods for caudate nucleus surface mapping may suffer from numerical instability, in contrast, current method produces diffeomorpic mappings stably. In the study of cortical surface classification for recognition of Alzheimer's Disease, the proposed method outperforms some other morphometry features.",
"title": ""
},
{
"docid": "c8009d5823d7af91dc9b56a4d19eed27",
"text": "Built to Last's answer is to consciously build a compmy with even more care than the hotels, airplanes, or computers from which the company earns revenue. Building a company requires much more than hiring smart employees and aggressive salespeople. Visionary companies consider the personality of their potential employees and how they will fare in the company culture. They treasure employees dedicated to the company's mission, while those that don't are \" ejected like a virus. \" They carefully choose goals and develop cultures that encourage innovation and experimentation. Visionary companies plan for the future, measure their current production, and revise plans when conditions change. Much like the TV show Biography, Built to Last gives fascinating historical insight into the birth and growth of The most radical of the three books I reviewed, The Fifth Discipline, can fundamentally change the way you view the world. The Flremise is that businesses, schools, gopernments, and other organizations can best succeed if they are learning organizations. The Fifth Discipline is Peter Senge's vehicle for explaining how five complementary components-systems thinking, personal mastery, mental models, shared vision, and team learning-can support continuous learning and therefore sustainable iniprovement. Senge, a professor a t MIT's Sloan School of Government and a director of the Society for Organizational Learning, looks beyont: simple cause-and-effect explanation:j and instead advocates \" systems thinking \" to discover a more complete understanding of how and why events occur. Systems thinkers go beyond the data readily available, question assumptions, and try to identify the many types of activities that can occur simultaneously. The need for such a worldview is made clear early in the book with the role-playing \" beer game. \" In this game, three participants play the roles of store manager, beverage distributor, and beer brewer. 
Each has information that would typically he available: the store manager knows how many cases of beer are in inventory , how many are on order, and how many were sold in the last week. The distributor tracks the orders placed with the brewery, inventory, orders received this week from each store, and so on. As the customers' demands vary, the manager, distributor, and brewer make what seem to be reasonable decisions to change the amount they order or brew. Thousands of people have played this and, unfortunately, the results are extremely consistent. As each player tries to maximize profits, each fails to consider how his …",
"title": ""
},
{
"docid": "b61985ecdb51982e6e31b19c862f18e2",
"text": "Autonomous indoor navigation of Micro Aerial Vehicles (MAVs) possesses many challenges. One main reason is because GPS has limited precision in indoor environments. The additional fact that MAVs are not able to carry heavy weight or power consuming sensors, such as range finders, makes indoor autonomous navigation a challenging task. In this paper, we propose a practical system in which a quadcopter autonomously navigates indoors and finds a specific target, i.e. a book bag, by using a single camera. A deep learning model, Convolutional Neural Network (ConvNet), is used to learn a controller strategy that mimics an expert pilot’s choice of action. We show our system’s performance through real-time experiments in diverse indoor locations. To understand more about our trained network, we use several visualization techniques.",
"title": ""
},
{
"docid": "09adc565d4a36f396ccd0e1dcb046df0",
"text": "We address the problem of finding realistic geometric corrections to a foreground object such that it appears natural when composited into a background image. To achieve this, we propose a novel Generative Adversarial Network (GAN) architecture that utilizes Spatial Transformer Networks (STNs) as the generator, which we call Spatial Transformer GANs (ST-GANs). ST-GANs seek image realism by operating in the geometric warp parameter space. In particular, we exploit an iterative STN warping scheme and propose a sequential training strategy that achieves better results compared to naive training of a single generator. One of the key advantages of ST-GAN is its applicability to high-resolution images indirectly since the predicted warp parameters are transferable between reference frames. We demonstrate our approach in two applications: (1) visualizing how indoor furniture (e.g. from product images) might be perceived in a room, (2) hallucinating how accessories like glasses would look when matched with real portraits.",
"title": ""
},
{
"docid": "41d5b01cf6f731db0752af0953395327",
"text": "Deep neural networks have been shown to suffer from a surprising weakness: their classification outputs can be changed by small, non-random perturbations of their inputs. This adversarial example phenomenon has been explained as originating from deep networks being “too linear” (Goodfellow et al., 2014). We show here that the linear explanation of adversarial examples presents a number of limitations: the formal argument is not convincing; linear classifiers do not always suffer from the phenomenon, and when they do their adversarial examples are different from the ones affecting deep networks. We propose a new perspective on the phenomenon. We argue that adversarial examples exist when the classification boundary lies close to the submanifold of sampled data, and present a mathematical analysis of this new perspective in the linear case. We define the notion of adversarial strength and show that it can be reduced to the deviation angle between the classifier considered and the nearest centroid classifier. Then, we show that the adversarial strength can be made arbitrarily high independently of the classification performance due to a mechanism that we call boundary tilting. This result leads us to defining a new taxonomy of adversarial examples. Finally, we show that the adversarial strength observed in practice is directly dependent on the level of regularisation used and the strongest adversarial examples, symptomatic of overfitting, can be avoided by using a proper level of regularisation.",
"title": ""
},
{
"docid": "b82b46fc0d886e3e87b757a6ca14d4bb",
"text": "Objective: To study the efficacy and safety of an indigenously designed low cost nasal bubble continuous positive airway pressure (NB-CPAP) in neonates admitted with respiratory distress. Study Design: A descriptive study. Place and Duration of Study: Combined Military Hospital (CMH), Peshawar from Jan 2014 to May 2014. Material and Methods: Fifty neonates who developed respiratory distress within 6 hours of life were placed on an indigenous NB-CPAP device (costing 220 PKR) and evaluated for gestational age, weight, indications, duration on NB-CPAP, pre-defined outcomes and complications. Results: A total of 50 consecutive patients with respiratory distress were placed on NB-CPAP. Male to Female ratio was 2.3:1. Mean weight was 2365.85 ± 704 grams and mean gestational age was 35.41 ± 2.9 weeks. Indications for applying NB-CPAP were transient tachypnea of the newborn (TTN, 52%) and respiratory distress syndrome (RDS, 44%). Most common complications were abdominal distension (15.6%) and pulmonary hemorrhage (6%). Out of 50 infants placed on NB-CPAP, 35 (70%) were managed on NB-CPAP alone while 15 (30%) needed mechanical ventilation following a trial of NB-CPAP. Conclusion: In 70% of babies invasive mechanical ventilation was avoided using NB-CPAP.",
"title": ""
},
{
"docid": "d380a5de56265c80309733370c612316",
"text": "Two experiments demonstrated that self-perceptions and social perceptions may persevere after the initial basis for such perceptions has been completely discredited. In both studies subjects first received false feedback, indicating that they had either succeeded or failed on a novel discrimination task and then were thoroughly debriefed concerning the predetermined and random nature of this outcome manipulation. In experiment 2, both the initial outcome manipulation and subsequent debriefing were watched and overheard by observers. Both actors and observers showed substantial perseverance of initial impressions concerning the actors' performance and abilities following a standard \"outcome\" debriefing. \"Process\" debriefing, in which explicit discussion of the perseverance process was provided, generally proved sufficient to eliminate erroneous self-perceptions. Biased attribution processes that might underlie perserverance phenomena and the implications of the present data for the ethical conduct of deception research are discussed.",
"title": ""
},
{
"docid": "e65ec1afef79e5c85b6fa2009c7ecd95",
"text": "Popular domain adaptation (DA) techniques learn a classifier for the target domain by sampling relevant data points from the source and combining it with the target data. We present a Support Vector Machine (SVM) based supervised DA technique, where the similarity between source and target domains is modeled as the similarity between their SVM decision boundaries. We couple the source and target SVMs and reduce the model to a standard single SVM. We test the Coupled-SVM on multiple datasets and compare our results with other popular SVM based DA approaches.",
"title": ""
},
{
"docid": "f11aa75465f087bcd059e2af1dc963d4",
"text": "The process of translation is ambiguous, in that there are typically many valid translations for a given sentence. This gives rise to significant variation in parallel corpora, however, most current models of machine translation do not account for this variation, instead treating the problem as a deterministic process. To this end, we present a deep generative model of machine translation which incorporates a chain of latent variables, in order to account for local lexical and syntactic variation in parallel corpora. We provide an indepth analysis of the pitfalls encountered in variational inference for training deep generative models. Experiments on several different language pairs demonstrate that the model consistently improves over strong baselines.",
"title": ""
},
{
"docid": "0f421a4ee46535f01390e04fa24b5502",
"text": "Wireless sensor networks (WSNs) are autonomous networks of spatially distributed sensor nodes that are capable of wirelessly communicating with each other in a multihop fashion. Among different metrics, network lifetime and utility, and energy consumption in terms of carbon footprint are key parameters that determine the performance of such a network and entail a sophisticated design at different abstraction levels. In this paper, wireless energy harvesting (WEH), wake-up radio (WUR) scheme, and error control coding (ECC) are investigated as enabling solutions to enhance the performance of WSNs while reducing its carbon footprint. Specifically, a utility-lifetime maximization problem incorporating WEH, WUR, and ECC, is formulated and solved using distributed dual subgradient algorithm based on the Lagrange multiplier method. Discussion and verification through simulation results show how the proposed solutions improve network utility, prolong the lifetime, and pave the way for a greener WSN by reducing its carbon footprint.",
"title": ""
},
{
"docid": "3a1419469eb2c04dee78e3b7d46d1a18",
"text": "c∈T ∑ u∈Sc log fu,c(X), where Sc – set of locations, which were identified as a class c ∈ C by the weak localization procedure. 2 Expansion principle • Expansion loss incorporates a prior knowledge about object sizes. • The characteristic size of any class c is controlled by a decay parameter dc. • We use decay d+ for all classes, which present in the image, and decay d− for all classes, which are absent. I = {i1, . . . , in} defines descending order for class scores: fi1,c(x) ≥ · · · ≥ fin,c(x) Gc(f(X);dc) = 1 Z(dc) n ∑",
"title": ""
},
{
"docid": "277919545c003c0c2a266ace0d70de03",
"text": "Two single-pole, double-throw transmit/receive switches were designed and fabricated with different substrate resistances using a 0.18-/spl mu/m p/sup $/substrate CMOS process. The switch with low substrate resistances exhibits 0.8-dB insertion loss and 17-dBm P/sub 1dB/ at 5.825 GHz, whereas the switch with high substrate resistances has 1-dB insertion loss and 18-dBm P/sub 1dB/. These results suggest that the optimal insertion loss can be achieved with low substrate resistances and 5.8-GHz T/R switches with excellent insertion loss and reasonable power handling capability can be implemented in a 0.18-/spl mu/m CMOS process.",
"title": ""
},
{
"docid": "754108343e8a57852d4a54abf45f5c43",
"text": "Precision measurement of dc high current is usually realized by second harmonic fluxgate current transducers, but the complicated modulation and demodulation circuits with high cost have been limiting their applications. This paper presents a low-cost transducer that can substitute the traditional ones for precision measurement of high current. The new transducer, based on the principle of zero-flux, is the combination of an improved self-oscillating fluxgate sensor with a magnetic integrator in a common feedback loop. The transfer function of the zero-flux control strategy of the transducer is established to verify the validity of the qualitative analysis on operating principle. Origins and major influence factors of the modulation ripple, respectively, caused by the useful signal extraction circuit and the transformer effect are studied, and related suppression methods are proposed, which can be considered as one of the major technical modifications for performance improvement. As verification, a prototype is realized, and several key specifications, including the linearity, small-signal bandwidth, modulation ripple, ratio stability under full load, power-on repeatability, magnetic error, and temperature coefficient, are characterized. Measurement results show that the new transducer with the maximum output ripple 0.3 μA can measure dc current up to ±600 A with a relative accuracy 1.3 ppm in the full scale, and it also can measure ac current and has a -3 dB bandwidth greater than 100 kHz.",
"title": ""
}
] | scidocsrr |
eb1a80981b9b86b523dda13cfc2d674d | Japanese Society for Cancer of the Colon and Rectum (JSCCR) Guidelines 2014 for treatment of colorectal cancer | [
{
"docid": "b966af7f15e104865944ac44fad23afc",
"text": "Five cases are described where minute foci of adenocarcinoma have been demonstrated in the mesorectum several centimetres distal to the apparent lower edge of a rectal cancer. In 2 of these there was no other evidence of lymphatic spread of the tumour. In orthodox anterior resection much of this tissue remains in the pelvis, and its is suggested that these foci might lead to suture-line or pelvic recurrence. Total excision of the mesorectum has, therefore, been carried out as a part of over 100 consecutive anterior resections. Fifty of these, which were classified as 'curative' or 'conceivably curative' operations, have now been followed for over 2 years with no pelvic or staple-line recurrence.",
"title": ""
},
{
"docid": "bc4a72d96daf03f861b187fa73f57ff6",
"text": "BACKGROUND\nShort-term preoperative radiotherapy and total mesorectal excision have each been shown to improve local control of disease in patients with resectable rectal cancer. We conducted a multicenter, randomized trial to determine whether the addition of preoperative radiotherapy increases the benefit of total mesorectal excision.\n\n\nMETHODS\nWe randomly assigned 1861 patients with resectable rectal cancer either to preoperative radiotherapy (5 Gy on each of five days) followed by total mesorectal excision (924 patients) or to total mesorectal excision alone (937 patients). The trial was conducted with the use of standardization and quality-control measures to ensure the consistency of the radiotherapy, surgery, and pathological techniques.\n\n\nRESULTS\nOf the 1861 patients randomly assigned to one of the two treatment groups, 1805 were eligible to participate. The overall rate of survival at two years among the eligible patients was 82.0 percent in the group assigned to both radiotherapy and surgery and 81.8 percent in the group assigned to surgery alone (P=0.84). Among the 1748 patients who underwent a macroscopically complete local resection, the rate of local recurrence at two years was 5.3 percent. The rate of local recurrence at two years was 2.4 percent in the radiotherapy-plus-surgery group and 8.2 percent in the surgery-only group (P<0.001).\n\n\nCONCLUSIONS\nShort-term preoperative radiotherapy reduces the risk of local recurrence in patients with rectal cancer who undergo a standardized total mesorectal excision.",
"title": ""
}
] | [
{
"docid": "29c8c8abf86b2d7358a1cd70751f3f93",
"text": "Data domain description concerns the characterization of a data set. A good description covers all target data but includes no superfluous space. The boundary of a dataset can be used to detect novel data or outliers. We will present the Support Vector Data Description (SVDD) which is inspired by the Support Vector Classifier. It obtains a spherically shaped boundary around a dataset and analogous to the Support Vector Classifier it can be made flexible by using other kernel functions. The method is made robust against outliers in the training set and is capable of tightening the description by using negative examples. We show characteristics of the Support Vector Data Descriptions using artificial and real data.",
"title": ""
},
{
"docid": "c4183c8b08da8d502d84a650d804cac8",
"text": "A three-phase current source gate turn-off (GTO) thyristor rectifier is described with a high power factor, low line current distortion, and a simple main circuit. It adopts pulse-width modulation (PWM) control techniques obtained by analyzing the PWM patterns of three-phase current source rectifiers/inverters, and it uses a method of generating such patterns. In addition, by using an optimum set-up of the circuit constants, the GTO switching frequency is reduced to 500 Hz. This rectifier is suitable for large power conversion, because it can reduce GTO switching loss and its snubber loss.<<ETX>>",
"title": ""
},
{
"docid": "381d42fca0f242c10d115113c7a33c67",
"text": "Abstract. We present a detailed workload characterization of a multi-tiered system that hosts an e-commerce site. Using the TPC-W workload and via experimental measurements, we illustrate how workload characteristics affect system behavior and operation, focusing on the statistical properties of dynamic page generation. This analysis allows to identify bottlenecks and the system conditions under which there is degradation in performance. Consistent with the literature, we find that the distribution of the dynamic page generation is heavy-tailed, which is caused by the interaction of the database server with the storage system. Furthermore, by examining the queuing behavior at the database server, we present experimental evidence of the existence of statistical correlation in the distribution of dynamic page generation times, especially under high load conditions. We couple this observation with the existence (and switching) of bottlenecks in the system.",
"title": ""
},
{
"docid": "dcc10f93667d23ed3af321086114f261",
"text": "Background: Silver nanoparticles (SNPs) are used extensively in areas such as medicine, catalysis, electronics, environmental science, and biotechnology. Therefore, facile synthesis of SNPs from an eco-friendly, inexpensive source is a prerequisite. In the present study, fabrication of SNPs from the leaf extract of Butea monosperma (Flame of Forest) has been performed. SNPs were synthesized from 1% leaf extract solution and characterized by ultraviolet-visible (UV-vis) spectroscopy and transmission electron microscopy (TEM). The mechanism of SNP formation was studied by Fourier transform infrared (FTIR), and anti-algal properties of SNPs on selected toxic cyanobacteria were evaluated. Results: TEM analysis indicated that size distribution of SNPs was under 5 to 30 nm. FTIR analysis indicated the role of amide I and II linkages present in protein in the reduction of silver ions. SNPs showed potent anti-algal properties on two cyanobacteria, namely, Anabaena spp. and Cylindrospermum spp. At a concentration of 800 μg/ml of SNPs, maximum anti-algal activity was observed in both cyanobacteria. Conclusions: This study clearly demonstrates that small-sized, stable SNPs can be synthesized from the leaf extract of B. monosperma. SNPs can be effectively employed for removal of toxic cyanobacteria.",
"title": ""
},
{
"docid": "9d33565dbd5148730094a165bb2e968f",
"text": "The demand for greater battery life in low-power consumer electronics and implantable medical devices presents a need for improved energy efficiency in the management of small rechargeable cells. This paper describes an ultra-compact analog lithium-ion (Li-ion) battery charger with high energy efficiency. The charger presented here utilizes the tanh basis function of a subthreshold operational transconductance amplifier to smoothly transition between constant-current and constant-voltage charging regimes without the need for additional area- and power-consuming control circuitry. Current-domain circuitry for end-of-charge detection negates the need for precision-sense resistors in either the charging path or control loop. We show theoretically and experimentally that the low-frequency pole-zero nature of most battery impedances leads to inherent stability of the analog control loop. The circuit was fabricated in an AMI 0.5-μm complementary metal-oxide semiconductor process, and achieves 89.7% average power efficiency and an end voltage accuracy of 99.9% relative to the desired target 4.2 V, while consuming 0.16 mm2 of chip area. To date and to the best of our knowledge, this design represents the most area-efficient and most energy-efficient battery charger circuit reported in the literature.",
"title": ""
},
{
"docid": "ba2cc10384c8be27ca0251c574998a1b",
"text": "As the extension of Distributed Denial-of-Service (DDoS) attacks to application layer in recent years, researchers pay much interest in these new variants due to a low-volume and intermittent pattern with a higher level of stealthiness, invaliding the state-of-the-art DDoS detection/defense mechanisms. We describe a new type of low-volume application layer DDoS attack--Tail Attacks on Web Applications. Such attack exploits a newly identified system vulnerability of n-tier web applications (millibottlenecks with sub-second duration and resource contention with strong dependencies among distributed nodes) with the goal of causing the long-tail latency problem of the target web application (e.g., 95th percentile response time > 1 second) and damaging the long-term business of the service provider, while all the system resources are far from saturation, making it difficult to trace the cause of performance degradation.\n We present a modified queueing network model to analyze the impact of our attacks in n-tier architecture systems, and numerically solve the optimal attack parameters. We adopt a feedback control-theoretic (e.g., Kalman filter) framework that allows attackers to fit the dynamics of background requests or system state by dynamically adjusting attack parameters. To evaluate the practicality of such attacks, we conduct extensive validation through not only analytical, numerical, and simulation results but also real cloud production setting experiments via a representative benchmark website equipped with state-of-the-art DDoS defense tools. We further proposed a solution to detect and defense the proposed attacks, involving three stages: fine-grained monitoring, identifying bursts, and blocking bots.",
"title": ""
},
{
"docid": "bf7b3cdb178fd1969257f56c0770b30b",
"text": "Relation Extraction is an important subtask of Information Extraction which has the potential of employing deep learning (DL) models with the creation of large datasets using distant supervision. In this review, we compare the contributions and pitfalls of the various DL models that have been used for the task, to help guide the path ahead.",
"title": ""
},
{
"docid": "e50d156bde3479c27119231073705f70",
"text": "The economic theory of the consumer is a combination of positive and normative theories. Since it is based on a rational maximizing model it describes how consumers should choose, but it is alleged to also describe how they do choose. This paper argues that in certain well-defined situations many consumers act in a manner that is inconsistent with economic theory. In these situations economic theory will make systematic errors in predicting behavior. Kahneman and Tversky's prospect theory is proposed as the basis for an alternative descriptive theory. Topics discussed are: underweighting of opportunity costs, failure to ignore sunk costs, search behavior, choosing not to choose and regret, and precommitment and self-control.",
"title": ""
},
{
"docid": "112f7444f0881bf940d056a96c6f5ee3",
"text": "This paper describes our approach on “Information Extraction from Microblogs Posted during Disasters”as an attempt in the shared task of the Microblog Track at Forum for Information Retrieval Evaluation (FIRE) 2016 [2]. Our method uses vector space word embeddings to extract information from microblogs (tweets) related to disaster scenarios, and can be replicated across various domains. The system, which shows encouraging performance, was evaluated on the Twitter dataset provided by the FIRE 2016 shared task. CCS Concepts •Computing methodologies→Natural language processing; Information extraction;",
"title": ""
},
{
"docid": "a9242c3fca5a8ffdf0e03776b8165074",
"text": "This paper presents inexpensive computer vision techniques allowing to measure the texture characteristics of woven fabric, such as weave repeat and yarn counts, and the surface roughness. First, we discuss the automatic recognition of weave pattern and the accurate measurement of yarn counts by analyzing fabric sample images. We propose a surface roughness indicator FDFFT, which is the 3-D surface fractal dimension measurement calculated from the 2-D fast Fourier transform of high-resolution 3-D surface scan. The proposed weave pattern recognition method was validated by using computer-simulated woven samples and real woven fabric images. All weave patterns of the tested fabric samples were successfully recognized, and computed yarn counts were consistent with the manual counts. The rotation invariance and scale invariance of FDFFT were validated with fractal Brownian images. Moreover, to evaluate the correctness of FDFFT, we provide a method of calculating standard roughness parameters from the 3-D fabric surface. According to the test results, we demonstrated that FDFFT is a fast and reliable parameter for fabric roughness measurement based on 3-D surface data.",
"title": ""
},
{
"docid": "237a88ea092d56c6511bb84604e6a7c7",
"text": "A simple, low-cost, and compact printed dual-band fork-shaped monopole antenna for Bluetooth and ultrawideband (UWB) applications is proposed. Dual-band operation covering 2.4-2.484 GHz (Bluetooth) and 3.1-10.6 GHz (UWB) frequency bands are obtained by using a fork-shaped radiating patch and a rectangular ground patch. The proposed antenna is fed by a 50-Ω microstrip line and fabricated on a low-cost FR4 substrate having dimensions 42 (<i>L</i><sub>sub</sub>) × 24 (<i>W</i><sub>sub</sub>) × 1.6 (<i>H</i>) mm<sup>3</sup>. The antenna structure is fabricated and tested. Measured <i>S</i><sub>11</sub> is ≤ -10 dB over 2.3-2.5 and 3.1-12 GHz. The antenna shows acceptable gain flatness with nearly omnidirectional radiation patterns over both Bluetooth and UWB bands.",
"title": ""
},
{
"docid": "5350ffea7a4187f0df11fd71562aba43",
"text": "The presence of buried landmines is a serious threat in many areas around the World. Despite various techniques have been proposed in the literature to detect and recognize buried objects, automatic and easy to use systems providing accurate performance are still under research. Given the incredible results achieved by deep learning in many detection tasks, in this paper we propose a pipeline for buried landmine detection based on convolutional neural networks (CNNs) applied to ground-penetrating radar (GPR) images. The proposed algorithm is capable of recognizing whether a B-scan profile obtained from GPR acquisitions contains traces of buried mines. Validation of the presented system is carried out on real GPR acquisitions, albeit system training can be performed simply relying on synthetically generated data. Results show that it is possible to reach 95% of detection accuracy without training in real acquisition of landmine profiles.",
"title": ""
},
{
"docid": "7d9162b079a155f48688a1d70af5482a",
"text": "Determination of microgram quantities of protein in the Bradford Coomassie brilliant blue assay is accomplished by measurement of absorbance at 590 nm. However, as intrinsic nonlinearity compromises the sensitivity and accuracy of this method. It is shown that under standard assay conditions, the ratio of the absorbances, 590 nm over 450 nm, is strictly linear with protein concentration. This simple procedure increases the accuracy and improves the sensitivity of the assay about 10-fold, permitting quantitation down to 50 ng of bovine serum albumin. Furthermore, protein assay in presence of up to 35-fold weight excess of sodium dodecyl sulfate (detergent) over bovine serum albumin (protein) can be performed. A linear equation that perfectly fits the experimental data is provided on the basis of mass action and Beer's law.",
"title": ""
},
{
"docid": "867c8c0286c0fed4779f550f7483770d",
"text": "Numerous studies report that standard volatility models have low explanatory power, leading some researchers to question whether these models have economic value. We examine this question by using conditional mean-variance analysis to assess the value of volatility timing to short-horizon investors. We nd that the volatility timing strategies outperform the unconditionally e cient static portfolios that have the same target expected return and volatility. This nding is robust to estimation risk and transaction costs.",
"title": ""
},
{
"docid": "348c62670a729da42654f0cf685bba53",
"text": "The networks of intelligent building are usually consist of a great number of smart devices. Since many smart devices only support on-site configuration and upgrade, and communication between devices could be observed and even altered by attackers, efficiency and security are two key concerns in maintaining and managing the devices used in intelligent building networks. In this paper, the authors apply the technology of software defined networking to satisfy the requirement for efficiency in intelligent building networks. More specific, a protocol stack in smart devices that support OpenFlow is designed. In addition, the authors designed the lightweight security mechanism with two foundation protocols and a full protocol that uses the foundation protocols as example. Performance and session key establishment for the security mechanism are also discussed.",
"title": ""
},
{
"docid": "1a99b71b6c3c33d97c235a4d72013034",
"text": "Crowdfunding systems are social media websites that allow people to donate small amounts of money that add up to fund valuable larger projects. These websites are structured around projects: finite campaigns with welldefined goals, end dates, and completion criteria. We use a dataset from an existing crowdfunding website — the school charity Donors Choose — to understand the value of completing projects. We find that completing a project is an important act that leads to larger donations (over twice as large), greater likelihood of returning to donate again, and few projects that expire close but not complete. A conservative estimate suggests that this completion bias led to over $15 million in increased donations to Donors Choose, representing approximately 16% of the total donations for the period under study. This bias suggests that structuring many types of collaborative work as a series of projects might increase contribution significantly. Many social media creators find it rather difficult to motivate users to actively participate and contribute their time, energy, or money to make a site valuable to others. The value in social media largely derives from interactions between and among people who are working together to achieve common goals. To encourage people to participate and contribute, social media creators regularly look for different ways of structuring participation. Some use a blog-type format, such as Facebook, Twitter, or Tumblr. Some use a collaborative document format like Wikipedia. And some use a project-based format. A project is a well-defined set of tasks that needs to be accomplished. Projects usually have a well-defined end goal — something that needs to be accomplished for the project to be considered a success — and an end date — a day by which the project needs to be completed. 
Much work in society is structured around projects; for example, Hollywood makes movies by organizing each movie’s production as a project, hiring a new crew for each movie. Construction companies organize their work as a sequence of projects. And projects are common in knowledge-work based businesses (?). Copyright c © 2013, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Another important place we see project-based organization is in crowdfunding websites. Crowdfunding is a relatively new phenomenon that merges modern social web technologies with project-based fundraising. It is a new form of social media that publicizes projects that need money, and allows the crowd to each make a small contribution toward the larger project. By aggregating many small donations, crowdfunding websites can fund large and interesting projects of all kinds. Kickstarter, IndieGoGo, Spot.Us, and Donors Choose are examples of crowdfunding websites targeted at specific types of projects (creative, entrepreneurial, journalism, and classroom projects respectively). Crowdfunding is becoming an increasingly popular tool for enabling project-based work. Kickstarter, for example, has raised over $400 million for over 35,000 creative projects, and Donors Choose has raised over $90 million for over 200,000 classroom projects. Additionally, crowdfunding websites represent potential new business models for a number of industries, including some struggling to find viable revenue streams: Sellaband has proven successful in helping musicians fund the creation and distribution of their music; and Spot.Us enables journalists to fund and publish investigative news. In this paper, I seek to understand why crowdfunding systems that are organized around projects are successful. 
Using a dataset from Donors Choose, a crowdfunding charity that funds classroom projects for K–12 school teachers, I find that completing a project is a powerful motivator that helps projects succeed in the presence of a crowd: donations that complete a project are over twice as large as normal donations. People who make these donations are more likely to return and donate in the future, and their future donations are larger. And few projects get close to completion but fail. Together, these results suggest that completing the funding for a project is an important act for the crowd, and structuring the fundraising around completable projects helps enable success. This also has implications for other types of collaborative technologies. Background and Related Ideas",
"title": ""
},
{
"docid": "26052ad31f5ccf55398d6fe3b9850674",
"text": "An electroneurographic study performed on the peripheral nerves of 25 patients with severe cirrhosis following viral hepatitis showed slight slowing (P > 0.05) of motor conduction velocity (CV) and significant diminution (P < 0.001) of sensory CV and mixed sensorimotor-evoked potentials, associated with a significant decrease in the amplitude of sensory evoked potentials. The slowing was about equal in the distal (digital) and in the proximal segments of the same nerve. A mixed axonal degeneration and segmental demyelination is presumed to explain these findings. The CV measurements proved helpful for an early diagnosis of hepatic polyneuropathy showing subjective symptoms in the subclinical stage. Elektroneurographische Untersuchungen der peripheren Nerven bei 25 Patienten mit postviralen Leberzirrhosen ergaben folgendes: geringe Verminderung (P > 0.05) der motorischen Leitgeschwindigkeit (LG) und eine signifikant verlangsamte LG in sensiblen Fasern (P < 0.001), in beiden proximalen und distalen Fasern. Es wurde in den gemischten evozierten Potentialen eine Verlangsamung der LG festgestellt, zwischen den Werten der motorischen und sensiblen Fasern. Gleichzeitig wurde eine Minderung der Amplitude des NAP beobachtet. Diese Befunde sprechen für eine axonale Degeneration und eine Demyelinisierung in den meisten untersuchten peripheren Nerven. Elektroneurographische Untersuchungen erlaubten den funktionellen Zustand des peripheren Nervens abzuschätzen und bestimmte Veränderungen bereits im Initialstadium der Erkrankung aufzudecken, wenn der Patient noch keine klinischen Zeichen einer peripheren Neuropathie bietet.",
"title": ""
},
{
"docid": "709aa1bc4ace514e46f7edbb07fb03a9",
"text": "Empirical scoring functions based on either molecular force fields or cheminformatics descriptors are widely used, in conjunction with molecular docking, during the early stages of drug discovery to predict potency and binding affinity of a drug-like molecule to a given target. These models require expert-level knowledge of physical chemistry and biology to be encoded as hand-tuned parameters or features rather than allowing the underlying model to select features in a data-driven procedure. Here, we develop a general 3-dimensional spatial convolution operation for learning atomic-level chemical interactions directly from atomic coordinates and demonstrate its application to structure-based bioactivity prediction. The atomic convolutional neural network is trained to predict the experimentally determined binding affinity of a protein-ligand complex by direct calculation of the energy associated with the complex, protein, and ligand given the crystal structure of the binding pose. Non-covalent interactions present in the complex that are absent in the protein-ligand sub-structures are identified and the model learns the interaction strength associated with these features. We test our model by predicting the binding free energy of a subset of protein-ligand complexes found in the PDBBind dataset and compare with state-of-the-art cheminformatics and machine learning-based approaches. We find that all methods achieve experimental accuracy (less than 1 kcal/mol mean absolute error) and that atomic convolutional networks either outperform or perform competitively with the cheminformatics based methods. Unlike all previous protein-ligand prediction systems, atomic convolutional networks are end-to-end and fully-differentiable. They represent a new data-driven, physics-based deep learning model paradigm that offers a strong foundation for future improvements in structure-based bioactivity prediction.",
"title": ""
},
{
"docid": "8eb0f822b4e8288a6b78abf0bf3aecbb",
"text": "Cloud computing enables access to the widespread services and resources in cloud datacenters for mitigating resource limitations in low-potential client devices. Computational cloud is an attractive platform for computational offloading due to the attributes of scalability and availability of resources. Therefore, mobile cloud computing (MCC) leverages the application processing services of computational clouds for enabling computational-intensive and ubiquitous mobile applications on smart mobile devices (SMDs). Computational offloading frameworks focus on offloading intensive mobile applications at different granularity levels which involve resource-intensive mechanism of application profiling and partitioning at runtime. As a result, the energy consumption cost (ECC) and turnaround time of the application is increased. This paper proposes an active service migration (ASM) framework for computational offloading to cloud datacenters, which employs lightweight procedure for the deployment of runtime distributed platform. The proposed framework employs coarse granularity level and simple developmental and deployment procedures for computational offloading in MCC. ASM is evaluated by benchmarking prototype application on the Android devices in the real MCC environment. It is found that the turnaround time of the application reduces up to 45 % and ECC of the application reduces up to 33 % in ASM-based computational offloading as compared to traditional offloading techniques which shows the lightweight nature of the proposed framework for computational offloading.",
"title": ""
},
{
"docid": "9e6bfc7b5cc87f687a699c62da013083",
"text": "In order to establish low-cost and strongly-immersive desktop virtual experiment system, a solution based on Kinect and Unity3D engine technology was herein proposed, with a view to applying Kinect gesture recognition and triggering more spontaneous human-computer interactions in three-dimensional virtual environment. A kind of algorithm tailored to the detection of concave-convex points of fingers is put forward to identify various gestures and interaction semantics. In the context of Unity3D, Finite-State Machine (FSM) programming was applied in intelligent management for experimental logic tasks. A “Virtual Experiment System for Electrician Training” was designed and put into practice by these methods. The applications of “Lighting Circuit” module prove that these methods can be satisfyingly helpful to complete virtual experimental tasks and improve user experience. Compared with traditional WIMP interaction, Kinect somatosensory interaction is combined with Unity3D so that three-dimensional virtual system with strong immersion can be established.",
"title": ""
}
] | scidocsrr |
c14512660c09c02d1faa4b6688ef42f5 | Diverse synaptic plasticity mechanisms orchestrated to form and retrieve memories in spiking neural networks | [
{
"docid": "ffeb8ab86966a7ac9b8c66bdec7bfc32",
"text": "Electrophysiological connectivity patterns in cortex often have a few strong connections, which are sometimes bidirectional, among a lot of weak connections. To explain these connectivity patterns, we created a model of spike timing–dependent plasticity (STDP) in which synaptic changes depend on presynaptic spike arrival and the postsynaptic membrane potential, filtered with two different time constants. Our model describes several nonlinear effects that are observed in STDP experiments, as well as the voltage dependence of plasticity. We found that, in a simulated recurrent network of spiking neurons, our plasticity rule led not only to development of localized receptive fields but also to connectivity patterns that reflect the neural code. For temporal coding procedures with spatio-temporal input correlations, strong connections were predominantly unidirectional, whereas they were bidirectional under rate-coded input with spatial correlations only. Thus, variable connectivity patterns in the brain could reflect different coding principles across brain areas; moreover, our simulations suggested that plasticity is fast.",
"title": ""
}
] | [
{
"docid": "675795d2799838f72898afcfcbd77370",
"text": "Data-driven techniques for interactive narrative generation are the subject of growing interest. Reinforcement learning (RL) offers significant potential for devising data-driven interactive narrative generators that tailor players’ story experiences by inducing policies from player interaction logs. A key open question in RL-based interactive narrative generation is how to model complex player interaction patterns to learn effective policies. In this paper we present a deep RL-based interactive narrative generation framework that leverages synthetic data produced by a bipartite simulated player model. Specifically, the framework involves training a set of Q-networks to control adaptable narrative event sequences with long short-term memory network-based simulated players. We investigate the deep RL framework’s performance with an educational interactive narrative, CRYSTAL ISLAND. Results suggest that the deep RL-based narrative generation framework yields effective personalized interactive narratives.",
"title": ""
},
{
"docid": "537cf2257d1ca9ef49f023dbdc109e0b",
"text": "0950-7051/$ see front matter 2010 Elsevier B.V. A doi:10.1016/j.knosys.2010.07.006 * Corresponding author. Tel.: +886 3 5712121x573 E-mail addresses: [email protected] (Y.-S (L.-I. Tong). The autoregressive integrated moving average (ARIMA), which is a conventional statistical method, is employed in many fields to construct models for forecasting time series. Although ARIMA can be adopted to obtain a highly accurate linear forecasting model, it cannot accurately forecast nonlinear time series. Artificial neural network (ANN) can be utilized to construct more accurate forecasting model than ARIMA for nonlinear time series, but explaining the meaning of the hidden layers of ANN is difficult and, moreover, it does not yield a mathematical equation. This study proposes a hybrid forecasting model for nonlinear time series by combining ARIMA with genetic programming (GP) to improve upon both the ANN and the ARIMA forecasting models. Finally, some real data sets are adopted to demonstrate the effectiveness of the proposed forecasting model. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "54c9c1323a03f0ef3af5eea204fd51ce",
"text": "The fabrication and characterization of magnetic sensors consisting of double magnetic layers are described. Both thin film based material and wire based materials were used for the double layers. The sensor elements were fabricated by patterning NiFe/CoFe multilayer thin films. This thin film based sensor exhibited a constant output voltage per excitation magnetic field at frequencies down to 0.1 Hz. The magnetic sensor using a twisted FeCoV wire, the conventional material for the Wiegand effect, had the disadvantage of an asymmetric output voltage generated by an alternating magnetic field. It was found that the magnetic wire whose ends were both slightly etched exhibited a symmetric output voltage.",
"title": ""
},
{
"docid": "f917a32b3bfed48dfe14c05d248ef53f",
"text": "Recently Adleman has shown that a small traveling salesman problem can be solved by molecular operations. In this paper we show how the same principles can be applied to breaking the Data Encryption Standard (DES). We describe in detail a library of operations which are useful when working with a molecular computer. We estimate that given one arbitrary (plain-text, cipher-text) pair, one can recover the DES key in about 4 months of work. Furthermore, we show that under chosen plain-text attack it is possible to recover the DES key in one day using some preprocessing. Our method can be generalized to break any cryptosystem which uses keys of length less than 64 bits.",
"title": ""
},
{
"docid": "1315349a48c402398c7c4164c92e95bf",
"text": "Over the past years, the computing industry has started various initiatives announced to increase computer security by means of new hardware architectures. The most notable effort is the Trusted Computing Group (TCG) and the Next-Generation Secure Computing Base (NGSCB). This technology offers useful new functionalities as the possibility to verify the integrity of a platform (attestation) or binding quantities on a specific platform (sealing).In this paper, we point out the deficiencies of the attestation and sealing functionalities proposed by the existing specification of the TCG: we show that these mechanisms can be misused to discriminate certain platforms, i.e., their operating systems and consequently the corresponding vendors. A particular problem in this context is that of managing the multitude of possible configurations. Moreover, we highlight other shortcomings related to the attestation, namely system updates and backup. Clearly, the consequences caused by these problems lead to an unsatisfactory situation both for the private and business branch, and to an unbalanced market when such platforms are in wide use.To overcome these problems generally, we propose a completely new approach: the attestation of a platform should not depend on the specific software or/and hardware (configuration) as it is today's practice but only on the \"properties\" that the platform offers. Thus, a property-based attestation should only verify whether these properties are sufficient to fulfill certain (security) requirements of the party who asks for attestation. We propose and discuss a variety of solutions based on the existing Trusted Computing (TC) functionality. We also demonstrate, how a property-based attestation protocol can be realized based on the existing TC hardware such as a Trusted Platform Module (TPM).",
"title": ""
},
{
"docid": "70bce8834a23bc84bea7804c58bcdefe",
"text": "This study presents novel coplanar waveguide (CPW) power splitters comprising a CPW T-junction with outputs attached to phase-adjusting circuits, i.e., the composite right/left-handed (CRLH) CPW and the conventional CPW, to achieve a constant phase difference with arbitrary value over a wide bandwidth. To demonstrate the proposed technique, a 180/spl deg/ CRLH CPW power splitter with a phase error of less than 10/spl deg/ and a magnitude difference of below 1.5 dB within 2.4 to 5.22 GHz is experimentally demonstrated. Compared with the conventional 180/spl deg/ delay-line power splitter, the proposed structure possesses not only superior phase and magnitude performances but also a 37% size reduction. The equivalent circuit of the CRLH CPW, which represents the left-handed (LH), right-handed (RH), and lossy characteristics, is constructed and the results obtained are in good agreement with the full-wave simulation and measurement. Applications involving the wideband coplanar waveguide-to-coplanar stripline (CPW-to-CPS) transition and the tapered loop antenna are presented to stress the practicality of the 180/spl deg/ CRLH CPW power splitter. The 3-dB insertion loss bandwidth is measured as 98% for the case of a back-to-back CPW-to-CPS transition. The tapered loop antenna fed by the proposed transition achieves a measured 10-dB return loss bandwidth of 114%, and shows similar radiation patterns and 6-9 dBi antenna gain in its operating band.",
"title": ""
},
{
"docid": "d318f73ccfd1069acbf7e95596fb1028",
"text": "In this paper a novel application of multimodal emotion recognition algorithms in software engineering is described. Several application scenarios are proposed concerning program usability testing and software process improvement. Also a set of emotional states relevant in that application area is identified. The multimodal emotion recognition method that integrates video and depth channels, physiological signals and input devices usage patterns is proposed and some preliminary results on learning set creation are described.",
"title": ""
},
{
"docid": "5aa20cb4100085a12d02c6789ad44097",
"text": "The rapid progress in nanoelectronics showed an urgent need for microwave measurement of impedances extremely different from the 50Ω reference impedance of measurement instruments. In commonly used methods input impedance or admittance of a device under test (DUT) is derived from measured value of its reflection coefficient causing serious accuracy problems for very high and very low impedances due to insufficient sensitivity of the reflection coefficient to impedance of the DUT. This paper brings theoretical description and experimental verification of a method developed especially for measurement of extreme impedances. The method can significantly improve measurement sensitivity and reduce errors caused by the VNA. It is based on subtraction (or addition) of a reference reflection coefficient and the reflection coefficient of the DUT by a passive network, amplifying the resulting signal by an amplifier and measuring the amplified signal as a transmission coefficient by a common vector network analyzer (VNA). A suitable calibration technique is also presented.",
"title": ""
},
{
"docid": "cf2e23cddb72b02d1cca83b4c3bf17a8",
"text": "This article seeks to reconceptualize the relationship between flexibility and efficiency. Much organization theory argues that efficiency requires bureaucracy, that bureaucracy impedes flexibility, and that organizations therefore confront a tradeoff between efficiency and flexibility. Some researchers have challenged this line of reasoning, arguing that organizations can shift the efficiency/flexibility tradeoff to attain both superior efficiency and superior flexibility. Others have pointed out numerous obstacles to successfully shifting the tradeoff. Seeking to advance our understanding of these obstacles and how they might be overcome, we analyze an auto assembly plant that appears to be far above average industry performance in both efficiency and flexibility. NUMMI, a Toyota subsidiary located in Fremont, California, relied on a highly bureaucratic organization to achieve its high efficiency. Analyzing two recent major model changes, we find that NUMMI used four mechanisms to support its exceptional flexibility/efficiency combination. First, metaroutines (routines for changing other routines) facilitated the efficient performance of nonroutine tasks. Second, both workers and suppliers contributed to nonroutine tasks while they worked in routine production. Third, routine and nonroutine tasks were separated temporally, and workers switched sequentially between them. Finally, novel forms of organizational partitioning enabled differentiated subunits to work in parallel on routine and nonroutine tasks. NUMMI’s success with these four mechanisms depended on several features of the broader organizational context, most notably training, trust, and leadership. (Flexibility; Bureaucracy; Tradeoffs; Routines; Metaroutines; Ambidexterity; Switching; Partitioning; Trust) Introduction The postulate of a tradeoff between efficiency and flexibility is one of the more enduring ideas in organizational theory. Thompson (1967, p. 
15) described it as a central “paradox of administration.” Managers must choose between organization designs suited to routine, repetitive tasks and those suited to nonroutine, innovative tasks. However, as competitive rivalry intensifies, a growing number of firms are trying to improve simultaneously in efficiency- and flexibility-related dimensions (de Meyer et al. 1989, Volberda 1996, Organization Science 1996). How can firms shift the terms of the efficiency-flexibility tradeoff? To explore how firms can create simultaneously superior efficiency and superior flexibility, we examine an exceptional auto assembly plant, NUMMI, a joint venture of Toyota and GM whose day-to-day operations were under Toyota control. Like other Japanese auto transplants in the U.S., NUMMI far outpaced its Big Three counterparts simultaneously in efficiency and quality and in model change flexibility (Womack et al. 1990, Business Week 1994). In the next section we set the theoretical stage by reviewing prior research on the efficiency/flexibility tradeoff. Prior research suggests four mechanisms by which organizations can shift the tradeoff as well as some potentially serious impediments to each mechanism. We then describe our research methods and the NUMMI organization. The following sections first outline in summary form the results of this investigation, then provide the supporting evidence in our analysis of two major model changeovers at NUMMI and how they differed from traditional U.S. Big Three practice.
A discussion section identifies some conditions underlying NUMMI’s success in shifting the tradeoff and in overcoming the potential impediments to the four trade-off shifting mechanisms. Flexibility Versus Efficiency? There are many kinds of flexibility and indeed a sizable literature devoted to competing typologies of the various kinds of flexibility (see overview by Sethi and Sethi 1990). However, from an organizational point of view, all forms of flexibility present a common challenge: efficiency requires a bureaucratic form of organization with high levels of standardization, formalization, specialization, hierarchy, and staffs; but these features of bureaucracy impede the fluid process of mutual adjustment required for flexibility; and organizations therefore confront a tradeoff between efficiency and flexibility (Knott 1996, Kurke 1988). Contingency theory argues that organizations will be more effective if they are designed to fit the nature of their primary task. Specifically, organizations should adopt a mechanistic form if their task is simple and stable and their goal is efficiency, and they should adopt an organic form if their task is complex and changing and their goal is therefore flexibility (Burns and Stalker 1961). Organizational theory presents a string of contrasts reflecting this mechanistic/organic polarity: machine bureaucracies vs. adhocracies (Mintzberg 1979); adaptive learning based on formal rules and hierarchical controls versus generative learning relying on shared values, teams, and lateral communication (McGill et al. 1992); generalists who pursue opportunistic r-strategies and rely on excess capacity to do well in open environments versus specialists that are more likely to survive in competitive environments by pursuing k-strategies that trade less flexibility for greater efficiency (Hannan and Freeman 1977, 1989). 
March (1991) and Levinthal and March (1993) make the parallel argument that organizations must choose between structures that facilitate exploration—the search for new knowledge—and those that facilitate exploitation—the use of existing knowledge. Social-psychological theories provide a rationale for this polarization. Merton (1958) shows how goal displacement in bureaucratic organizations generates rigidity. Argyris and Schon (1978) show how defensiveness makes single-loop learning—focused on pursuing given goals more effectively (read: efficiency)—an impediment to double-loop learning—focused on defining new task goals (read: flexibility). Thus, argues Weick (1969), adaptation precludes adaptability. This tradeoff view has been echoed in other disciplines. Standard economic theory postulates a tradeoff between flexibility and average costs (e.g., Stigler 1939, Hart 1942). Further extending this line of thought, Klein (1984) contrasts static and dynamic efficiency. Operations management researchers have long argued that productivity and flexibility or innovation trade off against each other in manufacturing plant performance (Abernathy 1978; see reviews by Gerwin 1993, Suárez et al. 1996, Corrêa 1994). Hayes and Wheelwright’s (1984) product/process matrix postulates a close correspondence between product variety and process efficiency (see Safizadeh et al. 1996). Strategy researchers such as Ghemawat and Costa (1993) argue that firms must chose between a strategy of dynamic effectiveness through flexibility and static efficiency through more rigid discipline. 
In support of a key corollary of the tradeoff postulate articulated in the organization theory literature, they argue that in general the optimal choice is at one end or the other of the spectrum, since a firm pursuing both goals simultaneously would have to mix organizational elements appropriate to each strategy and thus lose the benefit of the complementarities that typically obtain between the various elements of each type of organization. They would thus be “stuck in the middle” (Porter 1980). Beyond the Tradeoff? Empirical evidence for the tradeoff postulate is, however, remarkably weak. Take, for example, product mix flexibility. On the one hand, Hayes and Wheelwright (1984) and Skinner (1985) provide anecdotal evidence that more focused factories—ones producing a narrower range of products—are more efficient. In their survey of plants across a range of manufacturing industries, Safizadeh et al. (1996) confirmed that in general more product variety was associated with reliance on job-shop rather than continuous processes. On the other hand, Kekre and Srinivasan’s (1990) study of companies selling industrial products found that a broader product line was significantly associated with lower manufacturing costs. MacDuffie et al. (1996) found that greater product variety had no discernible effect on auto assembly plant productivity. Suárez et al. (1996) found that product mix flexibility had no discernible relationship to costs or quality in printed circuit board assembly.
Brush and Karnani (1996) found only three out of 19 manufacturing industries showed statistically significant productivity returns to narrower product lines, while two industries showed significant returns to broader product lines. Research by Fleischman (1996) on employment flexibility revealed a similar pattern: within 2-digit SIC code industries that face relatively homogeneous levels of expected volatility of employment, the employment adjustment costs of the least flexible 4-digit industries were anywhere between 4 and 10 times greater than the adjustment costs found in the most flexible 4-digit industries. Some authors argue that the era of tradeoffs is behind us (Ferdows and de Meyer 1990). Hypercompetitive environments force firms to compete on several dimensions at once (Organization Science 1996), and flexible technologies enable firms to shift the tradeoff curve just as quickly as they could move to a different point on the existing tr",
"title": ""
},
{
"docid": "329486a3e7f13f79c9b02365ff555fdf",
"text": "A novel ultra-wideband (UWB) bandpass filter (BPF) with improved upper stopband performance using a defected ground structure (DGS) is presented in this letter. The proposed BPF is composed of seven DGSs that are positioned under the input and output microstrip line and coupled double step impedance resonator (CDSIR). By using CDSIR and open loop defected ground structure (OLDGS), we can achieve UWB BPF characteristics, and by using the conventional CDGSs under the input and output microstrip line, we can improve the upper stopband performance. Simulated and measured results are found in good agreement with each other, showing a wide passband from 3.4 to 10.9 GHz, minimum insertion loss of 0.61 dB at 7.02 GHz, a group delay variation of less than 0.4 ns in the operating band, and a wide upper stopband with more than 30 dB attenuation up to 20 GHz. In addition, the proposed UWB BPF has a compact size (0.27¿g ~ 0.29¿g , ¿g : guided wavelength at the central frequency of 6.85 GHz).",
"title": ""
},
{
"docid": "4c004745828100f6ccc6fd660ee93125",
"text": "Steganography has been proposed as a new alternative technique to enforce data security. Lately, novel and versatile audio steganographic methods have been proposed. A perfect audio Steganographic technique aim at embedding data in an imperceptible, robust and secure way and then extracting it by authorized people. Hence, up to date the main challenge in digital audio steganography is to obtain robust high capacity steganographic systems. Leaning towards designing a system that ensures high capacity or robustness and security of embedded data has led to great diversity in the existing steganographic techniques. In this paper, we present a current state of art literature in digital audio steganographic techniques. We explore their potentials and limitations to ensure secure communication. A comparison and an evaluation for the reviewed techniques is also presented in this paper.",
"title": ""
},
{
"docid": "36fb4d86453a2e73c2989c04286b2ee2",
"text": "Video super-resolution (SR) aims to generate a highresolution (HR) frame from multiple low-resolution (LR) frames in a local temporal window. The inter-frame temporal relation is as crucial as the intra-frame spatial relation for tackling this problem. However, how to utilize temporal information efficiently and effectively remains challenging since complex motion is difficult to model and can introduce adverse effects if not handled properly. We address this problem from two aspects. First, we propose a temporal adaptive neural network that can adaptively determine the optimal scale of temporal dependency. Filters on various temporal scales are applied to the input LR sequence before their responses are adaptively aggregated. Second, we reduce the complexity of motion between neighboring frames using a spatial alignment network which is much more robust and efficient than competing alignment methods and can be jointly trained with the temporal adaptive network in an end-to-end manner. Our proposed models with learned temporal dynamics are systematically evaluated on public video datasets and achieve state-of-the-art SR results compared with other recent video SR approaches. Both of the temporal adaptation and the spatial alignment modules are demonstrated to considerably improve SR quality over their plain counterparts.",
"title": ""
},
{
"docid": "dbd06c81892bc0535e2648ee21cb00b4",
"text": "This paper examines the causes of conflict in Burundi and discusses strategies for building peace. The analysis of the complex relationships between distribution and group dynamics reveals that these relationships are reciprocal, implying that distribution and group dynamics are endogenous. The nature of endogenously generated group dynamics determines the type of preferences (altruistic or exclusionist), which in turn determines the type of allocative institutions and policies that prevail in the political and economic system. While unequal distribution of resources may be socially inefficient, it nonetheless can be rational from the perspective of the ruling elite, especially because inequality perpetuates dominance. However, as the unequal distribution of resources generates conflict, maintaining a system based on inequality is difficult because it requires ever increasing investments in repression. It is therefore clear that if the new Burundian leadership is serious about building peace, it must engineer institutions that uproot the legacy of discrimination and promote equal opportunity for social mobility for all members of ethnic groups and regions.",
"title": ""
},
{
"docid": "682fe9a6e4e30a38ce5c05ee1f809bd1",
"text": "3 chapter This chapter examines the effects of fiscal consolidation —tax hikes and government spending cuts—on economic activity. Based on a historical analysis of fiscal consolidation in advanced economies, and on simulations of the IMF's Global Integrated Monetary and Fiscal Model (GIMF), it finds that fiscal consolidation typically reduces output and raises unemployment in the short term. At the same time, interest rate cuts, a fall in the value of the currency, and a rise in net exports usually soften the contractionary impact. Consolidation is more painful when it relies primarily on tax hikes; this occurs largely because central banks typically provide less monetary stimulus during such episodes, particularly when they involve indirect tax hikes that raise inflation. Also, fiscal consolidation is more costly when the perceived risk of sovereign default is low. These findings suggest that budget deficit cuts are likely to be more painful if they occur simultaneously across many countries, and if monetary policy is not in a position to offset them. Over the long term, reducing government debt is likely to raise output, as real interest rates decline and the lighter burden of interest payments permits cuts to distortionary taxes. Budget deficits and government debt soared during the Great Recession. In 2009, the budget deficit averaged about 9 percent of GDP in advanced economies, up from only 1 percent of GDP in 2007. 1 By the end of 2010, government debt is expected to reach about 100 percent of GDP—its highest level in 50 years. Looking ahead, population aging could create even more serious problems for public finances. In response to these worrisome developments, virtually all advanced economies will face the challenge of fiscal consolidation. 
Indeed, many governments are already undertaking or planning large spending cuts and tax hikes. (The main authors of this chapter are Daniel Leigh (team leader). Advanced economies are defined as the 33 economies so designated based on the World Economic Outlook classification described in the Statistical Appendix.) An important and timely question is, therefore, whether fiscal retrenchment will hurt economic performance. Although there is widespread agreement that reducing debt has important long-term benefits, there is no consensus regarding the short-term effects of fiscal austerity. On the one hand, the conventional Keynesian view is that cutting spending or raising taxes reduces economic activity in the short term. On the other hand, a number of studies present evidence that cutting budget deficits can …",
"title": ""
},
{
"docid": "b8c48e65558504284849e05c9d3f1a19",
"text": "Applications in radar systems and communications systems require very often antennas with beam steering or multi beam capabilities. For the millimeter frequency range Rotman lenses can be useful as multiple beam forming networks for linear antennas providing the advantage of broadband performance. The design and development of Rotman lens at 220 GHz feeding an antenna array for beam steering applications is presented. The construction is completely realized in waveguide technology. Experimental results are compared with theoretical considerations and electromagnetic simulations.",
"title": ""
},
{
"docid": "b7bf3ae864ce774874041b0e5308323f",
"text": "This paper examines factors that influence prices of most common five cryptocurrencies such Bitcoin, Ethereum, Dash, Litecoin, and Monero over 20102018 using weekly data. The study employs ARDL technique and documents several findings. First, cryptomarket-related factors such as market beta, trading volume, and volatility appear to be significant determinant for all five cryptocurrencies both in shortand long-run. Second, attractiveness of cryptocurrencies also matters in terms of their price determination, but only in long-run. This indicates that formation (recognition) of the attractiveness of cryptocurrencies are subjected to time factor. In other words, it travels slowly within the market. Third, SP500 index seems to have weak positive long-run impact on Bitcoin, Ethereum, and Litcoin, while its sign turns to negative losing significance in short-run, except Bitcoin that generates an estimate of -0.20 at 10% significance level. Lastly, error-correction models for Bitcoin, Etherem, Dash, Litcoin, and Monero show that cointegrated series cannot drift too far apart, and converge to a longrun equilibrium at a speed of 23.68%, 12.76%, 10.20%, 22.91%, and 14.27% respectively.",
"title": ""
},
{
"docid": "35ffdb3e5b2ac637f7e8d796c4cdc97e",
"text": "Pedestrian detection in real world scenes is a challenging problem. In recent years a variety of approaches have been proposed, and impressive results have been reported on a variety of databases. This paper systematically evaluates (1) various local shape descriptors, namely Shape Context and Local Chamfer descriptor and (2) four different interest point detectors for the detection of pedestrians. Those results are compared to the standard global Chamfer matching approach. A main result of the paper is that Shape Context trained on real edge images rather than on clean pedestrian silhouettes combined with the Hessian-Laplace detector outperforms all other tested approaches.",
"title": ""
},
{
"docid": "32b2cd6b63c6fc4de5b086772ef9d319",
"text": "Link prediction for knowledge graphs is the task of predicting missing relationships between entities. Previous work on link prediction has focused on shallow, fast models which can scale to large knowledge graphs. However, these models learn less expressive features than deep, multi-layer models – which potentially limits performance. In this work we introduce ConvE, a multi-layer convolutional network model for link prediction, and report state-of-the-art results for several established datasets. We also show that the model is highly parameter efficient, yielding the same performance as DistMult and R-GCN with 8x and 17x fewer parameters. Analysis of our model suggests that it is particularly effective at modelling nodes with high indegree – which are common in highlyconnected, complex knowledge graphs such as Freebase and YAGO3. In addition, it has been noted that the WN18 and FB15k datasets suffer from test set leakage, due to inverse relations from the training set being present in the test set – however, the extent of this issue has so far not been quantified. We find this problem to be severe: a simple rule-based model can achieve state-of-the-art results on both WN18 and FB15k. To ensure that models are evaluated on datasets where simply exploiting inverse relations cannot yield competitive results, we investigate and validate several commonly used datasets – deriving robust variants where necessary. We then perform experiments on these robust datasets for our own and several previously proposed models, and find that ConvE achieves state-of-the-art Mean Reciprocal Rank across all datasets.",
"title": ""
},
{
"docid": "f7239ce387f17b279263e6bdaff612d0",
"text": "Purpose – This survey aims to study and analyze current techniques and methods for context-aware web service systems, to discuss future trends and propose further steps on making web services systems context-aware. Design/methodology/approach – The paper analyzes and compares existing context-aware web service-based systems based on techniques they support, such as context information modeling, context sensing, distribution, security and privacy, and adaptation techniques. Existing systems are also examined in terms of application domains, system type, mobility support, multi-organization support and level of web services implementation. Findings – Supporting context-aware web service-based systems is increasing. It is hard to find a truly context-aware web service-based system that is interoperable and secure, and operates on multi-organizational environments. Various issues, such as distributed context management, context-aware service modeling and engineering, context reasoning and quality of context, security and privacy issues have not been well addressed. Research limitations/implications – The number of systems analyzed is limited. Furthermore, the survey is based on published papers. Therefore, up-to-date information and development might not be taken into account. Originality/value – Existing surveys do not focus on context-awareness techniques for web services. This paper helps to understand the state of the art in context-aware techniques for web services that can be employed in the future of services which is built around, amongst others, mobile devices, web services, and pervasive environments.",
"title": ""
},
{
"docid": "995ad137b6711f254c6b9852611242b5",
"text": "In this paper, we study beam selection for millimeter-wave (mm-wave) multiuser multiple input multiple output (MIMO) systems where a base station (BS) and users are equipped with antenna arrays. Exploiting a certain sparsity of mm-wave channels, a low-complexity beam selection method for beamforming by low-cost analog beamformers is derived. It is shown that beam selection can be carried out without explicit channel estimation using the notion of compressive sensing (CS). Due to various reasons (e.g., the background noise and interference), some users may choose the same BS beam, which results in high inter-user interference. To overcome this problem, we further consider BS beam selection by users. Through simulations, we show that the performance gap between the proposed approach and the optimal beamforming approach, which requires full channel state information (CSI), becomes narrower for a larger number of users at a moderate/low signal-to-noise ratio (SNR). Since the optimal beamforming approach is difficult to be used due to prohibitively high computational complexity for large antenna arrays with a large number of users, the proposed approach becomes attractive for BSs and users in mm-wave systems where large antenna arrays can be employed.",
"title": ""
}
] | scidocsrr |
3a679b1cf471a4c3223668d27ae4f340 | Understanding the requirements for developing open source software systems | [
{
"docid": "c63d32013627d0bcea22e1ad62419e62",
"text": "According to its proponents, open source style software development has the capacity to compete successfully, and perhaps in many cases displace, traditional commercial development methods. In order to begin investigating such claims, we examine the development process of a major open source application, the Apache web server. By using email archives of source code change history and problem reports we quantify aspects of developer participation, core team size, code ownership, productivity, defect density, and problem resolution interval for this OSS project. This analysis reveals a unique process, which performs well on important measures. We conclude that hybrid forms of development that borrow the most effective techniques from both the OSS and commercial worlds may lead to high performance software processes.",
"title": ""
}
] | [
{
"docid": "f944f5e334a127cd50ab3ec0d3c2b603",
"text": "First-order methods play a central role in large-scale machine learning. Even though many variations exist, each suited to a particular problem, almost all such methods fundamentally rely on two types of algorithmic steps: gradient descent, which yields primal progress, and mirror descent, which yields dual progress. We observe that the performances of gradient and mirror descent are complementary, so that faster algorithms can be designed by linearly coupling the two. We show how to reconstruct Nesterov’s accelerated gradient methods using linear coupling, which gives a cleaner interpretation than Nesterov’s original proofs. We also discuss the power of linear coupling by extending it to many other settings that Nesterov’s methods cannot apply to. 1998 ACM Subject Classification G.1.6 Optimization, F.2 Analysis of Algorithms and Problem Complexity",
"title": ""
},
{
"docid": "2ddf013dc4e0fc5e35823e0485777066",
"text": "The aim of this work is to design a SLAM algorithm for localization and mapping of aerial platform for ocean observation. The aim is to determine the direction of travel, given that the aerial platform flies over the water surface and in an environment with few static features and dynamic background. This approach is inspired by the bird techniques which use landmarks as navigation direction. In this case, the blimp is chosen as the platform, therefore the payload is the most important concern in the design so that the desired lift can be achieved. The results show the improved SLAM is were able to achieve the desired waypoint.",
"title": ""
},
{
"docid": "934532bd18f37112c7362db0fffa89a0",
"text": "Combination therapies exploit the chances for better efficacy, decreased toxicity, and reduced development of drug resistance and owing to these advantages, have become a standard for the treatment of several diseases and continue to represent a promising approach in indications of unmet medical need. In this context, studying the effects of a combination of drugs in order to provide evidence of a significant superiority compared to the single agents is of particular interest. Research in this field has resulted in a large number of papers and revealed several issues. Here, we propose an overview of the current methodological landscape concerning the study of combination effects. First, we aim to provide the minimal set of mathematical and pharmacological concepts necessary to understand the most commonly used approaches, divided into effect-based approaches and dose-effect-based approaches, and introduced in light of their respective practical advantages and limitations. Then, we discuss six main common methodological issues that scientists have to face at each step of the development of new combination therapies. In particular, in the absence of a reference methodology suitable for all biomedical situations, the analysis of drug combinations should benefit from a collective, appropriate, and rigorous application of the concepts and methods reviewed here.",
"title": ""
},
{
"docid": "fd45363f75f9206aa13e139d784e5d52",
"text": "Multivariate pattern recognition methods are increasingly being used to identify multiregional brain activity patterns that collectively discriminate one cognitive condition or experimental group from another, using fMRI data. The performance of these methods is often limited because the number of regions considered in the analysis of fMRI data is large compared to the number of observations (trials or participants). Existing methods that aim to tackle this dimensionality problem are less than optimal because they either over-fit the data or are computationally intractable. Here, we describe a novel method based on logistic regression using a combination of L1 and L2 norm regularization that more accurately estimates discriminative brain regions across multiple conditions or groups. The L1 norm, computed using a fast estimation procedure, ensures a fast, sparse and generalizable solution; the L2 norm ensures that correlated brain regions are included in the resulting solution, a critical aspect of fMRI data analysis often overlooked by existing methods. We first evaluate the performance of our method on simulated data and then examine its effectiveness in discriminating between well-matched music and speech stimuli. We also compared our procedures with other methods which use either L1-norm regularization alone or support vector machine-based feature elimination. On simulated data, our methods performed significantly better than existing methods across a wide range of contrast-to-noise ratios and feature prevalence rates. On experimental fMRI data, our methods were more effective in selectively isolating a distributed fronto-temporal network that distinguished between brain regions known to be involved in speech and music processing. These findings suggest that our method is not only computationally efficient, but it also achieves the twin objectives of identifying relevant discriminative brain regions and accurately classifying fMRI data.",
"title": ""
},
{
"docid": "3380a9a220e553d9f7358739e3f28264",
"text": "We present a multi-instance object segmentation algorithm to tackle occlusions. As an object is split into two parts by an occluder, it is nearly impossible to group the two separate regions into an instance by purely bottomup schemes. To address this problem, we propose to incorporate top-down category specific reasoning and shape prediction through exemplars into an intuitive energy minimization framework. We perform extensive evaluations of our method on the challenging PASCAL VOC 2012 segmentation set. The proposed algorithm achieves favorable results on the joint detection and segmentation task against the state-of-the-art method both quantitatively and qualitatively.",
"title": ""
},
{
"docid": "c4062390a6598f4e9407d29e52c1a3ed",
"text": "We have conducted a comprehensive search for conserved elements in vertebrate genomes, using genome-wide multiple alignments of five vertebrate species (human, mouse, rat, chicken, and Fugu rubripes). Parallel searches have been performed with multiple alignments of four insect species (three species of Drosophila and Anopheles gambiae), two species of Caenorhabditis, and seven species of Saccharomyces. Conserved elements were identified with a computer program called phastCons, which is based on a two-state phylogenetic hidden Markov model (phylo-HMM). PhastCons works by fitting a phylo-HMM to the data by maximum likelihood, subject to constraints designed to calibrate the model across species groups, and then predicting conserved elements based on this model. The predicted elements cover roughly 3%-8% of the human genome (depending on the details of the calibration procedure) and substantially higher fractions of the more compact Drosophila melanogaster (37%-53%), Caenorhabditis elegans (18%-37%), and Saccharaomyces cerevisiae (47%-68%) genomes. From yeasts to vertebrates, in order of increasing genome size and general biological complexity, increasing fractions of conserved bases are found to lie outside of the exons of known protein-coding genes. In all groups, the most highly conserved elements (HCEs), by log-odds score, are hundreds or thousands of bases long. These elements share certain properties with ultraconserved elements, but they tend to be longer and less perfectly conserved, and they overlap genes of somewhat different functional categories. In vertebrates, HCEs are associated with the 3' UTRs of regulatory genes, stable gene deserts, and megabase-sized regions rich in moderately conserved noncoding sequences. Noncoding HCEs also show strong statistical evidence of an enrichment for RNA secondary structure.",
"title": ""
},
{
"docid": "fc509e8f8c0076ad80df5ff6ee6b6f1e",
"text": "The purposes of this study are to construct an instrument to evaluate service quality of mobile value-added services and have a further discussion of the relationships among service quality, perceived value, customer satisfaction, and post-purchase intention. Structural equation modeling and multiple regression analysis were used to analyze the data collected from college and graduate students of fifteen major universities in Taiwan. The main findings are as follows: (1) service quality positively influences both perceived value and customer satisfaction; (2) perceived value positively influences on both customer satisfaction and post-purchase intention; (3) customer satisfaction positively influences post-purchase intention; (4) service quality has an indirect positive influence on post-purchase intention through customer satisfaction or perceived value; (5) among the dimensions of service quality, “customer service and system reliability” is most influential on perceived value and customer satisfaction, and the influence of “content quality” ranks second; (6) the proposed model is proven with the effectiveness in explaining the relationships among service quality, perceived value, customer satisfaction, and post-purchase intention in mobile added-value services.",
"title": ""
},
{
"docid": "60664c058868f08a67d14172d87a4756",
"text": "The design of legged robots is often inspired by animals evolved to excel at different tasks. However, while mimicking morphological features seen in nature can be very powerful, robots may need to perform motor tasks that their living counterparts do not. In the absence of designs that can be mimicked, an alternative is to resort to mathematical models that allow the relationship between a robot's form and function to be explored. In this paper, we propose such a model to co-design the motion and leg configurations of a robot such that a measure of performance is optimized. The framework begins by planning trajectories for a simplified model consisting of the center of mass and feet. The framework then optimizes the length of each leg link while solving for associated full-body motions. Our model was successfully used to find optimized designs for legged robots performing tasks that include jumping, walking, and climbing up a step. Although our results are preliminary and our analysis makes a number of simplifying assumptions, our findings indicate that the cost function, the sum of squared joint torques over the duration of a task, varies substantially as the design parameters change.",
"title": ""
},
{
"docid": "98df4ff146fe0067c87a3b5514ea0934",
"text": "The problem of domain generalization is to learn from multiple training domains, and extract a domain-agnostic model that can then be applied to an unseen domain. Domain generalization (DG) has a clear motivation in contexts where there are target domains with distinct characteristics, yet sparse data for training. For example recognition in sketch images, which are distinctly more abstract and rarer than photos. Nevertheless, DG methods have primarily been evaluated on photo-only benchmarks focusing on alleviating the dataset bias where both problems of domain distinctiveness and data sparsity can be minimal. We argue that these benchmarks are overly straightforward, and show that simple deep learning baselines perform surprisingly well on them. In this paper, we make two main contributions: Firstly, we build upon the favorable domain shift-robust properties of deep learning methods, and develop a low-rank parameterized CNN model for end-to-end DG learning. Secondly, we develop a DG benchmark dataset covering photo, sketch, cartoon and painting domains. This is both more practically relevant, and harder (bigger domain shift) than existing benchmarks. The results show that our method outperforms existing DG alternatives, and our dataset provides a more significant DG challenge to drive future research.",
"title": ""
},
{
"docid": "9afc0411331ac43bc54df639760813af",
"text": "Ontology provides a shared and reusable piece of knowledge about a specific domain, and has been applied in many fields, such as semantic Web, e-commerce and information retrieval, etc. However, building ontology by hand is a very hard and error-prone task. Learning ontology from existing resources is a good solution. Because relational database is widely used for storing data and OWL is the latest standard recommended by W3C, this paper proposes an approach of learning OWL ontology from data in relational database. Compared with existing methods, the approach can acquire ontology from relational database automatically by using a group of learning rules instead of using a middle model. In addition, it can obtain OWL ontology, including the classes, properties, properties characteristics, cardinality and instances, while none of existing methods can acquire all of them. The proposed learning rules have been proven to be correct by practice.",
"title": ""
},
{
"docid": "cbfffcdb150143ccacaf3700aadea59e",
"text": "Recurrent Neural Networks (RNNs), and specifically a variant with Long ShortTerm Memory (LSTM), are enjoying renewed interest as a result of successful applications in a wide range of machine learning problems that involve sequential data. However, while LSTMs provide exceptional results in practice, the source of their performance and their limitations remain rather poorly understood. Using character-level language models as an interpretable testbed, we aim to bridge this gap by providing an analysis of their representations, predictions and error types. In particular, our experiments reveal the existence of interpretable cells that keep track of long-range dependencies such as line lengths, quotes and brackets. Moreover, our comparative analysis with finite horizon n-gram models traces the source of the LSTM improvements to long-range structural dependencies. Finally, we provide analysis of the remaining errors and suggests areas for further study.",
"title": ""
},
{
"docid": "6f05e76961d4ef5fc173bafd5578081f",
"text": "Edmodo is simply a controlled online networking application that can be used by teachers and students to communicate and remain connected. This paper explores the experiences from a group of students who were using Edmodo platform in their course work. It attempts to use the SAMR (Substitution, Augmentation, Modification and Redefinition) framework of technology integration in education to access and evaluate technology use in the classroom. The respondents were a group of 62 university students from a Kenyan University whose lecturer had created an Edmodo account and introduced the students to participate in their course work during the September to December 2015 semester. More than 82% of the students found that they had a personal stake in the quality of work presented through the platforms and that they were able to take on different subtopics and collaborate to create one final product. This underscores the importance of Edmodo as an environment with skills already in the hands of the students that we can use to integrate technology in the classroom.",
"title": ""
},
{
"docid": "e4e0e01b3af99dfd88ff03a1057b40d3",
"text": "There is a tension between user and author control of narratives in multimedia systems and virtual environments. Reducing the interactivity gives the author more control over when and how users experience key events in a narrative, but may lead to less immersion and engagement. Allowing the user to freely explore the virtual space introduces the risk that important narrative events will never be experienced. One approach to striking a balance between user freedom and author control is adaptation of narrative event presentation (i.e. changing the time, location, or method of presentation of a particular event in order to better communicate with the user). In this paper, we describe the architecture of a system capable of dynamically supporting narrative event adaptation. We also report results from two studies comparing adapted narrative presentation with two other forms of unadapted presentation - events with author selected views (movie), and events with user selected views (traditional VE). An analysis of user performance and feedback offers support for the hypothesis that adaptation can improve comprehension of narrative events in virtual environments while maintaining a sense of user control.",
"title": ""
},
{
"docid": "7bfd3237b1a4c3c651b4c5389019f190",
"text": "Recent developments in web technologies including evolution of web standards, improvements in browser performance, and the emergence of free and open-source software (FOSS) libraries are driving a general shift from server-side to client-side web applications where a greater share of the computational load is transferred to the browser. Modern client-side approaches allow for improved user interfaces that rival traditional desktop software, as well as the ability to perform simulations and visualizations within the browser. We demonstrate the use of client-side technologies to create an interactive web application for a simulation model of biochemical oxygen demand and dissolved oxygen in rivers called the Webbased Interactive River Model (WIRM). We discuss the benefits, limitations and potential uses of client-side web applications, and provide suggestions for future research using new and upcoming web technologies such as offline access and local data storage to create more advanced client-side web applications for environmental simulation modeling. 2014 Elsevier Ltd. All rights reserved. Software availability Product Title: Web-based Interactive River Model (WIRM) Developer: Jeffrey D. Walker Contact Address: Dept. of Civil and Environmental Engineering, Tufts University, 200 College Ave, Medford, MA 02155 Contact E-mail: [email protected] Available Since: 2013 Programming Language: JavaScript, Python Availability: http://wirm.walkerjeff.com/ Cost: Free",
"title": ""
},
{
"docid": "1616d9fb3fb2b2a3c97f0bf1d36d8b79",
"text": "Platt’s probabilistic outputs for Support Vector Machines (Platt, J. in Smola, A., et al. (eds.) Advances in large margin classifiers. Cambridge, 2000) has been popular for applications that require posterior class probabilities. In this note, we propose an improved algorithm that theoretically converges and avoids numerical difficulties. A simple and ready-to-use pseudo code is included.",
"title": ""
},
{
"docid": "bcdb0e6dcbab8fcccfea15edad00a761",
"text": "This article presents the 1:4 wideband balun based on transmission lines that was awarded the first prize in the Wideband Baluns Student Design Competition. The competition was held during the 2014 IEEE Microwave Theory and Techniques Society (MTT-S) International Microwave Symposium (IMS2014). It was initiated in 2011 and is sponsored by the MTT-17 Technical Coordinating Committee. The winner must implement and measure a wideband balun of his or her own design and achieve the highest possible operational frequency from at least 1 MHz (or below) while meeting the following conditions: ? female subminiature version A (SMA) connectors are used to terminate all ports ? a minimum impedance transformation ratio of two ? a maximum voltage standing wave ratio (VSWR) of 2:1 at all ports ? an insertion loss of less than 1 dB ? a common-mode rejection ratio (CMRR) of more than 25 dB ? imbalance of less than 1 dB and 2.5?.",
"title": ""
},
{
"docid": "aad2d6385cb8c698a521caea00fe56d2",
"text": "With respect to the \" influence on the development and practice of science and engineering in the 20th century \" , Krylov space methods are considered as one of the ten most important classes of numerical methods [1]. Large sparse linear systems of equations or large sparse matrix eigenvalue problems appear in most applications of scientific computing. Sparsity means that most elements of the matrix involved are zero. In particular, discretization of PDEs with the finite element method (FEM) or with the finite difference method (FDM) leads to such problems. In case the original problem is nonlinear, linearization by Newton's method or a Newton-type method leads again to a linear problem. We will treat here systems of equations only, but many of the numerical methods for large eigenvalue problems are based on similar ideas as the related solvers for equations. Sparse linear systems of equations can be solved by either so-called sparse direct solvers, which are clever variations of Gauss elimination, or by iterative methods. In the last thirty years, sparse direct solvers have been tuned to perfection: on the one hand by finding strategies for permuting equations and unknowns to guarantee a stable LU decomposition and small fill-in in the triangular factors, and on the other hand by organizing the computation so that optimal use is made of the hardware, which nowadays often consists of parallel computers whose architecture favors block operations with data that are locally stored or cached. The iterative methods that are today applied for solving large-scale linear systems are mostly preconditioned Krylov (sub)space solvers. Classical methods that do not belong to this class, like the successive overrelaxation (SOR) method, are no longer competitive. However, some of the classical matrix splittings, e.g. the one of SSOR (the symmetric version of SOR), are still used for preconditioning. 
Multigrid is in theory a very effective iterative method, but normally it is now applied as an inner iteration with a Krylov space solver as outer iteration; then, it can also be considered as a preconditioner. In the past, Krylov space solvers were referred to also by other names such as semi-iterative methods and polynomial acceleration methods. Some",
"title": ""
},
{
"docid": "5392e45840929b05b549a64a250774e5",
"text": "Faces in natural images are often occluded by a variety of objects. We propose a fully automated, probabilistic and occlusion-aware 3D morphable face model adaptation framework following an analysis-by-synthesis setup. The key idea is to segment the image into regions explained by separate models. Our framework includes a 3D morphable face model, a prototype-based beard model and a simple model for occlusions and background regions. The segmentation and all the model parameters have to be inferred from the single target image. Face model adaptation and segmentation are solved jointly using an expectation–maximization-like procedure. During the E-step, we update the segmentation and in the M-step the face model parameters are updated. For face model adaptation we apply a stochastic sampling strategy based on the Metropolis–Hastings algorithm. For segmentation, we apply loopy belief propagation for inference in a Markov random field. Illumination estimation is critical for occlusion handling. Our combined segmentation and model adaptation needs a proper initialization of the illumination parameters. We propose a RANSAC-based robust illumination estimation technique. By applying this method to a large face image database we obtain a first empirical distribution of real-world illumination conditions. The obtained empirical distribution is made publicly available and can be used as prior in probabilistic frameworks, for regularization or to synthesize data for deep learning methods.",
"title": ""
},
{
"docid": "1e80f38e3ccc1047f7ee7c2b458c0beb",
"text": "This thesis presents an approach to robot arm control exploiting natural dynamics. The approach consists of using a compliant arm whose joints are controlled with simple non-linear oscillators. The arm has special actuators which makes it robust to collisions and gives it a smooth compliant, motion. The oscillators produce rhythmic commands of the joints of the arm, and feedback of the joint motions is used to modify the oscillator behavior. The oscillators enable the resonant properties of the arm to be exploited to perform a variety of rhythmic and discrete tasks. These tasks include tuning into the resonant frequencies of the arm itself, juggling, turning cranks, playing with a Slinky toy, sawing wood, throwing balls, hammering nails and drumming. For most of these tasks, the controllers at each joint are completely independent, being coupled by mechanical coupling through the physical arm of the robot. The thesis shows that this mechanical coupling allows the oscillators to automatically adjust their commands to be appropriate for the arm dynamics and the task. This coordination is robust to large changes in the oscillator parameters, and large changes in the dynamic properties of the arm. As well as providing a wealth of experimental data to support this approach, the thesis also provides a range of analysis tools, both approximate and exact. These can be used to understand and predict the behavior of current implementations, and design new ones. These analysis techniques improve the value of oscillator solutions. The results in the thesis suggest that the general approach of exploiting natural dynamics is a powerful method for obtaining coordinated dynamic behavior of robot arms. Thesis Supervisor: Rodney A. Brooks Title: Professor of Electrical Engineering and Computer Science, MIT 5.4. CASE (C): MODIFYING THE NATURAL DYNAMICS 95",
"title": ""
},
{
"docid": "a987f009509e9c4f5c29b27275713eac",
"text": "PURPOSE\nThis article provides a critical overview of problem-based learning (PBL), its effectiveness for knowledge acquisition and clinical performance, and the underlying educational theory. The focus of the paper is on (1) the credibility of claims (both empirical and theoretical) about the ties between PBL and educational outcomes and (2) the magnitude of the effects.\n\n\nMETHOD\nThe author reviewed the medical education literature, starting with three reviews published in 1993 and moving on to research published from 1992 through 1998 in the primary sources for research in medical education. For each study the author wrote a summary, which included study design, outcome measures, effect sizes, and any other information relevant to the research conclusion.\n\n\nRESULTS AND CONCLUSION\nThe review of the literature revealed no convincing evidence that PBL improves knowledge base and clinical performance, at least not of the magnitude that would be expected given the resources required for a PBL curriculum. The results were considered in light of the educational theory that underlies PBL and its basic research. The author concludes that the ties between educational theory and research (both basic and applied) are loose at best.",
"title": ""
}
] | scidocsrr |
63ca519ffc2a3524c53956d8e96867aa | Control-flow integrity principles, implementations, and applications | [
{
"docid": "83c81ecb870e84d4e8ab490da6caeae2",
"text": "We introduceprogram shepherding, a method for monitoring control flow transfers during program execution to enforce a security policy. Shepherding ensures that malicious code masquerading as data is never executed, thwarting a large class of security attacks. Shepherding can also enforce entry points as the only way to execute shared library code. Furthermore, shepherding guarantees that sandboxing checks around any type of program operation will never be bypassed. We have implemented these capabilities efficiently in a runtime system with minimal or no performance penalties. This system operates on unmodified native binaries, requires no special hardware or operating system support, and runs on existing IA-32 machines.",
"title": ""
}
] | [
{
"docid": "fb02f47ab50ebe817175f21f7192ae6b",
"text": "Generative Adversarial Network (GAN) is a prominent generative model that are widely used in various applications. Recent studies have indicated that it is possible to obtain fake face images with a high visual quality based on this novel model. If those fake faces are abused in image tampering, it would cause some potential moral, ethical and legal problems. In this paper, therefore, we first propose a Convolutional Neural Network (CNN) based method to identify fake face images generated by the current best method [20], and provide experimental evidences to show that the proposed method can achieve satisfactory results with an average accuracy over 99.4%. In addition, we provide comparative results evaluated on some variants of the proposed CNN architecture, including the high pass filter, the number of the layer groups and the activation function, to further verify the rationality of our method.",
"title": ""
},
{
"docid": "97a1d44956f339a678da4c7a32b63bf6",
"text": "As a first step towards agents learning to communicate about their visual environment, we propose a system that, given visual representations of a referent (CAT) and a context (SOFA), identifies their discriminative attributes, i.e., properties that distinguish them (has_tail). Moreover, although supervision is only provided in terms of discriminativeness of attributes for pairs, the model learns to assign plausible attributes to specific objects (SOFA-has_cushion). Finally, we present a preliminary experiment confirming the referential success of the predicted discriminative attributes.",
"title": ""
},
{
"docid": "6a1a9c6cb2da06ee246af79fdeedbed9",
"text": "The world has revolutionized and phased into a new era, an era which upholds the true essence of technology and digitalization. As the market has evolved at a staggering scale, it is must to exploit and inherit the advantages and opportunities, it provides. With the advent of web 2.0, considering the scalability and unbounded reach that it provides, it is detrimental for an organization to not to adopt the new techniques in the competitive stakes that this emerging virtual world has set along with its advantages. The transformed and highly intelligent data mining approaches now allow organizations to collect, categorize, and analyze users’ reviews and comments from micro-blogging sites regarding their services and products. This type of analysis makes those organizations capable to assess, what the consumers want, what they disapprove of, and what measures can be taken to sustain and improve the performance of products and services. This study focuses on critical analysis of the literature from year 2012 to 2017 on sentiment analysis by using SVM (support vector machine). SVM is one of the widely used supervised machine learning techniques for text classification. This systematic review will serve the scholars and researchers to analyze the latest work of sentiment analysis with SVM as well as provide them a baseline for future trends and comparisons. Keywords—Sentiment analysis; polarity detection; machine learning; support vector machine (SVM); support vector machine; SLR; systematic literature review",
"title": ""
},
{
"docid": "1fde3c7d8109d5d4bfcf1f55facf7a95",
"text": "Concerted research effort since the nineteen fifties has lead to effective methods for retrieval of relevant documents from homogeneous collections of text, such as newspaper archives, scientific abstracts and CD-ROM encyclopaedias. However, the triumph of the Web in the nineteen nineties forced a significant paradigm shift in the Information Retrieval field because of the need to address the issues of enormous scale, fluid collection definition, great heterogeneity, unfettered interlinking, democratic publishing, the presence of adversaries and most of all the diversity of purposes for which Web search may be used. Now, the IR field is confronted with a challenge of similarly daunting dimensions – how to bring highly effective search to the complex information spaces within enterprises. Overcoming the challenge would bring massive economic benefit, but victory is far from assured. The present work characterises enterprise search, hints at its economic magnitude, states some of the unsolved research questions in the domain of enterprise search need, proposes an enterprise search test collection and presents results for a small but interesting subproblem.",
"title": ""
},
{
"docid": "f665852770ef2f57cbb5c614410440bf",
"text": "Blockchain is a distributed database which is cryptographically protected against malicious modifications. While promising for a wide range of applications, current blockchain platforms rely on digital signatures, which are vulnerable to attacks by means of quantum computers. The same, albeit to a lesser extent, applies to cryptographic hash functions that are used in preparing new blocks, so parties with access to quantum computation would have unfair advantage in procuring mining rewards. Here we propose a possible solution to the quantum-era blockchain challenge and report an experimental realization of a quantum-safe blockchain platform that utilizes quantum key distribution across an urban fiber network for information-theoretically secure authentication. These results address important questions about realizability and scalability of quantum-safe blockchains for commercial and governmental applications.",
"title": ""
},
{
"docid": "519172fb24e370a24da92711d827bf77",
"text": "We present a sequence-to-action parsing approach for the natural language to SQL task that incrementally fills the slots of a SQL query with feasible actions from a pre-defined inventory. To account for the fact that typically there are multiple correct SQL queries with the same or very similar semantics, we draw inspiration from syntactic parsing techniques and propose to train our sequence-to-action models with non-deterministic oracles. We evaluate our models on the WikiSQL dataset and achieve an execution accuracy of 83.7% on the test set, a 2.1% absolute improvement over the models trained with traditional static oracles assuming a single correct target SQL query. When further combined with the executionguided decoding strategy, our model sets a new state-of-the-art performance at an execution accuracy of 87.1%.",
"title": ""
},
{
"docid": "0bbabbcc08ea494330b1675445851f9d",
"text": "One trend in the implementation of modern web systems is the use of activity data in the form of log or event messages that capture user and server activity. This data is at the heart of many internet systems in the domains of advertising, relevance, search, recommendation systems, and security, as well as continuing to fulfill its traditional role in analytics and reporting. Many of these uses place real-time demands on data feeds. Activity data is extremely high volume and real-time pipelines present new design challenges. This paper discusses the design and engineering problems we encountered in moving LinkedIn’s data pipeline from a batch-oriented file aggregation mechanism to a real-time publish-subscribe system called Kafka. This pipeline currently runs in production at LinkedIn and handles more than 10 billion message writes each day with a sustained peak of over 172,000 messages per second. Kafka supports dozens of subscribing systems and delivers more than 55 billion messages to these consumer processing each day. We discuss the origins of this systems, missteps on the path to real-time, and the design and engineering problems we encountered along the way.",
"title": ""
},
{
"docid": "34a8413935d1724c626f505421480f54",
"text": "In this paper, we introduce the Reinforced Mnemonic Reader for machine comprehension (MC) task, which aims to answer a query about a given context document. We propose several novel mechanisms that address critical problems in MC that are not adequately solved by previous works, such as enhancing the capacity of encoder, modeling long-term dependencies of contexts, refining the predicted answer span, and directly optimizing the evaluation metric. Extensive experiments on TriviaQA and Stanford Question Answering Dataset (SQuAD) show that our model achieves state-of-theart results.",
"title": ""
},
{
"docid": "5ce82b8c2cc87ae84026d230f3a97e06",
"text": "This paper presents a new physically-based method for predicting natural hairstyles in the presence of gravity and collisions. The method is based upon a mechanically accurate model for static elastic rods (Kirchhoff model), which accounts for the natural curliness of hair, as well as for hair ellipticity. The equilibrium shape is computed in a stable and easy way by energy minimization. This yields various typical hair configurations that can be observed in the real world, such as ringlets. As our results show, the method can generate different hair types with a very few input parameters, and perform virtual hairdressing operations such as wetting, cutting and drying hair.",
"title": ""
},
{
"docid": "b94673776041fe6463edccf06a4ed205",
"text": "This paper explores the current affordances and limitations of video game genre from a library and information science perspective with an emphasis on classification theory. We identify and discuss various purposes of genre relating to video games, including identity, collocation and retrieval, commercial marketing, and educational instruction. Through the use of examples, we discuss the ways in which these purposes are supported by genre classification and conceptualization, and the implications for video games. Suggestions for improved conceptualizations such as family resemblances, prototype theory, faceted classification, and appeal factors for video game genres are considered, with discussions of strengths and weaknesses. This analysis helps inform potential future practical applications for describing video games at cultural heritage institutions such as libraries, museums, and archives, as well as furthering the understanding of video game genre and genre classification for game studies at large.",
"title": ""
},
{
"docid": "ca4696183f72882d2f69cc17ab761ef3",
"text": "Entropy, as it relates to dynamical systems, is the rate of information production. Methods for estimation of the entropy of a system represented by a time series are not, however, well suited to analysis of the short and noisy data sets encountered in cardiovascular and other biological studies. Pincus introduced approximate entropy (ApEn), a set of measures of system complexity closely related to entropy, which is easily applied to clinical cardiovascular and other time series. ApEn statistics, however, lead to inconsistent results. We have developed a new and related complexity measure, sample entropy (SampEn), and have compared ApEn and SampEn by using them to analyze sets of random numbers with known probabilistic character. We have also evaluated cross-ApEn and cross-SampEn, which use cardiovascular data sets to measure the similarity of two distinct time series. SampEn agreed with theory much more closely than ApEn over a broad range of conditions. The improved accuracy of SampEn statistics should make them useful in the study of experimental clinical cardiovascular and other biological time series.",
"title": ""
},
{
"docid": "1950bc738c3a47a8314b5d44056d9731",
"text": "BACKGROUND\nThe discovery of abnormal synchronization of neuronal activity in the basal ganglia in Parkinson's disease (PD) has prompted the development of novel neuromodulation paradigms. Coordinated reset neuromodulation intends to specifically counteract excessive synchronization and to induce cumulative unlearning of pathological synaptic connectivity and neuronal synchrony.\n\n\nMETHODS\nIn this prospective case series, six PD patients were evaluated before and after coordinated reset neuromodulation according to a standardized protocol that included both electrophysiological recordings and clinical assessments.\n\n\nRESULTS\nCoordinated reset neuromodulation of the subthalamic nucleus (STN) applied to six PD patients in an externalized setting during three stimulation days induced a significant and cumulative reduction of beta band activity that correlated with a significant improvement of motor function.\n\n\nCONCLUSIONS\nThese results highlight the potential effects of coordinated reset neuromodulation of the STN in PD patients and encourage further development of this approach as an alternative to conventional high-frequency deep brain stimulation in PD.",
"title": ""
},
{
"docid": "4b3813fdf16d9c020ec1ad1ddd56d1d3",
"text": "In this paper we describe a method that can be used for Minimum Bayes Risk (MBR) decoding for speech recognition. Our algorithm can take as input either a single lattice, or multiple lattices for system combination. It has similar functionality to the widely used Consensus method, but has a clearer theoretical basis and appears to give better results both for MBR decoding and system combination. Many different approximations have been described to solve the MBR decoding problem, which is very difficult from an optimization point of view. Our proposed method solves the problem through a novel forward–backward recursion on the lattice, not requiring time markings. We prove that our algorithm iteratively improves a bound on the Bayes risk. © 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "6c29713df5186553bee555024bf8c135",
"text": "This paper describes the organization and results of the automatic keyphrase extraction task held at the workshop on Semantic Evaluation 2010 (SemEval-2010). The keyphrase extraction task was specifically geared towards scientific articles. Systems were automatically evaluated by matching their extracted keyphrases against those assigned by the authors as well as the readers to the same documents. We outline the task, present the overall ranking of the submitted systems, and discuss the improvements to the state-of-the-art in keyphrase extraction.",
"title": ""
},
{
"docid": "29d43e9ec2afa314c4a00f26ce816e7e",
"text": "The aim of this paper is to discuss about various feature selection algorithms applied on different datasets to select the relevant features to classify data into binary and multi class in order to improve the accuracy of the classifier. Recent researches in medical diagnose uses the different kind of classification algorithms to diagnose the disease. For predicting the disease, the classification algorithm produces the result as binary class. When there is a multiclass dataset, the classification algorithm reduces the dataset into a binary class for simplification purpose by using any one of the data reduction methods and the algorithm is applied for prediction. When data reduction on original dataset is carried out, the quality of the data may degrade and the accuracy of an algorithm will get affected. To maintain the effectiveness of the data, the multiclass data must be treated with its original form without maximum reduction, and the algorithm can be applied on the dataset for producing maximum accuracy. Dataset with maximum number of attributes like thousands must incorporate the best feature selection algorithm for selecting the relevant features to reduce the space and time complexity. The performance of Classification algorithm is estimated by how accurately it predicts the individual class on particular dataset. The accuracy constrain mainly depends on the selection of appropriate features from the original dataset. The feature selection algorithms play an important role in classification for better performance. The feature selection is one of",
"title": ""
},
{
"docid": "a79d4b0a803564f417236f2450658fe0",
"text": "Dimensionality reduction has attracted increasing attention, because high-dimensional data have arisen naturally in numerous domains in recent years. As one popular dimensionality reduction method, nonnegative matrix factorization (NMF), whose goal is to learn parts-based representations, has been widely studied and applied to various applications. In contrast to the previous approaches, this paper proposes a novel semisupervised NMF learning framework, called robust structured NMF, that learns a robust discriminative representation by leveraging the block-diagonal structure and the <inline-formula> <tex-math notation=\"LaTeX\">$\\ell _{2,p}$ </tex-math></inline-formula>-norm (especially when <inline-formula> <tex-math notation=\"LaTeX\">$0<p\\leq 1$ </tex-math></inline-formula>) loss function. Specifically, the problems of noise and outliers are well addressed by the <inline-formula> <tex-math notation=\"LaTeX\">$\\ell _{2,p}$ </tex-math></inline-formula>-norm (<inline-formula> <tex-math notation=\"LaTeX\">$0<p\\leq 1$ </tex-math></inline-formula>) loss function, while the discriminative representations of both the labeled and unlabeled data are simultaneously learned by explicitly exploring the block-diagonal structure. The proposed problem is formulated as an optimization problem with a well-defined objective function solved by the proposed iterative algorithm. The convergence of the proposed optimization algorithm is analyzed both theoretically and empirically. In addition, we also discuss the relationships between the proposed method and some previous methods. Extensive experiments on both the synthetic and real-world data sets are conducted, and the experimental results demonstrate the effectiveness of the proposed method in comparison to the state-of-the-art methods.",
"title": ""
},
{
"docid": "bbee52ebe65b2f7b8d0356a3fbdb80bf",
"text": "Science Study Book Corpus Document Filter [...] enters a d orbital. The valence electrons (those added after the last noble gas configuration) in these elements include the ns and (n \u2013 1) d electrons. The official IUPAC definition of transition elements specifies those with partially filled d orbitals. Thus, the elements with completely filled orbitals (Zn, Cd, Hg, as well as Cu, Ag, and Au in Figure 6.30) are not technically transition elements. However, the term is frequently used to refer to the entire d block (colored yellow in Figure 6.30), and we will adopt this usage in this textbook. Inner transition elements are metallic elements in which the last electron added occupies an f orbital.",
"title": ""
},
{
"docid": "ecd54b6fad0a1d79440204df72b977fa",
"text": "The rapid development of Web technology has resulted in an increasing number of hotel customers sharing their opinions on the hotel services. Effective visual analysis of online customer opinions is needed, as it has a significant impact on building a successful business. In this paper, we present OpinionSeer, an interactive visualization system that could visually analyze a large collection of online hotel customer reviews. The system is built on a new visualization-centric opinion mining technique that considers uncertainty for faithfully modeling and analyzing customer opinions. A new visual representation is developed to convey customer opinions by augmenting well-established scatterplots and radial visualization. To provide multiple-level exploration, we introduce subjective logic to handle and organize subjective opinions with degrees of uncertainty. Several case studies illustrate the effectiveness and usefulness of OpinionSeer on analyzing relationships among multiple data dimensions and comparing opinions of different groups. Aside from data on hotel customer feedback, OpinionSeer could also be applied to visually analyze customer opinions on other products or services.",
"title": ""
},
{
"docid": "0520c57f2cd13ce423e656d89c7f3cc0",
"text": "The term ‘‘urban stream syndrome’’ describes the consistently observed ecological degradation of streams draining urban land. This paper reviews recent literature to describe symptoms of the syndrome, explores mechanisms driving the syndrome, and identifies appropriate goals and methods for ecological restoration of urban streams. Symptoms of the urban stream syndrome include a flashier hydrograph, elevated concentrations of nutrients and contaminants, altered channel morphology, and reduced biotic richness, with increased dominance of tolerant species. More research is needed before generalizations can be made about urban effects on stream ecosystem processes, but reduced nutrient uptake has been consistently reported. The mechanisms driving the syndrome are complex and interactive, but most impacts can be ascribed to a few major large-scale sources, primarily urban stormwater runoff delivered to streams by hydraulically efficient drainage systems. Other stressors, such as combined or sanitary sewer overflows, wastewater treatment plant effluents, and legacy pollutants (long-lived pollutants from earlier land uses) can obscure the effects of stormwater runoff. Most research on urban impacts to streams has concentrated on correlations between instream ecological metrics and total catchment imperviousness. Recent research shows that some of the variance in such relationships can be explained by the distance between the stream reach and urban land, or by the hydraulic efficiency of stormwater drainage. The mechanisms behind such patterns require experimentation at the catchment scale to identify the best management approaches to conservation and restoration of streams in urban catchments. Remediation of stormwater impacts is most likely to be achieved through widespread application of innovative approaches to drainage design. 
Because humans dominate urban ecosystems, research on urban stream ecology will require a broadening of stream ecological research to integrate with social, behavioral, and economic research.",
"title": ""
},
{
"docid": "244a517d3a1c456a602ecc01fb99a78f",
"text": "Most literature on time series classification assumes that the beginning and ending points of the pattern of interest can be correctly identified, both during the training phase and later deployment. In this work, we argue that this assumption is unjustified, and this has in many cases led to unwarranted optimism about the performance of the proposed algorithms. As we shall show, the task of correctly extracting individual gait cycles, heartbeats, gestures, behaviors, etc., is generally much more difficult than the task of actually classifying those patterns. We propose to mitigate this problem by introducing an alignment-free time series classification framework. The framework requires only very weakly annotated data, such as “in this ten minutes of data, we see mostly normal heartbeats...,” and by generalizing the classic machine learning idea of data editing to streaming/continuous data, allows us to build robust, fast and accurate classifiers. We demonstrate on several diverse real-world problems that beyond removing unwarranted assumptions and requiring essentially no human intervention, our framework is both significantly faster and significantly more accurate than current state-of-the-art approaches.",
"title": ""
}
] | scidocsrr |
de79780405e5472df23ace00ec371380 | A comprehensive study of the predictive accuracy of dynamic change-impact analysis | [
{
"docid": "cc9686bac7de957afe52906763799554",
"text": "A key issue in software evolution analysis is the identification of particular changes that occur across several versions of a program. We present change distilling, a tree differencing algorithm for fine-grained source code change extraction. For that, we have improved the existing algorithm by Chawathe et al. for extracting changes in hierarchically structured data. Our algorithm extracts changes by finding both a match between the nodes of the compared two abstract syntax trees and a minimum edit script that can transform one tree into the other given the computed matching. As a result, we can identify fine-grained change types between program versions according to our taxonomy of source code changes. We evaluated our change distilling algorithm with a benchmark that we developed, which consists of 1,064 manually classified changes in 219 revisions of eight methods from three different open source projects. We achieved significant improvements in extracting types of source code changes: Our algorithm approximates the minimum edit script 45 percent better than the original change extraction approach by Chawathe et al. We are able to find all occurring changes and almost reach the minimum conforming edit script, that is, we reach a mean absolute percentage error of 34 percent, compared to the 79 percent reached by the original algorithm. The paper describes both our change distilling algorithm and the results of our evolution.",
"title": ""
}
] | [
{
"docid": "96051404d2ca32f67c86f0eb96a87f38",
"text": "Male (N = 248) and female (N = 282) subjects were given the Personal Attributes Questionnaire consisting of 55 bipolar attributes drawn from the Sex Role Stereotype Questionnaire by Rosenkrantz, Vogel, Bee, Broverman, and Broverman and were asked to rate themselves and then to compare directly the typical male and female college student. Self-ratings were divided into male-valued (stereotypically masculine attributes judged more desirable for both sexes), female-valued, and sex-specific items. Also administered was the Attitudes Toward Women Scale and a measure of social self-esteem. Correlations of the self-ratings with stereotype scores and the Attitudes Toward Women Scale were low in magnitude, suggesting that sex role expectations do not distort self-concepts. For both men and women, \"femininity\" on the female-valued self items and \"masculinity\" on the male-valued items were positively correlated, and both significantly related to self-esteem. The implications of the results for a concept of masculinity and femininity as a duality, characteristic of all individuals, and the use of the self-rating scales for measuring masculinity, femininity, and androgyny were discussed.",
"title": ""
},
{
"docid": "cc76afb929bdffe1b084843a6b267602",
"text": "Software applications continue to grow in terms of the number of features they offer, making personalization increasingly important. Research has shown that most users prefer the control afforded by an adaptable approach to personalization rather than a system-controlled adaptive approach. Both types of approaches offer advantages and disadvantages. No study, however, has compared the efficiency of the two approaches. In two controlled lab studies, we measured the efficiency of static, adaptive and adaptable interfaces in the context of pull-down menus. These menu conditions were implemented as split menus, in which the top four items remained static, were adaptable by the subject, or adapted according to the subject’s frequently and recently used items. The results of Study 1 showed that a static split menu was significantly faster than an adaptive split menu. Also, when the adaptable split menu was not the first condition presented to subjects, it was significantly faster than the adaptive split menu, and not significantly different from the static split menu. The majority of users preferred the adaptable menu overall. Several implications for personalizing user interfaces based on these results are discussed. One question which arose after Study 1 was whether prior exposure to the menus and task has an effect on the efficiency of the adaptable menus. A second study was designed to follow up on the theory that prior exposure to different types of menu layouts influences a user’s willingness to customize. Though the observed power of this study was low and no statistically significant effect of type of exposure was found, a possible trend arose: that exposure to an adaptive interface may have a positive impact on the user’s willingness to customize. This and other secondary results are discussed, along with several areas for future work. The research presented in this thesis should be seen as an initial step towards a more thorough comparison of adaptive and adaptable interfaces, and should provide motivation for further development of adaptable interaction techniques.",
"title": ""
},
{
"docid": "4709a4e1165abb5d0018b74495218fc7",
"text": "Network monitoring guides network operators in understanding the current behavior of a network. Therefore, accurate and efficient monitoring is vital to ensure that the network operates according to the intended behavior and then to troubleshoot any deviations. However, the current practice of network-monitoring largely depends on manual operations, and thus enterprises spend a significant portion of their budgets on the workforce that monitor their networks. We analyze present network-monitoring technologies, identify open problems, and suggest future directions. In particular, our findings are based on two different analyses. The first analysis assesses how well present technologies integrate with the entire cycle of network-management operations: design, deployment, and monitoring. Network operators first design network configurations, given a set of requirements, then they deploy the new design, and finally they verify it by continuously monitoring the network’s behavior. One of our observations is that the efficiency of this cycle can be greatly improved by automated deployment of pre-designed configurations, in response to changes in monitored network behavior. Our second analysis focuses on network-monitoring technologies and group issues in these technologies into five categories. Such grouping leads to the identification of major problem groups in network monitoring, e.g., efficient management of increasing amounts of measurements for storage, analysis, and presentation. We argue that continuous effort is needed in improving network-monitoring since the presented problems will become even more serious in the future, as networks grow in size and carry more data. 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "a26d47a7d0330e6252986358bd2f41e0",
"text": "The American College of Prosthodontists (ACP) has developed a classification system for partial edentulism based on diagnostic findings. This classification system is similar to the classification system for complete edentulism previously developed by the ACP. These guidelines are intended to help practitioners determine appropriate treatments for their patients. Four categories of partial edentulism are defined, Class I to Class IV, with Class I representing an uncomplicated clinical situation and class IV representing a complex clinical situation. Each class is differentiated by specific diagnostic criteria. This system is designed for use by dental professionals involved in the diagnosis and treatment of partially edentulous patients. Potential benefits of the system include (1) improved intraoperator consistency, (2) improved professional communication, (3) insurance reimbursement commensurate with complexity of care, (4) improved screening tool for dental school admission clinics, (5) standardized criteria for outcomes assessment and research, (6) enhanced diagnostic consistency, and (7) simplified aid in the decision to refer a patient.",
"title": ""
},
{
"docid": "570fcf7ba739ffb6ea07e5c58c8154c7",
"text": "E-learning is emerging as the new paradigm of modern education. Worldwide, the e-learning market has a growth rate of 35.6%, but failures exist. Little is known about why many users stop their online learning after their initial experience. Previous research done under different task environments has suggested a variety of factors affecting user satisfaction with e-Learning. This study developed an integrated model with six dimensions: learners, instructors, courses, technology, design, and environment. A survey was conducted to investigate the critical factors affecting learners’ satisfaction in e-Learning. The results revealed that learner computer anxiety, instructor attitude toward e-Learning, e-Learning course flexibility, e-Learning course quality, perceived usefulness, perceived ease of use, and diversity in assessments are the critical factors affecting learners’ perceived satisfaction. The results show institutions how to improve learner satisfaction and further strengthen their e-Learning implementation. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5feea8e7bcb96c826bdf19922e47c922",
"text": "This chapter is a review of conceptions of knowledge as they appear in selected bodies of research on teaching. Writing as a philosopher of education, my interest is in how notions of knowledge are used and analyzed in a number of research programs that study teachers and their teaching. Of particular interest is the growing research literature on the knowledge that teachers generate as a result of their experience as teachers, in contrast to the knowledge of teaching that is generated by those who specialize in research on teaching. This distinction, as will become apparent, is one that divides more conventional scientific approaches to the study of teaching from what might be thought of as alternative approaches.",
"title": ""
},
{
"docid": "d0c5bb905973b3098b06f55232ed9c8f",
"text": "In recent years, theoretical and computational linguistics has paid much attention to linguistic items that form scales. In NLP, much research has focused on ordering adjectives by intensity (tiny < small). Here, we address the task of automatically ordering English adverbs by their intensifying or diminishing effect on adjectives (e.g. extremely small < very small). We experiment with 4 different methods: 1) using the association strength between adverbs and adjectives; 2) exploiting scalar patterns (such as not only X but Y); 3) using the metadata of product reviews; 4) clustering. The method that performs best is based on the use of metadata and ranks adverbs by their scaling factor relative to unmodified adjectives.",
"title": ""
},
{
"docid": "f8ac1e028ec61c8b1dcf8ce138ea1776",
"text": "This paper presents power-control strategies of a grid-connected hybrid generation system with versatile power transfer. The hybrid system is the combination of photovoltaic (PV) array, wind turbine, and battery storage via a common dc bus. Versatile power transfer was defined as multimodes of operation, including normal operation without use of battery, power dispatching, and power averaging, which enables grid- or user-friendly operation. A supervisory control regulates power generation of the individual components so as to enable the hybrid system to operate in the proposed modes of operation. The concept and principle of the hybrid system and its control were described. A simple technique using a low-pass filter was introduced for power averaging. A modified hysteresis-control strategy was applied in the battery converter. Modeling and simulations were based on an electromagnetic-transient-analysis program. A 30-kW hybrid inverter and its control system were developed. The simulation and experimental results were presented to evaluate the dynamic performance of the hybrid system under the proposed modes of operation.",
"title": ""
},
{
"docid": "f82a57baca9a0381c9b2af0368a5531e",
"text": "We tested the hypothesis derived from eye blink literature that when liars experience cognitive demand, their lies would be associated with a decrease in eye blinks, directly followed by an increase in eye blinks when the demand has ceased after the lie is told. A total of 13 liars and 13 truth tellers lied or told the truth in a target period; liars and truth tellers both told the truth in two baseline periods. Their eye blinks during the target and baseline periods and directly after the target period (target offset period) were recorded. The predicted pattern (compared to the baseline periods, a decrease in eye blinks during the target period and an increase in eye blinks during the target offset period) was found in liars and was strikingly different from the pattern obtained in truth tellers. They showed an increase in eye blinks during the target period compared to the baseline periods, whereas their pattern of eye blinks in the target offset period did not differ from baseline periods. The implications for lie detection are discussed.",
"title": ""
},
{
"docid": "4bb4bbd91925d2faafe5516519d6cc62",
"text": "Cyclic GMP (cGMP) modulates important cerebral processes including some forms of learning and memory. cGMP pathways are strongly altered in hyperammonemia and hepatic encephalopathy (HE). Patients with liver cirrhosis show reduced intracellular cGMP in lymphocytes, increased cGMP in plasma and increased activation of soluble guanylate cyclase by nitric oxide (NO) in lymphocytes, which correlates with minimal HE assessed by psychometric tests. Activation of soluble guanylate cyclase by NO is also increased in cerebral cortex, but reduced in cerebellum, from patients who died with HE. This opposite alteration is reproduced in vivo in rats with chronic hyperammonemia or HE. A main pathway modulating cGMP levels in brain is the glutamate-NO-cGMP pathway. The function of this pathway is impaired both in cerebellum and cortex of rats with hyperammonemia or HE. Impairment of this pathway is responsible for reduced ability to learn some types of tasks. Restoring the pathway and cGMP levels in brain restores learning ability. This may be achieved by administering phosphodiesterase inhibitors (zaprinast, sildenafil), cGMP, anti-inflammatories (ibuprofen) or antagonists of GABAA receptors (bicuculline). These data support that increasing cGMP by safe pharmacological means may be a new therapeutic approach to improve cognitive function in patients with minimal or clinical HE.",
"title": ""
},
{
"docid": "4c1798f0fd65b8d7e60a04a9a3df5201",
"text": "This study examined linkages between divorce, depressive/withdrawn parenting, and child adjustment problems at home and school. Middle class divorced single mother families (n = 35) and 2-parent families (n = 174) with a child in the fourth grade participated. Mothers and teachers completed yearly questionnaires and children were interviewed when they were in the fourth, fifth, and sixth grades. Structural equation modeling suggested that the association between divorce and child externalizing and internalizing behavior was partially mediated by depressive/withdrawn parenting when the children were in the fourth and fifth grades.",
"title": ""
},
{
"docid": "d735547a7b3a79f5935f15da3e51f361",
"text": "We propose a new approach for locating forged regions in a video using correlation of noise residue. In our method, block-level correlation values of noise residual are extracted as a feature for classification. We model the distribution of correlation of temporal noise residue in a forged video as a Gaussian mixture model (GMM). We propose a two-step scheme to estimate the model parameters. Consequently, a Bayesian classifier is used to find the optimal threshold value based on the estimated parameters. Two video inpainting schemes are used to simulate two different types of forgery processes for performance evaluation. Simulation results show that our method achieves promising accuracy in video forgery detection.",
"title": ""
},
{
"docid": "7bdebaf86fd679ae00520dc8f7ee3afa",
"text": "Studies show that attractive women demonstrate stronger preferences for masculine men than relatively unattractive women do. Such condition-dependent preferences may occur because attractive women can more easily offset the costs associated with choosing a masculine partner, such as lack of commitment and less interest in parenting. Alternatively, if masculine men display negative characteristics less to attractive women than to unattractive women, attractive women may perceive masculine men to have more positive personality traits than relatively unattractive women do. We examined how two indices of women’s attractiveness, body mass index (BMI) and waist–hip ratio (WHR), relate to perceptions of both the attractiveness and trustworthiness of masculinized versus feminized male faces. Consistent with previous studies, women with a low (attractive) WHR had stronger preferences for masculine male faces than did women with a relatively high (unattractive) WHR. This relationship remained significant when controlling for possible effects of BMI. Neither WHR nor BMI predicted perceptions of trustworthiness. These findings present converging evidence for condition-dependent mate preferences in women and suggest that such preferences do not reflect individual differences in the extent to which pro-social traits are ascribed to feminine versus masculine men. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "73fb3c79018795777a0fca6d5e7d3ebe",
"text": "Congruence, the state in which a software development organization harbors sufficient coordination capabilities to meet the coordination demands of the technical products under development, is increasingly recognized as critically important to the performance of an organization. To date, it has been shown that a variety of states of incongruence may exist in an organization, with possibly serious negative effects on product quality, development progress, cost, and so on. Exactly how to achieve congruence, or knowing what steps to take to achieve congruence, is less understood. In this paper, we introduce a series of key challenges that we believe must be comprehensively addressed in order for congruence research to result in wellunderstood approaches, tactics, and tools – so these can be infused in the day-to-day practices of development organizations to improve their coordination capabilities with better aligned social and technical structures. This effort is partially funded by the National Science Foundation under grant number IIS-0534775, IIS0329090, and the Software Industry Center and its sponsors, particularly the Alfred P. Sloan Foundation. Effort also supported by a 2007 Jazz Faculty Grant. The views and conclusions are those of the authors and do not reflect the opinions of any sponsoring organizations/agencies.",
"title": ""
},
{
"docid": "2d02e5bc08c2b5d18c787880898e9af2",
"text": "Speech recognition systems have used the concept of states as a way to decompose words into sub-word units for decades. As the number of such states now reaches the number of words used to train acoustic models, it is interesting to consider approaches that relax the assumption that words are made of states. We present here an alternative construction, where words are projected into a continuous embedding space where words that sound alike are nearby in the Euclidean sense. We show how embeddings can still allow to score words that were not in the training dictionary. Initial experiments using a lattice rescoring approach and model combination on a large realistic dataset show improvements in word error rate.",
"title": ""
},
{
"docid": "36828667ce43ab5d489f74e112045639",
"text": "Zero-shot learning has received increasing interest as a means to alleviate the often prohibitive expense of annotating training data for large scale recognition problems. These methods have achieved great success via learning intermediate semantic representations in the form of attributes and more recently, semantic word vectors. However, they have thus far been constrained to the single-label case, in contrast to the growing popularity and importance of more realistic multi-label data. In this paper, for the first time, we investigate and formalise a general framework for multi-label zero-shot learning, addressing the unique challenge therein: how to exploit multi-label correlation at test time with no training data for those classes? In particular, we propose (1) a multi-output deep regression model to project an image into a semantic word space, which explicitly exploits the correlations in the intermediate semantic layer of word vectors; (2) a novel zero-shot learning algorithm for multi-label data that exploits the unique compositionality property of semantic word vector representations; and (3) a transductive learning strategy to enable the regression model learned from seen classes to generalise well to unseen classes. Our zero-shot learning experiments on a number of standard multi-label datasets demonstrate that our method outperforms a variety of baselines.",
"title": ""
},
{
"docid": "698dca642840f47081b1e9a54775c5cc",
"text": "Background: Many popular educational programmes claim to be ‘brain-based’, despite pleas from the neuroscience community that these neuromyths do not have a basis in scientific evidence about the brain. Purpose: The main aim of this paper is to examine several of the most popular neuromyths in the light of the relevant neuroscientific and educational evidence. Examples of neuromyths include: 10% brain usage, leftand right-brained thinking, VAK learning styles and multiple intelligences Sources of evidence: The basis for the argument put forward includes a literature review of relevant cognitive neuroscientific studies, often involving neuroimaging, together with several comprehensive education reviews of the brain-based approaches under scrutiny. Main argument: The main elements of the argument are as follows. We use most of our brains most of the time, not some restricted 10% brain usage. This is because our brains are densely interconnected, and we exploit this interconnectivity to enable our primitively evolved primate brains to live in our complex modern human world. Although brain imaging delineates areas of higher (and lower) activation in response to particular tasks, thinking involves coordinated interconnectivity from both sides of the brain, not separate leftand right-brained thinking. High intelligence requires higher levels of inter-hemispheric and other connected activity. The brain’s interconnectivity includes the senses, especially vision and hearing. We do not learn by one sense alone, hence VAK learning styles do not reflect how our brains actually learn, nor the individual differences we observe in classrooms. Neuroimaging studies do not support multiple intelligences; in fact, the opposite is true. Through the activity of its frontal cortices, among other areas, the human brain seems to operate with general intelligence, applied to multiple areas of endeavour. Studies of educational effectiveness of applying any of these ideas in the classroom have failed to find any educational benefits. Conclusions: The main conclusions arising from the argument are that teachers should seek independent scientific validation before adopting brain-based products in their classrooms. A more sceptical approach to educational panaceas could contribute to an enhanced professionalism of the field.",
"title": ""
},
{
"docid": "a52ac0402ca65a4e7a239c343f79df44",
"text": "How does the brain cause positive affective reactions to sensory pleasure? An answer to pleasure causation requires knowing not only which brain systems are activated by pleasant stimuli, but also which systems actually cause their positive affective properties. This paper focuses on brain causation of behavioral positive affective reactions to pleasant sensations, such as sweet tastes. Its goal is to understand how brain systems generate 'liking,' the core process that underlies sensory pleasure and causes positive affective reactions. Evidence suggests activity in a subcortical network involving portions of the nucleus accumbens shell, ventral pallidum, and brainstem causes 'liking' and positive affective reactions to sweet tastes. Lesions of ventral pallidum also impair normal sensory pleasure. Recent findings regarding this subcortical network's causation of core 'liking' reactions help clarify how the essence of a pleasure gloss gets added to mere sensation. The same subcortical 'liking' network, via connection to brain systems involved in explicit cognitive representations, may also in turn cause conscious experiences of sensory pleasure.",
"title": ""
},
{
"docid": "42cfbb2b2864e57d59a72ec91f4361ff",
"text": "Objective. This prospective open trial aimed to evaluate the efficacy and safety of isotretinoin (13-cis-retinoic acid) in patients with Cushing's disease (CD). Methods. Sixteen patients with CD and persistent or recurrent hypercortisolism after transsphenoidal surgery were given isotretinoin orally for 6-12 months. The drug was started on 20 mg daily and the dosage was increased up to 80 mg daily if needed and tolerated. Clinical, biochemical, and hormonal parameters were evaluated at baseline and monthly for 6-12 months. Results. Of the 16 subjects, 4 (25%) persisted with normal urinary free cortisol (UFC) levels at the end of the study. UFC reductions of up to 52.1% were found in the rest. Only patients with UFC levels below 2.5-fold of the upper limit of normal achieved sustained UFC normalization. Improvements of clinical and biochemical parameters were also noted mostly in responsive patients. Typical isotretinoin side-effects were experienced by 7 patients (43.7%), though they were mild and mostly transient. We also observed that the combination of isotretinoin with cabergoline, in relatively low doses, may occasionally be more effective than either drug alone. Conclusions. Isotretinoin may be an effective and safe therapy for some CD patients, particularly those with mild hypercortisolism.",
"title": ""
},
{
"docid": "fd1b82c69a3182ab7f8c0a7cf2030b6f",
"text": "Lenz-Majewski hyperostotic dwarfism (LMHD) is an ultra-rare Mendelian craniotubular dysostosis that causes skeletal dysmorphism and widely distributed osteosclerosis. Biochemical and histopathological characterization of the bone disease is incomplete and nonexistent, respectively. In 2014, a publication concerning five unrelated patients with LMHD disclosed that all carried one of three heterozygous missense mutations in PTDSS1 encoding phosphatidylserine synthase 1 (PSS1). PSS1 promotes the biosynthesis of phosphatidylserine (PTDS), which is a functional constituent of lipid bilayers. In vitro, these PTDSS1 mutations were gain-of-function and increased PTDS production. Notably, PTDS binds calcium within matrix vesicles to engender hydroxyapatite crystal formation, and may enhance mesenchymal stem cell differentiation leading to osteogenesis. We report an infant girl with LMHD and a novel heterozygous missense mutation (c.829T>C, p.Trp277Arg) within PTDSS1. Bone turnover markers suggested that her osteosclerosis resulted from accelerated formation with an unremarkable rate of resorption. Urinary amino acid quantitation revealed a greater than sixfold elevation of phosphoserine. Our findings affirm that PTDSS1 defects cause LMHD and support enhanced biosynthesis of PTDS in the pathogenesis of LMHD.",
"title": ""
}
] | scidocsrr |
289d04efc3d8f5819adf2c0de3e10913 | An $X$ -Band Lumped-Element Wilkinson Combiner With Embedded Impedance Transformation | [
{
"docid": "e9c52fb24425bff6ed514de6b92e8ba2",
"text": "This paper proposes a ultra compact Wilkinson power combiner (WPC) incorporating synthetic transmission lines at K-band in CMOS technology. The 50 % improvement on the size reduction can be achieved by increasing the slow-wave factor of synthetic transmission line. The presented Wilkinson power combiner design is analyzed and fabricated by using standard 0.18 µm 1P6M CMOS technology. The prototype has only a chip size of 480 µm × 90 µm, corresponding to 0.0002λ02 at 21.5 GHz. The measured insertion losses and return losses are less and higher than 4 dB and 17.5 dB from 16 GHz to 27 GHz, respectively. Furthermore, the proposed WPC is also integrated into the phase shifter to confirm its feasibility. The prototype of phase shifter shows 15 % size reduction and on-wafer measurements show good linearity of full 360-degree phase shifting from 21 GHz to 27 GHz.",
"title": ""
}
] | [
{
"docid": "d9870dc31895226f60537b3e8591f9fd",
"text": "This paper reports on the design of a low phase noise 76.8 MHz AlN-on-silicon reference oscillator using SiO2 as temperature compensation material. The paper presents profound theoretical optimization of all the important parameters for AlN-on-silicon width extensional mode resonators, filling into the knowledge gap targeting the tens of megahertz frequency range for this type of resonators. Low loading CMOS cross coupled series resonance oscillator is used to reach the-state-of-the-art LTE phase noise specifications. Phase noise of 123 dBc/Hz at 1 kHz, and 162 dBc/Hz at 1 MHz offset is achieved. The oscillator's integrated root mean square RMS jitter is 106 fs (10 kHz to 20 MHz), consuming 850 μA, with startup time of 250 μs, and a figure-of-merit FOM of 216 dB. This work offers a platform for high performance MEMS reference oscillators; where, it shows the applicability of replacing bulky quartz with MEMS resonators in cellular platforms. & 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5a583f5b67ceb7c59da2cef8201880df",
"text": "This article presents two designs of power amplifiers to be used with piezo-electric actuators in diesel injectors. The topologies as well as the controller approach and implementation are discussed.",
"title": ""
},
{
"docid": "deeb21277f4cdb637a44941794e03359",
"text": "This paper introduces methods to compute impulse responses without specification and estimation of the underlying multivariate dynamic system. The central idea consists in estimating local projections at each period of interest rather than extrapolating into increasingly distant horizons from a given model, as it is done with vector autoregressions (VAR). The advantages of local projections are numerous: (1) they can be estimated by simple regression techniques with standard regression packages; (2) they are more robust to misspecification; (3) joint or point-wise analytic inference is simple; and (4) they easily accommodate experimentation with highly non-linear and flexible specifications that may be impractical in a multivariate context. Therefore, these methods are a natural alternative to estimating impulse responses from VARs. Monte Carlo evidence and an application to a simple, closed-economy, new-Keynesian model clarify these numerous advantages. •",
"title": ""
},
{
"docid": "b324860905b6d8c4b4a8429d53f2543d",
"text": "MicroRNAs (miRNAs) are endogenous approximately 22 nt RNAs that can play important regulatory roles in animals and plants by targeting mRNAs for cleavage or translational repression. Although they escaped notice until relatively recently, miRNAs comprise one of the more abundant classes of gene regulatory molecules in multicellular organisms and likely influence the output of many protein-coding genes.",
"title": ""
},
{
"docid": "163d7e9a00649b3a6036507f6a725af8",
"text": "In the last decades, a lot of 3D face recognition techniques have been proposed. They can be divided into three parts, holistic matching techniques, feature-based techniques and hybrid techniques. In this paper, a hybrid technique is used, where, a prototype of a new hybrid face recognition technique depends on 3D face scan images are designed, simulated and implemented. Some geometric rules are used for analyzing and mapping the face. Image processing is used to get the twodimensional values of predetermined and specific facial points, software programming is used to perform a three-dimensional coordinates of the predetermined points and to calculate several geometric parameter ratios and relations. Neural network technique is used for processing the calculated geometric parameters and then performing facial recognition. The new design is not affected by variant pose, illumination and expression and has high accurate level compared with the 2D analysis. Moreover, the proposed algorithm is of higher performance than latest’s published biometric recognition algorithms in terms of cost, confidentiality of results, and availability of design tools.",
"title": ""
},
{
"docid": "ea544ffc7eeee772388541d0d01812a7",
"text": "Despite the fact that MRI has evolved to become the standard method for diagnosis and monitoring of patients with brain tumours, conventional MRI sequences have two key limitations: the inability to show the full extent of the tumour and the inability to differentiate neoplastic tissue from nonspecific, treatment-related changes after surgery, radiotherapy, chemotherapy or immunotherapy. In the past decade, PET involving the use of radiolabelled amino acids has developed into an important diagnostic tool to overcome some of the shortcomings of conventional MRI. The Response Assessment in Neuro-Oncology working group — an international effort to develop new standardized response criteria for clinical trials in brain tumours — has recommended the additional use of amino acid PET imaging for brain tumour management. Concurrently, a number of advanced MRI techniques such as magnetic resonance spectroscopic imaging and perfusion weighted imaging are under clinical evaluation to target the same diagnostic problems. This Review summarizes the clinical role of amino acid PET in relation to advanced MRI techniques for differential diagnosis of brain tumours; delineation of tumour extent for treatment planning and biopsy guidance; post-treatment differentiation between tumour progression or recurrence versus treatment-related changes; and monitoring response to therapy. An outlook for future developments in PET and MRI techniques is also presented.",
"title": ""
},
{
"docid": "0e4d0ecdc46b05c916b782a0594acd63",
"text": "iii Acknowledgements iv Chapter",
"title": ""
},
{
"docid": "72d863c7e323cd9b3ab4368a51743319",
"text": "STUDY DESIGN\nThis study is a retrospective review of the initial enrollment data from a prospective multicentered study of adult spinal deformity.\n\n\nOBJECTIVES\nThe purpose of this study is to correlate radiographic measures of deformity with patient-based outcome measures in adult scoliosis.\n\n\nSUMMARY OF BACKGROUND DATA\nPrior studies of adult scoliosis have attempted to correlate radiographic appearance and clinical symptoms, but it has proven difficult to predict health status based on radiographic measures of deformity alone. The ability to correlate radiographic measures of deformity with symptoms would be useful for decision-making and surgical planning.\n\n\nMETHODS\nThe study correlates radiographic measures of deformity with scores on the Short Form-12, Scoliosis Research Society-29, and Oswestry profiles. Radiographic evaluation was performed according to an established positioning protocol for anteroposterior and lateral 36-inch standing radiographs. Radiographic parameters studied were curve type, curve location, curve magnitude, coronal balance, sagittal balance, apical rotation, and rotatory subluxation.\n\n\nRESULTS\nThe 298 patients studied include 172 with no prior surgery and 126 who had undergone prior spine fusion. Positive sagittal balance was the most reliable predictor of clinical symptoms in both patient groups. Thoracolumbar and lumbar curves generated less favorable scores than thoracic curves in both patient groups. Significant coronal imbalance of greater than 4 cm was associated with deterioration in pain and function scores for unoperated patients but not in patients with previous surgery.\n\n\nCONCLUSIONS\nThis study suggests that restoration of a more normal sagittal balance is the critical goal for any reconstructive spine surgery. The study suggests that magnitude of coronal deformity and extent of coronal correction are less critical parameters.",
"title": ""
},
{
"docid": "8c8e9332a29edb7417ad47b045bf9de7",
"text": "Knowledge and lessons from past accidental exposures in radiotherapy are very helpful in finding safety provisions to prevent recurrence. Disseminating lessons is necessary but not sufficient. There may be additional latent risks for other accidental exposures, which have not been reported or have not occurred, but are possible and may occur in the future if not identified, analyzed, and prevented by safety provisions. Proactive methods are available for anticipating and quantifying risk from potential event sequences. In this work, proactive methods, successfully used in industry, have been adapted and used in radiotherapy. Risk matrix is a tool that can be used in individual hospitals to classify event sequences in levels of risk. As with any anticipative method, the risk matrix involves a systematic search for potential risks; that is, any situation that can cause an accidental exposure. The method contributes new insights: The application of the risk matrix approach has identified that another group of less catastrophic but still severe single-patient events may have a higher probability, resulting in higher risk. The use of the risk matrix approach for safety assessment in individual hospitals would provide an opportunity for self-evaluation and managing the safety measures that are most suitable to the hospital's own conditions.",
"title": ""
},
{
"docid": "3355c37593ee9ef1b2ab29823ca8c1d4",
"text": "The paper overviews the 11th evaluation campaign organized by the IWSLT workshop. The 2014 evaluation offered multiple tracks on lecture transcription and translation based on the TED Talks corpus. In particular, this year IWSLT included three automatic speech recognition tracks, on English, German and Italian, five speech translation tracks, from English to French, English to German, German to English, English to Italian, and Italian to English, and five text translation track, also from English to French, English to German, German to English, English to Italian, and Italian to English. In addition to the official tracks, speech and text translation optional tracks were offered, globally involving 12 other languages: Arabic, Spanish, Portuguese (B), Hebrew, Chinese, Polish, Persian, Slovenian, Turkish, Dutch, Romanian, Russian. Overall, 21 teams participated in the evaluation, for a total of 76 primary runs submitted. Participants were also asked to submit runs on the 2013 test set (progress test set), in order to measure the progress of systems with respect to the previous year. All runs were evaluated with objective metrics, and submissions for two of the official text translation tracks were also evaluated with human post-editing.",
"title": ""
},
{
"docid": "bf28cac251558f59aab6b49a373a8fba",
"text": "Digital game play is becoming increasingly prevalent. Its participant-players number in the millions and its revenues are in billions of dollars. As they grow in popularity, digital games are also growing in complexity, depth and sophistication. This paper presents reasons why games and game play matter to the future of education. Drawing upon these works, the potential for instruction in digital games is recognised. Previous works in the area were also analysed with respect to their theoretical findings. We then propose a framework for digital Game-based Learning approach for adoption in education setting.",
"title": ""
},
{
"docid": "4028f1eb3f14297fea30ae43fdf7fbb6",
"text": "The optimisation of a tail-sitter UAV (Unmanned Aerial Vehicle) that uses a stall-tumble manoeuvre to transition from vertical to horizontal flight and a pull-up manoeuvre to regain the vertical is investigated. The tandem wing vehicle is controlled in the hover and vertical flight phases by prop-wash over wing mounted control surfaces. It represents an innovative and potentially simple solution to the dual requirements of VTOL (Vertical Take-off and Landing) and high speed forward flight by obviating the need for complex mechanical systems such as rotor heads or tilt-rotor systems.",
"title": ""
},
{
"docid": "cb641fc639b86abadec4f85efc226c14",
"text": "The modernization of the US electric power infrastructure, especially in lieu of its aging, overstressed networks; shifts in social, energy and environmental policies, and also new vulnerabilities, is a national concern. Our system are required to be more adaptive and secure more than every before. Consumers are also demanding increased power quality and reliability of supply and delivery. As such, power industries, government and national laboratories and consortia have developed increased interest in what is now called the Smart Grid of the future. The paper outlines Smart Grid intelligent functions that advance interactions of agents such as telecommunication, control, and optimization to achieve adaptability, self-healing, efficiency and reliability of power systems. The author also presents a special case for the development of Dynamic Stochastic Optimal Power Flow (DSOPF) technology as a tool needed in Smart Grid design. The integration of DSOPF to achieve the design goals with advanced DMS capabilities are discussed herein. This reference paper also outlines research focus for developing next generation of advance tools for efficient and flexible power systems operation and control.",
"title": ""
},
{
"docid": "22a5aa4b9cbafa3cf63b6cf4aff60ba3",
"text": "characteristics, burnout, and (other-ratings of) performance (N = 146). We hypothesized that job demands (e.g., work pressure and emotional demands) would be the most important antecedents of the exhaustion component of burnout, which, in turn, would predict in-role performance (hypothesis 1). In contrast, job resources (e.g., autonomy and social support) were hypothesized to be the most important predictors of extra-role performance, through their relationship with the disengagement component of burnout (hypothesis 2). In addition, we predicted that job resources would buffer the relationship between job demands and exhaustion (hypothesis 3), and that exhaustion would be positively related to disengagement (hypothesis 4). The results of structural equation modeling analyses provided strong support for hypotheses 1, 2, and 4, but rejected hypothesis 3. These findings support the JD-R model’s claim that job demands and job resources initiate two psychological processes, which eventually affect organizational outcomes. © 2004 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "850a7daa56011e6c53b5f2f3e33d4c49",
"text": "Multi-objective evolutionary algorithms (MOEAs) have achieved great progress in recent decades, but most of them are designed to solve unconstrained multi-objective optimization problems. In fact, many real-world multi-objective problems usually contain a number of constraints. To promote the research of constrained multi-objective optimization, we first propose three primary types of difficulty, which reflect the challenges in the real-world optimization problems, to characterize the constraint functions in CMOPs, including feasibility-hardness, convergencehardness and diversity-hardness. We then develop a general toolkit to construct difficulty adjustable and scalable constrained multi-objective optimization problems (CMOPs) with three types of parameterized constraint functions according to the proposed three primary types of difficulty. In fact, combination of the three primary constraint functions with different parameters can lead to construct a large variety of CMOPs, whose difficulty can be uniquely defined by a triplet with each of its parameter specifying the level of each primary difficulty type respectively. Furthermore, the number of objectives in this toolkit are able to scale to more than two. Based on this toolkit, we suggest nine difficulty adjustable and scalable CMOPs named DAS-CMOP1-9. To evaluate the proposed test problems, two popular CMOEAs MOEA/D-CDP and NSGA-II-CDP are adopted to test their performances on DAS-CMOP1-9 with different difficulty triplets. The experiment results demonstrate that none of them can solve these problems efficiently, which stimulate us to develop new constrained MOEAs to solve the suggested DAS-CMOPs.",
"title": ""
},
{
"docid": "19a28d8bbb1f09c56f5c85be003a9586",
"text": "ABSTRACT: Five questionnaires for assessing the usability of a website were compared in a study with 123 participants. The questionnaires studied were SUS, QUIS, CSUQ, a variant of Microsoft’s Product Reaction Cards, and one that we have used in our Usability Lab for several years. Each participant performed two tasks on each of two websites: finance.yahoo.com and kiplinger.com. All five questionnaires revealed that one site was significantly preferred over the other. The data were analyzed to determine what the results would have been at different sample sizes from 6 to 14. At a sample size of 6, only 30-40% of the samples would have identified that one of the sites was significantly preferred. Most of the data reach an apparent asymptote at a sample size of 12, where two of the questionnaires (SUS and CSUQ) yielded the same conclusion as the full dataset at least 90% of the time.",
"title": ""
},
{
"docid": "c6954957e6629a32f9845df15c60be85",
"text": "Some mathematical and natural objects (a random sequence, a sequence of zeros, a perfect crystal, a gas) are intuitively trivial, while others (e.g. the human body, the digits of π) contain internal evidence of a nontrivial causal history. We formalize this distinction by defining an object’s “logical depth” as the time required by a standard universal Turing machine to generate it from an input that is algorithmically random (i.e. Martin-Löf random). This definition of depth is shown to be reasonably machineindependent, as well as obeying a slow-growth law: deep objects cannot be quickly produced from shallow ones by any deterministic process, nor with much probability by a probabilistic process, but can be produced slowly. Next we apply depth to the physical problem of “self-organization,” inquiring in particular under what conditions (e.g. noise, irreversibility, spatial and other symmetries of the initial conditions and equations of motion) statistical-mechanical model systems can imitate computers well enough to undergo unbounded increase of depth in the limit of infinite space and time.",
"title": ""
},
{
"docid": "1e3585a27b6373685544dc392140a4fb",
"text": "When operating in partially-known environments, autonomous vehicles must constantly update their maps and plans based on new sensor information. Much focus has been placed on developing efficient incremental planning algorithms that are able to efficiently replan when the map and associated cost function changes. However, much less attention has been placed on efficiently updating the cost function used by these planners, which can represent a significant portion of the time spent replanning. In this paper, we present the Limited Incremental Distance Transform algorithm, which can be used to efficiently update the cost function used for planning when changes in the environment are observed. Using this algorithm it is possible to plan paths in a completely incremental way starting from a list of changed obstacle classifications. We present results comparing the algorithm to the Euclidean distance transform and a mask-based incremental distance transform algorithm. Computation time is reduced by an order of magnitude for a UAV application. We also provide example results from an autonomous micro aerial vehicle with on-board sensing and computing.",
"title": ""
},
{
"docid": "7182c5b1fac4a4d0d43a15c1feb28be1",
"text": "This paper provides an objective evaluation of the performance impacts of binary XML encodings, using a fast stream-based XQuery processor as our representative application. Instead of proposing one binary format and comparing it against standard XML parsers, we investigate the individual effects of several binary encoding techniques that are shared by many proposals. Our goal is to provide a deeper understanding of the performance impacts of binary XML encodings in order to clarify the ongoing and often contentious debate over their merits, particularly in the domain of high performance XML stream processing.",
"title": ""
}
] | scidocsrr |
65a1853af116c63a9854549e34fd9d75 | Texture-aware ASCII art synthesis with proportional fonts | [
{
"docid": "921b024ca0a99e3b7cd3a81154d70c66",
"text": "Image quality assessment (IQA) aims to use computational models to measure the image quality consistently with subjective evaluations. The well-known structural similarity index brings IQA from pixel- to structure-based stage. In this paper, a novel feature similarity (FSIM) index for full reference IQA is proposed based on the fact that human visual system (HVS) understands an image mainly according to its low-level features. Specifically, the phase congruency (PC), which is a dimensionless measure of the significance of a local structure, is used as the primary feature in FSIM. Considering that PC is contrast invariant while the contrast information does affect HVS' perception of image quality, the image gradient magnitude (GM) is employed as the secondary feature in FSIM. PC and GM play complementary roles in characterizing the image local quality. After obtaining the local quality map, we use PC again as a weighting function to derive a single quality score. Extensive experiments performed on six benchmark IQA databases demonstrate that FSIM can achieve much higher consistency with the subjective evaluations than state-of-the-art IQA metrics.",
"title": ""
},
{
"docid": "07a1d62b56bd1e2acf4282f69e85fb93",
"text": "Many state-of-the-art perceptual image quality assessment (IQA) algorithms share a common two-stage structure: local quality/distortion measurement followed by pooling. While significant progress has been made in measuring local image quality/distortion, the pooling stage is often done in ad-hoc ways, lacking theoretical principles and reliable computational models. This paper aims to test the hypothesis that when viewing natural images, the optimal perceptual weights for pooling should be proportional to local information content, which can be estimated in units of bit using advanced statistical models of natural images. Our extensive studies based upon six publicly-available subject-rated image databases concluded with three useful findings. First, information content weighting leads to consistent improvement in the performance of IQA algorithms. Second, surprisingly, with information content weighting, even the widely criticized peak signal-to-noise-ratio can be converted to a competitive perceptual quality measure when compared with state-of-the-art algorithms. Third, the best overall performance is achieved by combining information content weighting with multiscale structural similarity measures.",
"title": ""
}
] | [
{
"docid": "3d4cfb2d3ba1e70e5dd03060f5d5f663",
"text": "BACKGROUND\nAlzheimer's disease (AD) causes considerable distress in caregivers who are continuously required to deal with requests from patients. Coping strategies play a fundamental role in modulating the psychologic impact of the disease, although their role is still debated. The present study aims to evaluate the burden and anxiety experienced by caregivers, the effectiveness of adopted coping strategies, and their relationships with burden and anxiety.\n\n\nMETHODS\nEighty-six caregivers received the Caregiver Burden Inventory (CBI) and the State-Trait Anxiety Inventory (STAI Y-1 and Y-2). The coping strategies were assessed by means of the Coping Inventory for Stressful Situations (CISS), according to the model proposed by Endler and Parker in 1990.\n\n\nRESULTS\nThe CBI scores (overall and single sections) were extremely high and correlated with dementia severity. Women, as well as older caregivers, showed higher scores. The trait anxiety (STAI-Y-2) correlated with the CBI overall score. The CISS showed that caregivers mainly adopted task-focused strategies. Women mainly adopted emotion-focused strategies and this style was related to a higher level of distress.\n\n\nCONCLUSION\nAD is associated with high distress among caregivers. The burden strongly correlates with dementia severity and is higher in women and in elderly subjects. Chronic anxiety affects caregivers who mainly rely on emotion-oriented coping strategies. The findings suggest providing support to families of patients with AD through tailored strategies aimed to reshape the dysfunctional coping styles.",
"title": ""
},
{
"docid": "081da5941b0431d00b4058c26987d43f",
"text": "Artificial bee colony algorithm simulating the intelligent foraging behavior of honey bee swarms is one of the most popular swarm based optimization algorithms. It has been introduced in 2005 and applied in several fields to solve different problems up to date. In this paper, an artificial bee colony algorithm, called as Artificial Bee Colony Programming (ABCP), is described for the first time as a new method on symbolic regression which is a very important practical problem. Symbolic regression is a process of obtaining a mathematical model using given finite sampling of values of independent variables and associated values of dependent variables. In this work, a set of symbolic regression benchmark problems are solved using artificial bee colony programming and then its performance is compared with the very well-known method evolving computer programs, genetic programming. The simulation results indicate that the proposed method is very feasible and robust on the considered test problems of symbolic regression. 2012 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "98e9d8fb4a04ad141b3a196fe0a9c08b",
        "text": "Graphs are a powerful and universal data structure useful in various subfields of science and engineering. In this paper, we propose a new algorithm for subgraph isomorphism detection from a set of a priori known model graphs to an input graph that is given online. The new approach is based on a compact representation of the model graphs that is computed offline. Subgraphs that appear multiple times within the same or within different model graphs are represented only once, thus reducing the computational effort to detect them in an input graph. In the extreme case where all model graphs are highly similar, the run-time of the new algorithm becomes independent of the number of model graphs. Both a theoretical complexity analysis and practical experiments characterizing the performance of the new approach will be given. Index Terms: Graph matching, graph isomorphism, subgraph isomorphism, preprocessing.",
"title": ""
},
{
"docid": "f24f686a705a1546d211ac37d5cc2fdb",
"text": "In commercial-off-the-shelf (COTS) multi-core systems, a task running on one core can be delayed by other tasks running simultaneously on other cores due to interference in the shared DRAM main memory. Such memory interference delay can be large and highly variable, thereby posing a significant challenge for the design of predictable real-time systems. In this paper, we present techniques to provide a tight upper bound on the worst-case memory interference in a COTS-based multi-core system. We explicitly model the major resources in the DRAM system, including banks, buses and the memory controller. By considering their timing characteristics, we analyze the worst-case memory interference delay imposed on a task by other tasks running in parallel. To the best of our knowledge, this is the first work bounding the request re-ordering effect of COTS memory controllers. Our work also enables the quantification of the extent by which memory interference can be reduced by partitioning DRAM banks. We evaluate our approach on a commodity multi-core platform running Linux/RK. Experimental results show that our approach provides an upper bound very close to our measured worst-case interference.",
"title": ""
},
{
"docid": "894e4f975ce81a181025e65227e70b18",
"text": "Gesturing and motion control have become common as interaction methods for video games since the advent of the Nintendo Wii game console. Despite the growing number of motion-based control platforms for video games, no set of shared design heuristics for motion control across the platforms has been published. Our approach in this paper combines analysis of player experiences across platforms. We work towards a collection of design heuristics for motion-based control by studying game reviews in two motion-based control platforms, Xbox 360 Kinect and PlayStation 3 Move. In this paper we present an analysis of player problems within 256 game reviews, on which we ground a set of heuristics for motion-controlled games.",
"title": ""
},
{
"docid": "c89f44a3216a9411a42cb0a420f4b73b",
        "text": "Chemical fiber paper tubes are essential spinning equipment on the filament high-speed spinning and winding machines of the chemical fiber industry. Their precision directly affects the formation of the silk and determines the cost of the spinning process. Because of these accuracy requirements, paper tubes with defects must be detected and removed. Traditional industrial defect detection methods rely on hand-crafted target characteristics and capture only surface information; their detection efficiency and accuracy are difficult to improve, and because they depend on human judgment it is difficult to devise effective algorithms for some targets. Existing shallow learning algorithms also struggle to exploit deep features, so they cannot achieve good results. Based on the Faster-RCNN method in deep learning, this paper extracts deep features of the defective target with a Convolutional Neural Network (CNN), which effectively detects the internal joint defects that traditional algorithms cannot. For the external joints and damage flaws that traditional algorithms can detect, this method also yields better results, with experimental accuracy reaching up to 98.00%. Moreover, it can be applied under a variety of lighting conditions, reducing pretreatment steps and improving efficiency. The experimental results show that the method is effective and worthy of further research.",
"title": ""
},
{
"docid": "299e7f7d1c48d4a6a22c88dcf422f7a1",
"text": "Due to the advantages of deep learning, in this paper, a regularized deep feature extraction (FE) method is presented for hyperspectral image (HSI) classification using a convolutional neural network (CNN). The proposed approach employs several convolutional and pooling layers to extract deep features from HSIs, which are nonlinear, discriminant, and invariant. These features are useful for image classification and target detection. Furthermore, in order to address the common issue of imbalance between high dimensionality and limited availability of training samples for the classification of HSI, a few strategies such as L2 regularization and dropout are investigated to avoid overfitting in class data modeling. More importantly, we propose a 3-D CNN-based FE model with combined regularization to extract effective spectral-spatial features of hyperspectral imagery. Finally, in order to further improve the performance, a virtual sample enhanced method is proposed. The proposed approaches are carried out on three widely used hyperspectral data sets: Indian Pines, University of Pavia, and Kennedy Space Center. The obtained results reveal that the proposed models with sparse constraints provide competitive results to state-of-the-art methods. In addition, the proposed deep FE opens a new window for further research.",
"title": ""
},
{
"docid": "6bbc32ecaf54b9a51442f92edbc2604a",
"text": "Artificial bee colony (ABC), an optimization algorithm is a recent addition to the family of population based search algorithm. ABC has taken its inspiration from the collective intelligent foraging behavior of honey bees. In this study we have incorporated golden section search mechanism in the structure of basic ABC to improve the global convergence and prevent to stick on a local solution. The proposed variant is termed as ILS-ABC. Comparative numerical results with the state-of-art algorithms show the performance of the proposal when applied to the set of unconstrained engineering design problems. The simulated results show that the proposed variant can be successfully applied to solve real life problems.",
"title": ""
},
{
"docid": "407574abdcba82be2e9aea5a9b38c0a3",
"text": "In this paper, we investigate resource block (RB) assignment and modulation-and-coding scheme (MCS) selection to maximize downlink throughput of long-term evolution (LTE) systems, where all RB's assigned to the same user in any given transmission time interval (TTI) must use the same MCS. We develop several effective MCS selection schemes by using the effective packet-level SINR based on exponential effective SINR mapping (EESM), arithmetic mean, geometric mean, and harmonic mean. From both analysis and simulation results, we show that the system throughput of all the proposed schemes are better than that of the scheme in [7]. Furthermore, the MCS selection scheme using harmonic mean based effective packet-level SINR almost reaches the optimal performance and significantly outperforms the other proposed schemes.",
"title": ""
},
{
"docid": "1d51506f851a8b125edd7edcd8c6bd1b",
"text": "A stress-detection system is proposed based on physiological signals. Concretely, galvanic skin response (GSR) and heart rate (HR) are proposed to provide information on the state of mind of an individual, due to their nonintrusiveness and noninvasiveness. Furthermore, specific psychological experiments were designed to induce properly stress on individuals in order to acquire a database for training, validating, and testing the proposed system. Such system is based on fuzzy logic, and it described the behavior of an individual under stressing stimuli in terms of HR and GSR. The stress-detection accuracy obtained is 99.5% by acquiring HR and GSR during a period of 10 s, and what is more, rates over 90% of success are achieved by decreasing that acquisition period to 3-5 s. Finally, this paper comes up with a proposal that an accurate stress detection only requires two physiological signals, namely, HR and GSR, and the fact that the proposed stress-detection system is suitable for real-time applications.",
"title": ""
},
{
"docid": "a49c8e6f222b661447d1de32e29d0f16",
"text": "The discovery of ammonia oxidation by mesophilic and thermophilic Crenarchaeota and the widespread distribution of these organisms in marine and terrestrial environments indicated an important role for them in the global nitrogen cycle. However, very little is known about their physiology or their contribution to nitrification. Here we report oligotrophic ammonia oxidation kinetics and cellular characteristics of the mesophilic crenarchaeon ‘Candidatus Nitrosopumilus maritimus’ strain SCM1. Unlike characterized ammonia-oxidizing bacteria, SCM1 is adapted to life under extreme nutrient limitation, sustaining high specific oxidation rates at ammonium concentrations found in open oceans. Its half-saturation constant (Km = 133 nM total ammonium) and substrate threshold (≤10 nM) closely resemble kinetics of in situ nitrification in marine systems and directly link ammonia-oxidizing Archaea to oligotrophic nitrification. The remarkably high specific affinity for reduced nitrogen (68,700 l per g cells per h) of SCM1 suggests that Nitrosopumilus-like ammonia-oxidizing Archaea could successfully compete with heterotrophic bacterioplankton and phytoplankton. Together these findings support the hypothesis that nitrification is more prevalent in the marine nitrogen cycle than accounted for in current biogeochemical models.",
"title": ""
},
{
"docid": "703f0baf67a1de0dfb03b3192327c4cf",
"text": "Fleet management systems are commonly used to coordinate mobility and delivery services in a broad variety of domains. However, their traditional top-down control architecture becomes a bottleneck in open and dynamic environments, where scalability, proactiveness, and autonomy are becoming key factors for their success. Here, the authors present an abstract event-based architecture for fleet management systems that supports tailoring dynamic control regimes for coordinating fleet vehicles, and illustrate it for the case of medical emergency management. Then, they go one step ahead in the transition toward automatic or driverless fleets, by conceiving fleet management systems in terms of cyber-physical systems, and putting forward the notion of cyber fleets.",
"title": ""
},
{
"docid": "815feed9cce2344872c50da6ffb77093",
"text": "Over the last decade blogs became an important part of the Web, where people can announce anything that is on their mind. Due to their high popularity blogs have great potential to mine public opinions regarding products. Such knowledge is very valuable as it could be used to adjust marketing campaigns or advertisement of products accordingly. In this paper we investigate how the blogosphere can be used to predict the success of products in the domain of music and movies. We analyze and characterize the blogging behavior in both domains particularly around product releases, propose different methods for extracting characteristic features from the blogosphere, and show that our predictions correspond to the real world measures Sales Rank and box office revenue respectively.",
"title": ""
},
{
"docid": "d214ef50a5c26fb65d8c06ea7db3d07c",
"text": "We introduce a method for learning to generate the surface of 3D shapes. Our approach represents a 3D shape as a collection of parametric surface elements and, in contrast to methods generating voxel grids or point clouds, naturally infers a surface representation of the shape. Beyond its novelty, our new shape generation framework, AtlasNet, comes with significant advantages, such as improved precision and generalization capabilities, and the possibility to generate a shape of arbitrary resolution without memory issues. We demonstrate these benefits and compare to strong baselines on the ShapeNet benchmark for two applications: (i) autoencoding shapes, and (ii) single-view reconstruction from a still image. We also provide results showing its potential for other applications, such as morphing, parametrization, super-resolution, matching, and co-segmentation.",
"title": ""
},
{
"docid": "b7c0864be28d70d49ae4a28fb7d78f04",
"text": "UNLABELLED\nThe replacement of crowns and bridges is a common procedure for many dental practitioners. When correctly planned and executed, fixed prostheses will provide predictable function, aesthetics and value for money. However, when done poorly, they are more likely to fail prematurely and lead to irreversible damage to the teeth and supporting structures beneath. Sound diagnosis, assessment and technical skills are essential when dealing with failed or failing fixed restorations. These skills are essential for the 21st century dentist. This paper, with treated clinical examples, illustrates the areas of technical skill and clinical decisions needed for this type of work. It also provides advice on how the risk of premature failure can, in general, be further reduced. The article also confirms the very real risk in the UK of dento-legal problems when patients experience unexpected problems with their crowns and bridges.\n\n\nCLINICAL RELEVANCE\nThis paper outlines clinical implications of failed fixed prosthodontics to the dental surgeon. It also discusses factors that we can all use to predict and reduce the risk of premature restoration failure. Restoration design, clinical execution and patient factors are the most frequent reasons for premature problems. It is worth remembering (and informing patients) that the health of the underlying supporting dental tissue is often irreversibly compromised at the time of fixed restoration failure.",
"title": ""
},
{
"docid": "d5a9d2a212deee5057a0289f72b51d9b",
"text": "Compared to supervised feature selection, unsupervised feature selection tends to be more challenging due to the lack of guidance from class labels. Along with the increasing variety of data sources, many datasets are also equipped with certain side information of heterogeneous structure. Such side information can be critical for feature selection when class labels are unavailable. In this paper, we propose a new feature selection method, SideFS, to exploit such rich side information. We model the complex side information as a heterogeneous network and derive instance correlations to guide subsequent feature selection. Representations are learned from the side information network and the feature selection is performed in a unified framework. Experimental results show that the proposed method can effectively enhance the quality of selected features by incorporating heterogeneous side information.",
"title": ""
},
{
"docid": "3294f746432ba9746a8cc8082a1021f7",
        "text": "CRYPTONITE is a programmable processor tailored to the needs of crypto algorithms. The design of CRYPTONITE was based on an in-depth application analysis in which standard crypto algorithms (AES, DES, MD5, SHA-1, etc.) were distilled down to their core functionality. We describe this methodology and use AES as a central example. Starting with a functional description of AES, we give a high-level account of how to implement AES efficiently in hardware, and present several novel optimizations (which are independent of CRYPTONITE). We then describe the CRYPTONITE architecture, highlighting how AES implementation issues influenced the design of the processor and its instruction set. CRYPTONITE is designed to run at high clock rates and be easy to implement in silicon while providing a significantly better performance/area/power tradeoff than general purpose processors.",
"title": ""
},
{
"docid": "f9765c97a101a163a486b18e270d67f5",
        "text": "We present a formulation of deep learning that aims at producing a large margin classifier. The notion of margin, minimum distance to a decision boundary, has served as the foundation of several theoretically profound and empirically successful results for both classification and regression tasks. However, most large margin algorithms are applicable only to shallow models with a preset feature representation; and conventional margin methods for neural networks only enforce margin at the output layer. Such methods are therefore not well suited for deep networks. In this work, we propose a novel loss function to impose a margin on any chosen set of layers of a deep network (including input and hidden layers). Our formulation allows choosing any lp norm (p ≥ 1) on the metric measuring the margin. We demonstrate that the decision boundary obtained by our loss has nice properties compared to standard classification loss functions. Specifically, we show improved empirical results on the MNIST, CIFAR-10 and ImageNet datasets on multiple tasks: generalization from small training sets, corrupted labels, and robustness against adversarial perturbations. The resulting loss is general and complementary to existing data augmentation (such as random/adversarial input transform) and regularization techniques such as weight decay, dropout, and batch norm.",
"title": ""
},
{
"docid": "1ed9151f81e15db5bb08a7979d5eeddb",
"text": "Deep learning has delivered its powerfulness in many application domains, especially in image and speech recognition. As the backbone of deep learning, deep neural networks (DNNs) consist of multiple layers of various types with hundreds to thousands of neurons. Embedded platforms are now becoming essential for deep learning deployment due to their portability, versatility, and energy efficiency. The large model size of DNNs, while providing excellent accuracy, also burdens the embedded platforms with intensive computation and storage. Researchers have investigated on reducing DNN model size with negligible accuracy loss. This work proposes a Fast Fourier Transform (FFT)-based DNN training and inference model suitable for embedded platforms with reduced asymptotic complexity of both computation and storage, making our approach distinguished from existing approaches. We develop the training and inference algorithms based on FFT as the computing kernel and deploy the FFT-based inference model on embedded platforms achieving extraordinary processing speed.",
"title": ""
},
{
"docid": "808de7fe99686dabb5b1ea28187cd406",
"text": "Automated Guided Vehicles (AGVs) are being increasingly used for intelligent transportation and distribution of materials in warehouses and auto-production lines. In this paper, a preliminary hazard analysis of an AGV’s critical components is conducted by the approach of Failure Modes Effects and Criticality Analysis (FMECA). To implement this research, a particular AGV transport system is modelled as a phased mission. Then, Fault Tree Analysis (FTA) is adopted to model the causes of phase failure, enabling the probability of success in each phase and hence mission success to be determined. Through this research, a promising technical approach is established, which allows the identification of the critical AGV components and crucial mission phases of AGVs at the design stage. 1998 ACM Subject Classification B.8 Performance and Reliability",
"title": ""
}
] | scidocsrr |
484b12bbed6ea301f2f8b5acb6e011dd | A big data architecture for managing oceans of data and maritime applications | [
{
"docid": "ebd0d534a87c3cd25eb276ea81af1860",
"text": "As the challenge of our time, Big Data still has many research hassles, especially the variety of data. The high diversity of data sources often results in information silos, a collection of non-integrated data management systems with heterogeneous schemas, query languages, and APIs. Data Lake systems have been proposed as a solution to this problem, by providing a schema-less repository for raw data with a common access interface. However, just dumping all data into a data lake without any metadata management, would only lead to a 'data swamp'. To avoid this, we propose Constance, a Data Lake system with sophisticated metadata management over raw data extracted from heterogeneous data sources. Constance discovers, extracts, and summarizes the structural metadata from the data sources, and annotates data and metadata with semantic information to avoid ambiguities. With embedded query rewriting engines supporting structured data and semi-structured data, Constance provides users a unified interface for query processing and data exploration. During the demo, we will walk through each functional component of Constance. Constance will be applied to two real-life use cases in order to show attendees the importance and usefulness of our generic and extensible data lake system.",
"title": ""
},
{
"docid": "461ee7b6a61a6d375a3ea268081f80f5",
"text": "In this paper, we review the background and state-of-the-art of big data. We first introduce the general background of big data and review related technologies, such as could computing, Internet of Things, data centers, and Hadoop. We then focus on the four phases of the value chain of big data, i.e., data generation, data acquisition, data storage, and data analysis. For each phase, we introduce the general background, discuss the technical challenges, and review the latest advances. We finally examine the several representative applications of big data, including enterprise management, Internet of Things, online social networks, medial applications, collective intelligence, and smart grid. These discussions aim to provide a comprehensive overview and big-picture to readers of this exciting area. This survey is concluded with a discussion of open problems and future directions.",
"title": ""
}
] | [
{
"docid": "c0fd9b73e2af25591e3c939cdbed1c1a",
"text": "We propose a new end-to-end single image dehazing method, called Densely Connected Pyramid Dehazing Network (DCPDN), which can jointly learn the transmission map, atmospheric light and dehazing all together. The end-to-end learning is achieved by directly embedding the atmospheric scattering model into the network, thereby ensuring that the proposed method strictly follows the physics-driven scattering model for dehazing. Inspired by the dense network that can maximize the information flow along features from different levels, we propose a new edge-preserving densely connected encoder-decoder structure with multi-level pyramid pooling module for estimating the transmission map. This network is optimized using a newly introduced edge-preserving loss function. To further incorporate the mutual structural information between the estimated transmission map and the dehazed result, we propose a joint-discriminator based on generative adversarial network framework to decide whether the corresponding dehazed image and the estimated transmission map are real or fake. An ablation study is conducted to demonstrate the effectiveness of each module evaluated at both estimated transmission map and dehazed result. Extensive experiments demonstrate that the proposed method achieves significant improvements over the state-of-the-art methods. Code and dataset is made available at: https://github.com/hezhangsprinter/DCPDN",
"title": ""
},
{
"docid": "f9161b68fef96e0e3141e2d45effa33a",
        "text": "Water molecules can be affected by magnetic fields (MF) due to their bipolar characteristics. In the present study maize plants, from sowing to the end of the generative stage, were irrigated with magnetically treated water (MTW). Tap water was treated with MF by passing it through a locally designed alternating magnetic field generating apparatus (110 mT). Irrigation with MTW increased the ear length and fresh weight, 100-grain fresh and dry weights, and water productivity (119.5%, 119.1%, 114.2%, 116.6% and 122.3%, respectively), compared with the control groups. Levels of photosynthetic pigments i.e. chlorophyll a and b, and the contents of anthocyanin and flavonoids of the leaves were increased compared to those of non-treated ones. Increase of the activity of superoxide dismutase (SOD) and ascorbate peroxidase (APX) in leaves of the treated plants efficiently scavenged active oxygen species and resulted in the maintenance of photosynthetic membranes and reduction of malondealdehyde. Total ferritin, sugar, iron and calcium contents of kernels of MTW-irrigated plants were respectively 122.9%, 167.4%, 235% and 185% of the control ones. From the results presented here it can be concluded that the influence of MF on living plant cells, at least in part, is mediated by water. The results also suggest that irrigation of maize plants with MTW can be applied as a useful method for improvement of their quantity and quality.",
"title": ""
},
{
"docid": "796ae2d702a66d7af19ac4bb6a52aa6b",
"text": "Methods for embedding secret data are more sophisticated than their ancient predecessors, but the basic principles remain unchanged.",
"title": ""
},
{
"docid": "f4380a5acaba5b534d13e1a4f09afe4f",
"text": "Several approaches to automatic speech summarization are discussed below, using the ICSI Meetings corpus. We contrast feature-based approaches using prosodic and lexical features with maximal marginal relevance and latent semantic analysis approaches to summarization. While the latter two techniques are borrowed directly from the field of text summarization, feature-based approaches using prosodic information are able to utilize characteristics unique to speech data. We also investigate how the summarization results might deteriorate when carried out on ASR output as opposed to manual transcripts. All of the summaries are of an extractive variety, and are compared using the software ROUGE.",
"title": ""
},
{
"docid": "6f1e71399e5786eb9c3923a1e967cd8f",
"text": "This Working Paper should not be reported as representing the views of the IMF. The views expressed in this Working Paper are those of the author(s) and do not necessarily represent those of the IMF or IMF policy. Working Papers describe research in progress by the author(s) and are published to elicit comments and to further debate. Using a dataset which breaks down FDI flows into primary, secondary and tertiary sector investments and a GMM dynamic approach to address concerns about endogeneity, the paper analyzes various macroeconomic, developmental, and institutional/qualitative determinants of FDI in a sample of emerging market and developed economies. While FDI flows into the primary sector show little dependence on any of these variables, secondary and tertiary sector investments are affected in different ways by countries’ income levels and exchange rate valuation, as well as development indicators such as financial depth and school enrollment, and institutional factors such as judicial independence and labor market flexibility. Finally, we find that the effect of these factors often differs between advanced and emerging economies. JEL Classification Numbers: F21, F23",
"title": ""
},
{
"docid": "7cf8e1e356c8e5d00bc975e001c40384",
"text": "We present NeuroSAT, a message passing neural network that learns to solve SAT problems after only being trained as a classifier to predict satisfiability. Although it is not competitive with state-of-the-art SAT solvers, NeuroSAT can solve problems that are substantially larger and more difficult than it ever saw during training by simply running for more iterations. Moreover, NeuroSAT generalizes to novel distributions; after training only on random SAT problems, at test time it can solve SAT problems encoding graph coloring, clique detection, dominating set, and vertex cover problems, all on a range of distributions over small random graphs.",
"title": ""
},
{
"docid": "60ad412d0d6557d2a06e9914bbf3c680",
"text": "Helpfulness of online reviews is a multi-faceted concept that can be driven by several types of factors. This study was designed to extend existing research on online review helpfulness by looking at not just the quantitative factors (such as word count), but also qualitative aspects of reviewers (including reviewer experience, reviewer impact, reviewer cumulative helpfulness). This integrated view uncovers some insights that were not available before. Our findings suggest that word count has a threshold in its effects on review helpfulness. Beyond this threshold, its effect diminishes significantly or becomes near non-existent. Reviewer experience and their impact were not statistically significant predictors of helpfulness, but past helpfulness records tended to predict future helpfulness ratings. Review framing was also a strong predictor of helpfulness. As a result, characteristics of reviewers and review messages have a varying degree of impact on review helpfulness. Theoretical and practical implications are discussed. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "60716b31303314598ac2f68d45c6cb51",
"text": "Female genital cosmetic surgery procedures have gained popularity in the West in recent years. Marketing by surgeons promotes the surgeries, but professional organizations have started to question the promotion and practice of these procedures. Despite some surgeon claims of drastic transformations of psychological, emotional, and sexual life associated with the surgery, little reliable evidence of such effects exists. This article achieves two objectives. First, reviewing the published academic work on the topic, it identifies the current state of knowledge around female genital cosmetic procedures, as well as limitations in our knowledge. Second, examining a body of critical scholarship that raises sociological and psychological concerns not typically addressed in medical literature, it summarizes broader issues and debates. Overall, the article demonstrates a paucity of scientific knowledge and highlights a pressing need to consider the broader ramifications of surgical practices. \"Today we have a whole society held in thrall to the drastic plastic of labial rejuvenation.\"( 1 ) \"At the present time, the field of female cosmetic genital surgery is like the old Wild, Wild West: wide open and unregulated\"( 2 ).",
"title": ""
},
{
"docid": "6ef6cbb60da56bfd53ae945480908d3c",
"text": "OBJECTIVE\nIn multidisciplinary prenatal diagnosis centers, the search for a tetrasomy 12p mosaic is requested following the discovery of a diaphragmatic hernia in the antenatal period. Thus, the series of Pallister Killian syndromes (PKS: OMIM 601803) probably overestimate the prevalence of diaphragmatic hernia in this syndrome to the detriment of other morphological abnormalities.\n\n\nMETHODS\nA multicenter retrospective study was conducted with search for assistance from members of the French society for Fetal Pathology. For each identified case, we collected all antenatal and postnatal data. Antenatal data were compared with data from the clinicopathological examination to assess the adequacy of sonographic signs of PKS. A review of the literature on antenatal morphological anomalies in case of PKS completed the study.\n\n\nRESULTS\nTen cases were referred to us: 7 had cytogenetic confirmation and 6 had ultrasound screening. In the prenatal as well as post mortem period, the most common sign is facial dysmorphism (5 cases/6). A malformation of limbs is reported in half of the cases (3 out of 6). Ultrasound examination detected craniofacial dysmorphism in 5 cases out of 6. We found 1 case of left diaphragmatic hernia. Our results are in agreement with the malformation spectrum described in the literature.\n\n\nCONCLUSION\nSome malformation associations could evoke a SPK without classical diaphragmatic hernia.",
"title": ""
},
{
"docid": "10ebcd3a97863037b5bdab03c06dd0e1",
"text": "Nonlinear dynamical systems are ubiquitous in science and engineering, yet many issues still exist related to the analysis and prediction of these systems. Koopman theory circumvents these issues by transforming the finite-dimensional nonlinear dynamics to a linear dynamical system of functions in an infinite-dimensional Hilbert space of observables. The eigenfunctions of the Koopman operator evolve linearly in time and thus provide a natural coordinate system for simplifying the dynamical behaviors of the system. We consider a family of observable functions constructed by projecting the delay coordinates of the system onto the eigenvectors of the autocorrelation function, which can be regarded as continuous SVD basis vectors for time-delay observables. We observe that these functions are the most parsimonious basis of observables for a system with Koopman mode decomposition of order N , in the sense that the associated Koopman eigenfunctions are guaranteed to lie in the span of the first N of these coordinates. We conjecture and prove a number of theoretical results related to the quality of these approximations in the more general setting where the system has mixed spectra or the coordinates are otherwise insufficient to capture the full spectral information. We prove a related and very general result that the dynamics of the observables generated by projecting delay coordinates onto an arbitrary orthonormal basis are systemindependent and depend only on the choice of basis, which gives a highly efficient way of computing representations of the Koopman operator in these coordinates. We show that this formalism provides a theoretical underpinning for the empirical results in [8], which found that chaotic dynamical systems can be approximately factored into intermittently forced linear systems when viewed in delay coordinates. 
Finally, we compute these time delay observables for a number of example dynamical systems and show that empirical results match our theory.",
"title": ""
},
{
"docid": "c45b962006b2bb13ab57fe5d643e2ca6",
"text": "Physical activity has a positive impact on people's well-being, and it may also decrease the occurrence of chronic diseases. Activity recognition with wearable sensors can provide feedback to the user about his/her lifestyle regarding physical activity and sports, and thus, promote a more active lifestyle. So far, activity recognition has mostly been studied in supervised laboratory settings. The aim of this study was to examine how well the daily activities and sports performed by the subjects in unsupervised settings can be recognized compared to supervised settings. The activities were recognized by using a hybrid classifier combining a tree structure containing a priori knowledge and artificial neural networks, and also by using three reference classifiers. Activity data were collected for 68 h from 12 subjects, out of which the activity was supervised for 21 h and unsupervised for 47 h. Activities were recognized based on signal features from 3-D accelerometers on hip and wrist and GPS information. The activities included lying down, sitting and standing, walking, running, cycling with an exercise bike, rowing with a rowing machine, playing football, Nordic walking, and cycling with a regular bike. The total accuracy of the activity recognition using both supervised and unsupervised data was 89% that was only 1% unit lower than the accuracy of activity recognition using only supervised data. However, the accuracy decreased by 17% unit when only supervised data were used for training and only unsupervised data for validation, which emphasizes the need for out-of-laboratory data in the development of activity-recognition systems. The results support a vision of recognizing a wider spectrum, and more complex activities in real life settings.",
"title": ""
},
{
"docid": "be593352763133428b837f1c593f30cf",
"text": "Deep Learning’s recent successes have mostly relied on Convolutional Networks, which exploit fundamental statistical properties of images, sounds and video data: the local stationarity and multi-scale compositional structure, that allows expressing long range interactions in terms of shorter, localized interactions. However, there exist other important examples, such as text documents or bioinformatic data, that may lack some or all of these strong statistical regularities. In this paper we consider the general question of how to construct deep architectures with small learning complexity on general non-Euclidean domains, which are typically unknown and need to be estimated from the data. In particular, we develop an extension of Spectral Networks which incorporates a Graph Estimation procedure, that we test on large-scale classification problems, matching or improving over Dropout Networks with far less parameters to estimate.",
"title": ""
},
{
"docid": "5d60a9e9475acda268fc8216a98e6162",
"text": "Conventional topic modeling schemes, such as Latent Dirichlet Allocation, are known to perform inadequately when applied to tweets, due to the sparsity of short documents. To alleviate these disadvantages, we apply several pooling techniques, aggregating similar tweets into individual documents, and specifically study the aggregation of tweets sharing authors or hashtags. The results show that aggregating similar tweets into individual documents significantly increases topic coherence.",
"title": ""
},
{
"docid": "faed829d4fc252159a0ed5e7ff1eea07",
"text": "Modern cryptographic practice rests on the use of one-way functions, which are easy to evaluate but difficult to invert. Unfortunately, commonly used one-way functions are either based on unproven conjectures or have known vulnerabilities. We show that instead of relying on number theory, the mesoscopic physics of coherent transport through a disordered medium can be used to allocate and authenticate unique identifiers by physically reducing the medium's microstructure to a fixed-length string of binary digits. These physical one-way functions are inexpensive to fabricate, prohibitively difficult to duplicate, admit no compact mathematical representation, and are intrinsically tamper-resistant. We provide an authentication protocol based on the enormous address space that is a principal characteristic of physical one-way functions.",
"title": ""
},
{
"docid": "bde4e8743d2146d3ee9af39f27d14b5a",
"text": "For several decades now, there has been sporadic interest in automatically characterizing the speech impairment due to Parkinson's disease (PD). Most early studies were confined to quantifying a few speech features that were easy to compute. More recent studies have adopted a machine learning approach where a large number of potential features are extracted and the models are learned automatically from the data. In the same vein, here we characterize the disease using a relatively large cohort of 168 subjects, collected from multiple (three) clinics. We elicited speech using three tasks - the sustained phonation task, the diadochokinetic task and a reading task, all within a time budget of 4 minutes, prompted by a portable device. From these recordings, we extracted 1582 features for each subject using openSMILE, a standard feature extraction tool. We compared the effectiveness of three strategies for learning a regularized regression and find that ridge regression performs better than lasso and support vector regression for our task. We refine the feature extraction to capture pitch-related cues, including jitter and shimmer, more accurately using a time-varying harmonic model of speech. Our results show that the severity of the disease can be inferred from speech with a mean absolute error of about 5.5, explaining 61% of the variance and consistently well-above chance across all clinics. Of the three speech elicitation tasks, we find that the reading task is significantly better at capturing cues than diadochokinetic or sustained phonation task. In all, we have demonstrated that the data collection and inference can be fully automated, and the results show that speech-based assessment has promising practical application in PD. The techniques reported here are more widely applicable to other paralinguistic tasks in clinical domain.",
"title": ""
},
{
"docid": "ca1d5c5da03fb9c3b6f7c023dc8f9e9c",
"text": "Recent introduction of all-oral direct-acting antiviral (DAA) treatment has revolutionized care of patients with chronic hepatitis C virus infection. Because patients with different liver disease stages have been treated with great success including those awaiting liver transplantation, therapy has been extended to patients with hepatocellular carcinoma as well. From observational studies among compensated cirrhotic hepatitis C patients treated with interferon-containing regimens, it would have been expected that the rate of hepatocellular carcinoma occurrence is markedly decreased after a sustained virological response. However, recently 2 studies have been published reporting markedly increased rates of tumor recurrence and occurrence after viral clearance with DAA agents. Over the last decades, it has been established that chronic antigen stimulation during persistent infection with hepatitis C virus is associated with continuous activation and impaired function of several immune cell populations, such as natural killer cells and virus-specific T cells. This review therefore focuses on recent studies evaluating the restoration of adaptive and innate immune cell populations after DAA therapy in patients with chronic hepatitis C virus infection in the context of the immune responses in hepatocarcinogenesis.",
"title": ""
},
{
"docid": "9a82781af933251208aef5e683839346",
"text": "We present a comprehensive overview of the stereoscopic Intel RealSense RGBD imaging systems. We discuss these systems’ mode-of-operation, functional behavior and include models of their expected performance, shortcomings, and limitations. We provide information about the systems’ optical characteristics, their correlation algorithms, and how these properties can affect different applications, including 3D reconstruction and gesture recognition. Our discussion covers the Intel RealSense R200 and the Intel RealSense D400 (formally RS400).",
"title": ""
},
{
"docid": "74beaea9eccab976dc1ee7b2ddf3e4ca",
"text": "We develop theory that distinguishes trust among employees in typical task contexts (marked by low levels of situational unpredictability and danger) from trust in “highreliability” task contexts (those marked by high levels of situational unpredictability and danger). A study of firefighters showed that trust in high-reliability task contexts was based on coworkers’ integrity, whereas trust in typical task contexts was also based on benevolence and identification. Trust in high-reliability contexts predicted physical symptoms, whereas trust in typical contexts predicted withdrawal. Job demands moderated linkages with performance: trust in high-reliability task contexts was a more positive predictor of performance when unpredictable and dangerous calls were more frequent.",
"title": ""
},
{
"docid": "bed6312dd677fa37c30e72d0383973ed",
"text": " Fig.1にマスタリーラーニングのアウトラインを示す。 初めに教師はカリキュラムや教材をコンセプトやアイディアが重要であるためレビューする必要がある。 次に教師による診断手段や診断プロセスという形式的評価の計画である。また学習エラーを改善するための Corrective Activitiesの計画の主要な援助でもある。 Corrective Activites 矯正活動にはさまざまな形がとられる。Peer Cross-age Tutoring、コンピュータ支援レッスンなど Enrichment Activities 問題解決練習の特別なtutoringであり、刺激的で早熟な学習者に実りのある学習となっている。 Formative Assesment B もしCorrective Activitiesが学習者を改善しているのならばこの2回目の評価では体得を行っている。 この2回目の評価は学習者に改善されていることや良い学習者になっていることを示し、強力なモチベーショ ンのデバイスとなる。最後は累積的試験または評価の開発がある。",
"title": ""
},
{
"docid": "54af3c39dba9aafd5b638d284fd04345",
"text": "In this paper, Principal Component Analysis (PCA), Most Discriminant Features (MDF), and Regularized-Direct Linear Discriminant Analysis (RD-LDA) - based feature extraction approaches are tested and compared in an experimental personal recognition system. The system is multimodal and bases on features extracted from nine regions of an image of the palmar surface of the hand. For testing purposes 10 gray-scale images of right hand of 184 people were acquired. The experiments have shown that the best results are obtained with the RD-LDA - based features extraction approach (100% correctness for 920 identification tests and EER = 0.01% for 64170 verification tests).",
"title": ""
}
] | scidocsrr |
000f74958b907e8493f448a5103ae311 | Assessing and moving on from the dominant project management discourse in the light of project overruns | [
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "b269bb721ca2a75fd6291295493b7af8",
"text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles.",
"title": ""
}
] | [
{
"docid": "e6804e9bfadec46aa25b7edf86bf04e6",
"text": "An evolutionary optimization method over continuous search spaces, differential evolution, has recently been successfully applied to real world and artificial optimization problems and proposed also for neural network training. However, differential evolution has not been comprehensively studied in the context of training neural network weights, i.e., how useful is differential evolution in finding the global optimum for expense of convergence speed. In this study, differential evolution has been analyzed as a candidate global optimization method for feed-forward neural networks. In comparison to gradient based methods, differential evolution seems not to provide any distinct advantage in terms of learning rate or solution quality. Differential evolution can rather be used in validation of reached optima and in the development of regularization terms and non-conventional transfer functions that do not necessarily provide gradient information.",
"title": ""
},
{
"docid": "4bf9ec9d1600da4eaffe2bfcc73ee99f",
"text": "Data mining, the extraction of hidden predictive information from large databases, is a powerful new technology with great potential to help companies focus on the most important information in their data warehouses. Nowadays, large amount of data and information are available, Data can now be stored in many different kinds of databases and information repositories, being available on the Internet. There is a need for powerful techniques for better interpretation of these data that exceeds the human's ability for comprehension and making decision in a better way. There are data mining, web mining and knowledge discovery tools and software packages such as WEKA Tool and RapidMiner tool. The work deals with analysis of WEKA, RapidMiner and NetTools spider tools KNIME and Orange. There are various tools available for data mining and web mining. Therefore awareness is required about the quantitative investigation of these tools. This paper focuses on various functional, practical, cognitive as well as analysis aspects that users may be looking for in the tools. Complete study addresses the usefulness and importance of these tools including various aspects. Analysis presents various benefits of these data mining tools along with desired aspects and the features of current tools. KEYWORDSData Mining, KDD, Data Mining Tools.",
"title": ""
},
{
"docid": "8b4e09bb13d3d01d3954f32cbb4c9e27",
"text": "Higher-level semantics such as visual attributes are crucial for fundamental multimedia applications. We present a novel attribute discovery approach that can automatically identify, model and name attributes from an arbitrary set of image and text pairs that can be easily gathered on the Web. Different from conventional attribute discovery methods, our approach does not rely on any pre-defined vocabularies and human labeling. Therefore, we are able to build a large visual knowledge base without any human efforts. The discovery is based on a novel deep architecture, named Independent Component Multimodal Autoencoder (ICMAE), that can continually learn shared higher-level representations across the visual and textual modalities. With the help of the resultant representations encoding strong visual and semantic evidences, we propose to (a) identify attributes and their corresponding high-quality training images, (b) iteratively model them with maximum compactness and comprehensiveness, and (c) name the attribute models with human understandable words. To date, the proposed system has discovered 1,898 attributes over 1.3 million pairs of image and text. Extensive experiments on various real-world multimedia datasets demonstrate the quality and effectiveness of the discovered attributes, facilitating multimedia applications such as image annotation and retrieval as compared to the state-of-the-art approaches.",
"title": ""
},
{
"docid": "cbaa93c56f770fc9a1fb4b633b8e4a02",
"text": "Jpred (http://www.compbio.dundee.ac.uk/jpred) is a secondary structure prediction server powered by the Jnet algorithm. Jpred performs over 1000 predictions per week for users in more than 50 countries. The recently updated Jnet algorithm provides a three-state (alpha-helix, beta-strand and coil) prediction of secondary structure at an accuracy of 81.5%. Given either a single protein sequence or a multiple sequence alignment, Jpred derives alignment profiles from which predictions of secondary structure and solvent accessibility are made. The predictions are presented as coloured HTML, plain text, PostScript, PDF and via the Jalview alignment editor to allow flexibility in viewing and applying the data. The new Jpred 3 server includes significant usability improvements that include clearer feedback of the progress or failure of submitted requests. Functional improvements include batch submission of sequences, summary results via email and updates to the search databases. A new software pipeline will enable Jnet/Jpred to continue to be updated in sync with major updates to SCOP and UniProt and so ensures that Jpred 3 will maintain high-accuracy predictions.",
"title": ""
},
{
"docid": "8ebb412ce5ded7393daf98a62bc41792",
"text": "It has recently been reported that dogs affected by canine heartworm disease (Dirofilaria immitis) can show an increase in plasma levels of myoglobin and cardiac troponin I, two markers of muscle/myocardial injury. In order to determine if this increase is due to myocardial damage, the right ventricle of 24 naturally infected dogs was examined by routine histology and immunohistochemistry with anti-myoglobin and anti-cardiac troponin I antibodies. Microscopic lesions included necrosis and myocyte vacuolization, and were associated with loss of staining for one or both proteins. Results confirm that increased levels of myoglobin and cardiac troponin I are indicative of myocardial damage in dogs affected by heartworm disease.",
"title": ""
},
{
"docid": "da5ad61c492419515e8449b435b42e80",
"text": "Camera tracking is an important issue in many computer vision and robotics applications, such as, augmented reality and Simultaneous Localization And Mapping (SLAM). In this paper, a feature-based technique for monocular camera tracking is proposed. The proposed approach is based on tracking a set of sparse features, which are successively tracked in a stream of video frames. In the developed system, camera initially views a chessboard with known cell size for few frames to be enabled to construct initial map of the environment. Thereafter, Camera pose estimation for each new incoming frame is carried out in a framework that is merely working with a set of visible natural landmarks. Estimation of 6-DOF camera pose parameters is performed using a particle filter. Moreover, recovering depth of newly detected landmarks, a linear triangulation method is used. The proposed method is applied on real world videos and positioning error of the camera pose is less than 3 cm in average that indicates effectiveness and accuracy of the proposed method.",
"title": ""
},
{
"docid": "3e57e054e659f78d6bc88de7915b0d85",
"text": "While some unmanned aerial vehicles (UAVs) have the capacity to carry mechanically stabilized camera equipment, weight limits or other problems may make mechanical stabilization impractical. As a result many UAVs rely on fixed cameras to provide a video stream to an operator or observer. With a fixed camera, the video stream is often unsteady due to the multirotor's movement from wind and acceleration. These video streams are often analyzed by both humans and machines, and the unwanted camera movement can cause problems for both. For a human observer, unwanted movement may simply make it harder to follow the video, while for computer algorithms, it may severely impair the algorithm's intended function. There has been significant research on how to stabilize videos using feature tracking to determine camera movement, which in turn is used to manipulate frames and stabilize the camera stream. We believe, however, that this process could be greatly simplified by using data from a UAV's on-board inertial measurement unit (IMU) to stabilize the camera feed. In this paper we present an algorithm for video stabilization based only on IMU data from a UAV platform. Our results show that our algorithm successfully stabilizes the camera stream with the added benefit of requiring less computational power.",
"title": ""
},
{
"docid": "6f2aff1eb092fffc80aaf26e3c1877ca",
"text": "With the advent of social networks and micro-blogging systems, the way of communicating with other people and spreading information has changed substantially. Persons with different backgrounds, age and education exchange information and opinions, spanning various domains and topics, and have now the possibility to directly interact with popular users and authoritative information sources usually unreachable before the advent of these environments. As a result, the mechanism of information propagation changed deeply, the study of which is indispensable for the sake of understanding the evolution of information networks. To cope up with this intention, in this paper, we propose a novel model which enables to delve into the spread of information over a social network along with the change in the user relationships with respect to the domain of discussion. For this, considering Twitter as a case study, we aim at analyzing the multiple paths the information follows over the network with the goal of understanding the dynamics of the information contagion with respect to the change of the topic of discussion. We then provide a method for estimating the influence among users by evaluating the nature of the relationship among them with respect to the topic of discussion they share. Using a vast sample of the Twitter network, we then present various experiments that illustrate our proposal and show the efficacy of the proposed approach in modeling this information spread.",
"title": ""
},
{
"docid": "9c67049b5f934b47346592b73bc57dbe",
"text": "In this paper, the problem of switching stabilization for a class of switched nonlinear systems is studied by using average dwell time (ADT) switching, where the subsystems are possibly all unstable. First, a new concept of ADT is given, which is different from the traditional definition of ADT. Based on the new proposed switching signals, a sufficient condition of stabilization for switched nonlinear systems with unstable subsystems is derived. Then, the T-S fuzzy modeling approach is applied to represent the underlying nonlinear system to make the obtained condition easily verified. A novel multiple quadratic Lyapunov function approach is also proposed, by which some conditions are provided in terms of a set of linear matrix inequalities to guarantee the derived T-S fuzzy system to be asymptotically stable. Finally, a numerical example is given to demonstrate the effectiveness of our developed results.",
"title": ""
},
{
"docid": "77cea98467305b9b3b11de8d3cec6ec2",
"text": "NoSQL and especially graph databases are constantly gaining popularity among developers of Web 2.0 applications as they promise to deliver superior performance when handling highly interconnected data compared to traditional relational databases. Apache Shindig is the reference implementation for OpenSocial with its highly interconnected data model. However, the default back-end is based on a relational database. In this paper we describe our experiences with a different back-end based on the graph database Neo4j and compare the alternatives for querying data with each other and the JPA-based sample back-end running on MySQL. Moreover, we analyze why the different approaches often may yield such diverging results concerning throughput. The results show that the graph-based back-end can match and even outperform the traditional JPA implementation and that Cypher is a promising candidate for a standard graph query language, but still leaves room for improvements.",
"title": ""
},
{
"docid": "fe79c1c71112b3b40e047db6030aaff9",
"text": "We are at a key juncture in history where biodiversity loss is occurring daily and accelerating in the face of population growth, climate change, and rampant development. Simultaneously, we are just beginning to appreciate the wealth of human health benefits that stem from experiencing nature and biodiversity. Here we assessed the state of knowledge on relationships between human health and nature and biodiversity, and prepared a comprehensive listing of reported health effects. We found strong evidence linking biodiversity with production of ecosystem services and between nature exposure and human health, but many of these studies were limited in rigor and often only correlative. Much less information is available to link biodiversity and health. However, some robust studies indicate that exposure to microbial biodiversity can improve health, specifically in reducing certain allergic and respiratory diseases. Overall, much more research is needed on mechanisms of causation. Also needed are a reenvisioning of land-use planning that places human well-being at the center and a new coalition of ecologists, health and social scientists and planners to conduct research and develop policies that promote human interaction with nature and biodiversity. Improvements in these areas should enhance human health and ecosystem, community, as well as human resilience. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).",
"title": ""
},
{
"docid": "bf499e8252cac48cdd406699c8413e16",
"text": "Most research in reading comprehension has focused on answering questions based on individual documents or even single paragraphs. We introduce a method which integrates and reasons relying on information spread within documents and across multiple documents. We frame it as an inference problem on a graph. Mentions of entities are nodes of this graph where edges encode relations between different mentions (e.g., withinand cross-document co-references). Graph convolutional networks (GCNs) are applied to these graphs and trained to perform multi-step reasoning. Our Entity-GCN method is scalable and compact, and it achieves state-of-the-art results on the WIKIHOP dataset (Welbl et al., 2017).",
"title": ""
},
{
"docid": "79685eeb67edbb3fbb6e6340fac420c3",
"text": "Fatma Özcan IBM Almaden Research Center San Jose, CA [email protected] Nesime Tatbul Intel Labs and MIT Cambridge, MA [email protected] Daniel J. Abadi Yale University New Haven, CT [email protected] Marcel Kornacker Cloudera San Francisco, CA [email protected] C Mohan IBM Almaden Research Center San Jose, CA [email protected] Karthik Ramasamy Twitter, Inc. San Francisco, CA [email protected] Janet Wiener Facebook, Inc. Menlo Park, CA [email protected]",
"title": ""
},
{
"docid": "a95b95792bf27000b64a5ef6546806d6",
"text": "Overfitting is one of the most critical challenges in deep neural networks, and there are various types of regularization methods to improve generalization performance. Injecting noises to hidden units during training, e.g., dropout, is known as a successful regularizer, but it is still not clear enough why such training techniques work well in practice and how we can maximize their benefit in the presence of two conflicting objectives—optimizing to true data distribution and preventing overfitting by regularization. This paper addresses the above issues by 1) interpreting that the conventional training methods with regularization by noise injection optimize the lower bound of the true objective and 2) proposing a technique to achieve a tighter lower bound using multiple noise samples per training example in a stochastic gradient descent iteration. We demonstrate the effectiveness of our idea in several computer vision applications.",
"title": ""
},
{
"docid": "3741bbaf5cb1b5be943a14eca49554fa",
"text": "Code-mixing is a linguistic phenomenon where multiple languages are used in the same occurrence that is increasingly common in multilingual societies. Codemixed content on social media is also on the rise, prompting the need for tools to automatically understand such content. Automatic Parts-of-Speech (POS) tagging is an essential step in any Natural Language Processing (NLP) pipeline, but there is a lack of annotated data to train such models. In this work, we present a unique language tagged and POS-tagged dataset of code-mixed English-Hindi tweets related to five incidents in India that led to a lot of Twitter activity. Our dataset is unique in two dimensions: (i) it is larger than previous annotated datasets and (ii) it closely resembles typical real-world tweets. Additionally, we present a POS tagging model that is trained on this dataset to provide an example of how this dataset can be used. The model also shows the efficacy of our dataset in enabling the creation of codemixed social media POS taggers.",
"title": ""
},
{
"docid": "29a13944cf4f43ef484512d978396c1e",
"text": "The literature examining the relationship between cardiorespiratory fitness and the brain in older adults has increased rapidly, with 30 of 34 studies published since 2008. Here we review cross-sectional and exercise intervention studies in older adults examining the relationship between cardiorespiratory fitness and brain structure and function, typically assessed using Magnetic Resonance Imaging (MRI). Studies of patients with Alzheimer's disease are discussed when available. The structural MRI studies revealed a consistent positive relationship between cardiorespiratory fitness and brain volume in cortical regions including anterior cingulate, lateral prefrontal, and lateral parietal cortex. Support for a positive relationship between cardiorespiratory fitness and medial temporal lobe volume was less consistent, although evident when a region-of-interest approach was implemented. In fMRI studies, cardiorespiratory fitness in older adults was associated with activation in similar regions as those identified in the structural studies, including anterior cingulate, lateral prefrontal, and lateral parietal cortex, despite heterogeneity among the functional tasks implemented. This comprehensive review highlights the overlap in brain regions showing a positive relationship with cardiorespiratory fitness in both structural and functional imaging modalities. The findings suggest that aerobic exercise and cardiorespiratory fitness contribute to healthy brain aging, although additional studies in Alzheimer's disease are needed.",
"title": ""
},
{
"docid": "1d964bb1b82e6de71a6407967a8d9fa0",
"text": "Ensuring reliable access to clean and affordable water is one of the greatest global challenges of this century. As the world's population increases, water pollution becomes more complex and difficult to remove, and global climate change threatens to exacerbate water scarcity in many areas, the magnitude of this challenge is rapidly increasing. Wastewater reuse is becoming a common necessity, even as a source of potable water, but our separate wastewater collection and water supply systems are not designed to accommodate this pressing need. Furthermore, the aging centralized water and wastewater infrastructure in the developed world faces growing demands to produce higher quality water using less energy and with lower treatment costs. In addition, it is impractical to establish such massive systems in developing regions that currently lack water and wastewater infrastructure. These challenges underscore the need for technological innovation to transform the way we treat, distribute, use, and reuse water toward a distributed, differential water treatment and reuse paradigm (i.e., treat water and wastewater locally only to the required level dictated by the intended use). Nanotechnology offers opportunities to develop next-generation water supply systems. This Account reviews promising nanotechnology-enabled water treatment processes and provides a broad view on how they could transform our water supply and wastewater treatment systems. The extraordinary properties of nanomaterials, such as high surface area, photosensitivity, catalytic and antimicrobial activity, electrochemical, optical, and magnetic properties, and tunable pore size and surface chemistry, provide useful features for many applications. These applications include sensors for water quality monitoring, specialty adsorbents, solar disinfection/decontamination, and high performance membranes. More importantly, the modular, multifunctional and high-efficiency processes enabled by nanotechnology provide a promising route both to retrofit aging infrastructure and to develop high performance, low maintenance decentralized treatment systems including point-of-use devices. Broad implementation of nanotechnology in water treatment will require overcoming the relatively high costs of nanomaterials by enabling their reuse and mitigating risks to public and environmental health by minimizing potential exposure to nanoparticles and promoting their safer design. The development of nanotechnology must go hand in hand with environmental health and safety research to alleviate unintended consequences and contribute toward sustainable water management.",
"title": ""
},
{
"docid": "a117e006785ab63ef391d882a097593f",
"text": "An increasing interest in understanding human perception in social media has led to the study of the processes of personality self-presentation and impression formation based on user profiles and text blogs. However, despite the popularity of online video, we do not know of any attempt to study personality impressions that go beyond the use of text and still photos. In this paper, we analyze one facet of YouTube as a repository of brief behavioral slices in the form of personal conversational vlogs, which are a unique medium for selfpresentation and interpersonal perception. We investigate the use of nonverbal cues as descriptors of vloggers’ behavior and find significant associations between automatically extracted nonverbal cues for several personality judgments. As one notable result, audio and visual cues together can be used to predict 34% of the variance of the Extraversion trait of the Big Five model. In addition, we explore the associations between vloggers’ personality scores and the level of social attention that their videos received in YouTube. Our study is conducted on a dataset of 442 YouTube vlogs and 2,210 annotations collected using Amazon’s Mechanical Turk.",
"title": ""
},
{
"docid": "d20154a6b20e07bc3e13cd74731c1b39",
"text": "Stability in cluster analysis is strongly dependent on the data set, especially on how well separated and how homogeneous the clusters are. In the same clustering, some clusters may be very stable and others may be extremely unstable. The Jaccard coefficient, a similarity measure between sets, is used as a clusterwise measure of cluster stability, which is assessed by the bootstrap distribution of the Jaccard coefficient for every single cluster of a clustering compared to the most similar cluster in the bootstrapped data sets. This can be applied to very general cluster analysis methods. Some alternative resampling methods are investigated as well, namely subsetting, jittering the data points and replacing some data points by artificial noise points. The different methods are compared by means of a simulation study. A data example illustrates the use of the cluster-wise stability assessment to distinguish between meaningful stable and spurious clusters, but it is also shown that clusters are sometimes only stable because of the inflexibility of certain clustering methods.",
"title": ""
},
{
"docid": "6008de061d02515a46b7ba924e5d5741",
"text": "The purpose of this article is to introduce evidence-based concepts and demonstrate how to find valid evidence to answer clinical questions. Evidence-based decision making (EBDM) requires understanding new concepts and developing new skills including how to: ask good clinical questions, conduct a computerized search, critically appraise the evidence, apply the results in clinical practice, and evaluate the process. This approach recognizes that clinicians can never be completely current with all conditions, medications, materials, or available products. Thus EBDM provides a mechanism for addressing these gaps in knowledge in order to provide the best care possible. In Part 1, a case scenario demonstrates the application of the skills involved in structuring a clinical question and conducting an online search using PubMed. Practice tips are provided along with online resources related to the evidence-based process.",
"title": ""
}
] | scidocsrr |
4d7263c763fab9f348f4ebae3faa47fb | BackFi: High Throughput WiFi Backscatter | [
{
"docid": "e30cedcb4cb99c4c3b2743c5359cf823",
"text": "This paper presents a 116nW wake-up radio complete with crystal reference, interference compensation, and baseband processing, such that a selectable 31-bit code is required to toggle a wake-up signal. The front-end operates over a broad frequency range, tuned by an off-chip band-select filter and matching network, and is demonstrated in the 402-405MHz MICS band and the 915MHz and 2.4GHz ISM bands with sensitivities of -45.5dBm, -43.4dBm, and -43.2dBm, respectively. Additionally, the baseband processor implements automatic threshold feedback to detect the presence of interferers and dynamically adjust the receiver's sensitivity, mitigating the jamming problem inherent to previous energy-detection wake-up radios. The wake-up radio has a raw OOK chip-rate of 12.5kbps, an active area of 0.35mm2 and operates using a 1.2V supply for the crystal reference and RF demodulation, and a 0.5V supply for subthreshold baseband processing.",
"title": ""
}
] | [
{
"docid": "bf84e66bab43950f0d4d8c2d465b907e",
"text": "Paraphrases are sentences or phrases that convey the same meaning using different wording. Although the logical definition of paraphrases requires strict semantic equivalence, linguistics accepts a broader, approximate, equivalence—thereby allowing far more examples of “quasi-paraphrase.” But approximate equivalence is hard to define. Thus, the phenomenon of paraphrases, as understood in linguistics, is difficult to characterize. In this article, we list a set of 25 operations that generate quasi-paraphrases. We then empirically validate the scope and accuracy of this list by manually analyzing random samples of two publicly available paraphrase corpora. We provide the distribution of naturally occurring quasi-paraphrases in English text.",
"title": ""
},
{
"docid": "33789f718bc299fa63762f72595dcd77",
"text": "Resource allocation efficiency and energy consumption are among the top concerns to today's Cloud data center. Finding the optimal point where users' multiple job requests can be accomplished timely with minimum electricity and hardware cost is one of the key factors for system designers and managers to optimize the system configurations. Understanding the characteristics of the distribution of user task is an essential step for this purpose. At large-scale Cloud Computing data centers, a precise workload prediction will significantly help designers and operators to schedule hardware/software resources and power supplies in a more efficient manner, and make appropriate decisions to upgrade the Cloud system when the workload grows. While a lot of study has been conducted for hypervisor-based Cloud, container-based virtualization is becoming popular because of the low overhead and high efficiency in utilizing computing resources. In this paper, we have studied a set of real-world container data center traces from part of Google's cluster. We investigated the distribution of job duration, waiting time and machine utilization and the number of jobs submitted in a fix time period. Based on the quantitative study, an Ensemble Workload Prediction (EnWoP) method and a novel prediction evaluation parameter called Cloud Workload Correction Rate (C-Rate) have been proposed. The experimental results have verified that the EnWoP method achieved high prediction accuracy and the C-Rate evaluates the prediction methods more objective.",
"title": ""
},
{
"docid": "045a4622691d1ae85593abccb823b193",
"text": "The capability of Corynebacterium glutamicum for glucose-based synthesis of itaconate was explored, which can serve as building block for production of polymers, chemicals, and fuels. C. glutamicum was highly tolerant to itaconate and did not metabolize it. Expression of the Aspergillus terreus CAD1 gene encoding cis-aconitate decarboxylase (CAD) in strain ATCC13032 led to the production of 1.4mM itaconate in the stationary growth phase. Fusion of CAD with the Escherichia coli maltose-binding protein increased its activity and the itaconate titer more than two-fold. Nitrogen-limited growth conditions boosted CAD activity and itaconate titer about 10-fold to values of 1440 mU mg(-1) and 30 mM. Reduction of isocitrate dehydrogenase activity via exchange of the ATG start codon to GTG or TTG resulted in maximal itaconate titers of 60 mM (7.8 g l(-1)), a molar yield of 0.4 mol mol(-1), and a volumetric productivity of 2.1 mmol l(-1) h(-1).",
"title": ""
},
{
"docid": "8273a154d8e8b94873c4c94c4ff6ed14",
"text": "The ambitious goals set for 5G wireless networks, which are expected to be introduced around 2020, require dramatic changes in the design of different layers for next generation communications systems. Massive MIMO systems, filter bank multi-carrier modulation, relaying technologies, and millimeter-wave communications have been considered as some of the strong candidates for the physical layer design of 5G networks. In this article, we shed light on the potential and implementation of IM techniques for MIMO and multi-carrier communications systems, which are expected to be two of the key technologies for 5G systems. Specifically, we focus on two promising applications of IM: spatial modulation and orthogonal frequency-division multiplexing with IM, and discuss the recent advances and future research directions in IM technologies toward spectrum- and energy-efficient 5G wireless networks.",
"title": ""
},
{
"docid": "3adf8510887ff9e5c7a270e16dcdec9a",
"text": "This paper analyzes the Sampled Value (SV) Process Bus concept that was recently introduced by the IEC 61850-9-2 standard. This standard proposes that the Current and Voltage Transformer (CT, PT) outputs that are presently hard wired to various devices (relays, meters, IED, and SCADA) be digitized at the source and then communicated to those devices using an Ethernet-Based Local Area Network (LAN). The approach is especially interesting for modern optical CT/PT devices that possess high quality information about the primary voltage/current waveforms, but are often forced to degrade output signal accuracy in order to meet traditional analog interface requirements (5 A/120 V). While very promising, the SV-based process bus brings along a distinct set of issues regarding the overall reliability of the new Ethernet communications-based protection and control system. This paper looks at the Merging Unit Concept, analyzes the protection system reliability in the process bus environment, and proposes an alternate approach that can be used to successfully deploy this technology. Multiple scenarios used with the associated equipment configurations are compared. Additional issues that need to be addressed by various standards bodies and interoperability challenges posed by the SV process bus LAN on real-time monitoring and control applications (substation HMI, SCADA, engineering access) are also identified.",
"title": ""
},
{
"docid": "04065494023ed79211af3ba0b5bc4c7e",
"text": "The glucagon-like peptides include glucagon, GLP-1, and GLP-2, and exert diverse actions on nutrient intake, gastrointestinal motility, islet hormone secretion, cell proliferation and apoptosis, nutrient absorption, and nutrient assimilation. GIP, a related member of the glucagon peptide superfamily, also regulates nutrient disposal via stimulation of insulin secretion. The actions of these peptides are mediated by distinct members of the glucagon receptor superfamily of G protein-coupled receptors. These receptors exhibit unique patterns of tissue-specific expression, exhibit considerable amino acid sequence identity, and share similar structural and functional properties with respect to ligand binding and signal transduction. This article provides an overview of the biology of these receptors with an emphasis on understanding the unique actions of glucagon-related peptides through studies of the biology of their cognate receptors.",
"title": ""
},
{
"docid": "09fa74b0a83e040beb5612e6eeb4089c",
"text": "Mapping word embeddings of different languages into a single space has multiple applications. In order to map from a source space into a target space, a common approach is to learn a linear mapping that minimizes the distances between equivalences listed in a bilingual dictionary. In this paper, we propose a framework that generalizes previous work, provides an efficient exact method to learn the optimal linear transformation and yields the best bilingual results in translation induction while preserving monolingual performance in an analogy task.",
"title": ""
},
{
"docid": "6325188ee21b6baf65dbce6855c19bc2",
"text": "A knowledgeable observer of a game of football (soccer) can make a subjective evaluation of the quality of passes made between players during the game, such as rating them as Good, OK, or Bad. In this article, we consider the problem of producing an automated system to make the same evaluation of passes and present a model to solve this problem.\n Recently, many professional football leagues have installed object tracking systems in their stadiums that generate high-resolution and high-frequency spatiotemporal trajectories of the players and the ball. Beginning with the thesis that much of the information required to make the pass ratings is available in the trajectory signal, we further postulated that using complex data structures derived from computational geometry would enable domain football knowledge to be included in the model by computing metric variables in a principled and efficient manner. We designed a model that computes a vector of predictor variables for each pass made and uses machine learning techniques to determine a classification function that can accurately rate passes based only on the predictor variable vector.\n Experimental results show that the learned classification functions can rate passes with 90.2% accuracy. The agreement between the classifier ratings and the ratings made by a human observer is comparable to the agreement between the ratings made by human observers, and suggests that significantly higher accuracy is unlikely to be achieved. Furthermore, we show that the predictor variables computed using methods from computational geometry are among the most important to the learned classifiers.",
"title": ""
},
{
"docid": "cc5815edf96596a1540fa1fca53da0d3",
"text": "INTRODUCTION\nSevere motion sickness is easily identifiable with sufferers showing obvious behavioral signs, including emesis (vomiting). Mild motion sickness and sopite syndrome lack such clear and objective behavioral markers. We postulate that yawning may have the potential to be used in operational settings as such a marker. This study assesses the utility of yawning as a behavioral marker for the identification of soporific effects by investigating the association between yawning and mild motion sickness/sopite syndrome in a controlled environment.\n\n\nMETHODS\nUsing a randomized motion-counterbalanced design, we collected yawning and motion sickness data from 39 healthy individuals (34 men and 5 women, ages 27-59 yr) in static and motion conditions. Each individual participated in two 1-h sessions. Each session consisted of six 10-min blocks. Subjects performed a multitasking battery on a head mounted display while seated on the moving platform. The occurrence and severity of symptoms were assessed with the Motion Sickness Assessment Questionnaire (MSAQ).\n\n\nRESULTS\nYawning occurred predominantly in the motion condition. All yawners in motion (N = 5) were symptomatic. Compared to nonyawners (MSAQ indices: Total = 14.0, Sopite = 15.0), subjects who yawned in motion demonstrated increased severity of motion sickness and soporific symptoms (MSAQ indices: Total = 17.2, Sopite = 22.4), and reduced multitasking cognitive performance (Composite score: nonyawners = 1348; yawners = 1145).\n\n\nDISCUSSION\nThese results provide evidence that yawning may be a viable behavioral marker to recognize the onset of soporific effects and their concomitant reduction in cognitive performance.",
"title": ""
},
{
"docid": "93076fee7472e1a89b2b3eb93cff4737",
"text": "This paper presents a fast and robust level set method for image segmentation. To enhance the robustness against noise, we embed a Markov random field (MRF) energy function to the conventional level set energy function. This MRF energy function builds the correlation of a pixel with its neighbors and encourages them to fall into the same region. To obtain a fast implementation of the MRF embedded level set model, we explore algebraic multigrid (AMG) and sparse field method (SFM) to increase the time step and decrease the computation domain, respectively. Both AMG and SFM can be conducted in a parallel fashion, which facilitates the processing of our method for big image databases. By comparing the proposed fast and robust level set method with the standard level set method and its popular variants on noisy synthetic images, synthetic aperture radar (SAR) images, medical images, and natural images, we comprehensively demonstrate the new method is robust against various kinds of noises. In particular, the new level set method can segment an image of size 500 × 500 within 3 s on MATLAB R2010b installed in a computer with 3.30-GHz CPU and 4-GB memory.",
"title": ""
},
{
"docid": "b44b177f50402015e343e78afe4d7523",
"text": "A design of a novel wireless implantable blood pressure sensing microsystem for advanced biological research is presented. The system employs a miniature instrumented elastic cuff, wrapped around a blood vessel, for small laboratory animal real-time blood pressure monitoring. The elastic cuff is made of biocompatible soft silicone material by a molding process and is filled by insulating silicone oil with an immersed MEMS capacitive pressure sensor interfaced with low-power integrated electronic system. This technique avoids vessel penetration and substantially minimizes vessel restriction due to the soft cuff elasticity, and is thus attractive for long-term implant. The MEMS pressure sensor detects the coupled blood pressure waveform caused by the vessel expansion and contraction, followed by amplification, 11-bit digitization, and wireless FSK data transmission to an external receiver. The integrated electronics are designed with capability of receiving RF power from an external power source and converting the RF signal to a stable 2 V DC supply in an adaptive manner to power the overall implant system, thus enabling the realization of stand-alone batteryless implant microsystem. The electronics are fabricated in a 1.5 μm CMOS process and occupy an area of 2 mm × 2 mm. The prototype monitoring cuff is wrapped around the right carotid artery of a laboratory rat to measure real-time blood pressure waveform. The measured in vivo blood waveform is compared with a reference waveform recorded simultaneously using a commercial catheter-tip transducer inserted into the left carotid artery. The two measured waveforms are closely matched with a constant scaling factor. The ASIC is interfaced with a 5-mm-diameter RF powering coil with four miniature surface-mounted components (one inductor and three capacitors) over a thin flexible substrate by bond wires, followed by silicone coating and packaging with the prototype blood pressure monitoring cuff. The overall system achieves a measured average sensitivity of 7 LSB/mmHg, a nonlinearity less than 2.5% of full scale, and a hysteresis less than 1% of full scale. From noise characterization, a blood vessel pressure change sensing resolution of 1 mmHg can be expected. The system weighs 330 mg, representing an order of magnitude mass reduction compared with state-of-the-art commercial technology.",
"title": ""
},
{
"docid": "eba25ae59603328f3ef84c0994d46472",
"text": "We address the problem of how to personalize educational content to students in order to maximize their learning gains over time. We present a new computational approach to this problem called MAPLE (Multi-Armed Bandits based Personalization for Learning Environments) that combines difficulty ranking with multi-armed bandits. Given a set of target questions MAPLE estimates the expected learning gains for each question and uses an exploration-exploitation strategy to choose the next question to pose to the student. It maintains a personalized ranking over the difficulties of question in the target set and updates it in real-time according to students’ progress. We show in simulations that MAPLE was able to improve students’ learning gains compared to approaches that sequence questions in increasing level of difficulty, or rely on content experts. When implemented in a live e-learning system in the wild, MAPLE showed promising initial results.",
"title": ""
},
{
"docid": "63198927563faa609e6520a01a56b20c",
"text": "A 1.2 V 4 Gb DDR4 SDRAM is presented in a 30 nm CMOS technology. DDR4 SDRAM is developed to raise memory bandwidth with lower power consumption compared with DDR3 SDRAM. Various functions and circuit techniques are newly adopted to reduce power consumption and secure stable transaction. First, dual error detection scheme is proposed to guarantee the reliability of signals. It is composed of cyclic redundancy check (CRC) for DQ channel and command-address (CA) parity for command and address channel. For stable reception of high speed signals, a gain enhanced buffer and PVT tolerant data fetch scheme are adopted for CA and DQ respectively. To reduce the output jitter, the type of delay line is selected depending on data rate at initial stage. As a result, test measurement shows 3.3 Gb/s DDR operation at 1.14 V.",
"title": ""
},
{
"docid": "a01302cad4754ecf162d485e00c72e38",
"text": "The problem of creating fair ship design curves is of major importance in Computer Aided Ship Design environment. The fairness of these curves is generally considered a subjective notion depending on the judgement of the designer (eg., visually pleasing, minimum variation of curvature, devoid of unnecessary bumps or wiggles, satisfying certain continuity requirements). Thus an automated fairing process based on objective criteria is clearly desirable. This paper presents an automated fairing algorithm for ship curves to satisfy objective geometric constraints. This procedure is based on the use of optimisation tools and cubic B-spline functions. The aim is to produce curves with a more gradual variation of curvature without deteriorating initial shapes. The optimisation based fairing procedure is applied to a variety of plane ship sections to demonstrate the capability and flexibility of the methodology. The resulting curves, with their corresponding curvature plots indicate that, provided that the designer can specify his objectives and constraints clearly, the procedure will generate fair ship definition curves within the constrained design space.",
"title": ""
},
{
"docid": "567d165eb9ad5f9860f3e0602cbe3e03",
"text": "This paper presents new image sensors with multi- bucket pixels that enable time-multiplexed exposure, an alter- native imaging approach. This approach deals nicely with scene motion, and greatly improves high dynamic range imaging, structured light illumination, motion corrected photography, etc. To implement an in-pixel memory or a bucket, the new image sensors incorporate the virtual phase CCD concept into a standard 4-transistor CMOS imager pixel. This design allows us to create a multi-bucket pixel which is compact, scalable, and supports true correlated double sampling to cancel kTC noise. Two image sensors with dual and quad-bucket pixels have been designed and fabricated. The dual-bucket sensor consists of a 640H × 576V array of 5.0 μm pixel in 0.11 μm CMOS technology while the quad-bucket sensor comprises 640H × 512V array of 5.6 μm pixel in 0.13 μm CMOS technology. Some computational photography applications were implemented using the two sensors to demonstrate their values in eliminating artifacts that currently plague computational photography.",
"title": ""
},
{
"docid": "57e50c15b3107a473f5fb74472b74fcc",
"text": "PURPOSE\nThe purpose of this article is to provide an overview of our previous work on roll-over shapes, which are the effective rocker shapes that the lower limb systems conform to during walking.\n\n\nMETHOD\nThis article is a summary of several recently published articles from the Northwestern University Prosthetics Research Laboratory and Rehabilitation Engineering Research Program on the topic of roll-over shapes. The roll-over shape is a measurement of centre of pressure of the ground reaction force in body-based coordinates. This measurement is interpreted as the effective rocker shape created by lower limb systems during walking.\n\n\nRESULTS\nOur studies have shown that roll-over shapes in able-bodied subjects do not change appreciably for conditions of level ground walking, including walking at different speeds, while carrying different amounts of weight, while wearing shoes of different heel heights, or when wearing shoes with different rocker radii. In fact, results suggest that able-bodied humans will actively change their ankle movements to maintain the same roll-over shapes.\n\n\nCONCLUSIONS\nThe consistency of the roll-over shapes to level surface walking conditions has provided insight for design, alignment and evaluation of lower limb prostheses and orthoses. Changes to ankle-foot and knee-ankle-foot roll-over shapes for ramp walking conditions have suggested biomimetic (i.e. mimicking biology) strategies for adaptable ankle-foot prostheses and orthoses.",
"title": ""
},
{
"docid": "60fb532b3d22b5f598a0aebabc616de4",
"text": "Introduction Vision is the primary sensory modality for humans—and most other mammals—by which they perceive the world. In humans, vision-related areas occupy about 30% of the neocortex. Light rays are projected upon the retina, and the brain tries to make sense of the world by means of interpreting the visual input pattern. The sensitivity and specificity with which the brain solves this computationally complex problem cannot yet be replicated on a computer. The most imposing of these problems is that of invariant visual pattern recognition. Recently it has been said that the prediction of future sensory input from salient features of current input is the keystone of intelligence. The neocortex is the structure in the brain which is assumed to be responsible for the evolution of intelligence. Current sensory input patterns activate stored traces of previous inputs which then generate top-down expectations, which are verified against the bottom-up input signals. If the verification succeeds, the predicted pattern is recognised. This theory explains how humans, and mammals in general, can recognise images despite changes in location, size and lighting conditions, and in the presence of deformations and large amounts of noise. Parts of this theory, known as the memory-prediction theory (MPT), are modelled in the Hierarchical Temporal Memory or HTM technology developed by a company called Numenta; the model is an attempt to replicate the structural and algorithmic properties of the neocortex. Spatial and temporal relations between features of the sensory signals are formed in an hierarchical memory architecture during a learning process. When a new pattern arrives, the recognition process can be viewed as choosing the stored representation that best predicts the pattern. 
Hierarchical Temporal Memory has been successfully applied to the recognition of relatively simple images, showing invariance across several transformations and robustness with respect to noisy patterns. We have applied the concept of HTM, as implemented by Numenta, to land-use recognition, by building and testing a system to learn to recognise five different types of land use. Overview of the HTM learning algorithm Hierarchical Temporal Memory can be considered a form of a Bayesian network, where the network consists of a collection of nodes arranged in a tree-shaped hierarchy. Each node in the hierarchy self-discovers a set of causes in its input, through a process of finding common spatial patterns and then detecting common temporal patterns. Unlike many Bayesian networks, HTMs are self-training, have a well-defined parent/child relationship between each node, inherently handle time-varying data and afford mechanisms for covert attention. Sensory data are presented at the bottom of the hierarchy. To train an HTM, it is necessary to present continuous, time-varying, sensory inputs while the causes underlying the same sensory data persist in the environment. In other words, you either move the senses of the HTM through the world, or the objects in the world move relative to the HTM’s senses. Time is the fundamental component of an HTM, and can be thought of as a learning supervisor. Hierarchical Temporal Memory networks are made of nodes; each node receives as input a temporal sequence of patterns. The goal of each node is to group input patterns that are likely to have the same cause, thereby forming invariant representations of extrinsic causes. An HTM node uses two grouping mechanisms to form invariants (Fig. 1). The first mechanism is called spatial pooling, in which raw data are received by the sensor; spatial poolers of higher nodes receive the outputs from their child nodes. 
The input of the spatial pooler in higher layers is the fixed-order concatenation of the output of its children. This input is represented by row vectors, and the role of the spatial pooler is to build a matrix (the coincidence matrix) from input vectors that occur frequently. There are multiple spatial pooler algorithms, e.g. Gaussian and Product. The Gaussian spatial pooler algorithm is used for nodes at the input layer, whereas the nodes higher up the hierarchy use the Product spatial pooler. The Gaussian spatial pooler algorithm compares the raw input vectors with the existing coincidences in the coincidence matrix. If the Euclidean distance between an input vector and an existing coincidence is small enough, the input is considered to be the same coincidence, and the count for that coincidence is incremented and stored in memory.",
"title": ""
},
{
"docid": "b3f2c1736174eda75f7eedb3cee2a729",
"text": "Stochastic local search (SLS) algorithms are well known for their ability to efficiently find models of random instances of the Boolean satisfiability (SAT) problem. One of the most famous SLS algorithms for SAT is WalkSAT, a seminal algorithm that has wide influence and performs very well on random 3-SAT instances. However, the performance of WalkSAT on random k-SAT instances with k > 3 lags far behind. Indeed, there are limited works on improving SLS algorithms for such instances. This work takes a good step toward this direction. We propose a novel concept namely multilevel make. Based on this concept, we design a scoring function called linear make, which is utilized to break ties in WalkSAT, leading to a new algorithm called WalkSATlm. Our experimental results show that WalkSATlm improves WalkSAT by orders of magnitude on random k-SAT instances with k > 3 near the phase transition. Additionally, we propose an efficient implementation for WalkSATlm, which leads to a speedup of 100%. We also give some insights on different forms of linear make functions, and show the limitation of the linear make function on random 3-SAT through theoretical analysis.",
"title": ""
},
{
"docid": "19a538b6a49be54b153b0a41b6226d1f",
"text": "This paper presents a robot aimed to assist the shoulder movements of stroke patients during their rehabilitation process. This robot has the general form of an exoskeleton, but is characterized by an action principle on the patient no longer requiring a tedious and accurate alignment of the robot and patient's joints. It is constituted of a poly-articulated structure whose actuation is deported and transmission is ensured by Bowden cables. It manages two of the three rotational degrees of freedom (DOFs) of the shoulder. Quite light and compact, its proximal end can be rigidly fixed to the patient's back on a rucksack structure. As for its distal end, it is connected to the arm through passive joints and a splint guaranteeing the robot action principle, i.e. exert a force perpendicular to the patient's arm, whatever its configuration. This paper also presents a first prototype of this robot and some experimental results such as the arm angular excursions reached with the robot in the three joint planes.",
"title": ""
},
{
"docid": "b36e893afa63ed246c7bf18139eb147e",
"text": "It is widely believed that employee participation may affect employees’ job satisfaction, productivity, and commitment, all of which can create comparative advantage for the organization. The main intention of this study was to examine the relationships among employee participation, job satisfaction, employee productivity and employee commitment. For this purpose, 34 organizations from the Oil & Gas, Banking and Telecommunication sectors were contacted, of which 15 responded. The findings of this study are that employee participation is an important determinant of job satisfaction components. Increasing employee participation will have a positive effect on employees’ job satisfaction, commitment and productivity. Naturally, increasing employee participation is a long-term process, which demands both attention from the management side and initiative from the employee side.",
"title": ""
}
] | scidocsrr |
1ef17b08bba3731e8b0724c26e87707e | A Fine-Grained Performance Model of Cloud Computing Centers | [
{
"docid": "807cd6adc45a2adb7943c5a0fb5baa94",
"text": "Reliable performance evaluations require the use of representative workloads. This is no easy task because modern computer systems and their workloads are complex, with many interrelated attributes and complicated structures. Experts often use sophisticated mathematics to analyze and describe workload models, making these models difficult for practitioners to grasp. This book aims to close this gap by emphasizing the intuition and the reasoning behind the definitions and derivations related to the workload models. It provides numerous examples from real production systems, with hundreds of graphs. Using this book, readers will be able to analyze collected workload data and clean it if necessary, derive statistical models that include skewed marginal distributions and correlations, and consider the need for generative models and feedback from the system. The descriptive statistics techniques covered are also useful for other domains.",
"title": ""
}
] | [
{
"docid": "7ca908e7896afc49a0641218e1c4febf",
"text": "Timely and accurate classification and interpretation of high-resolution images are very important for urban planning and disaster rescue. However, as spatial resolution gets finer, it is increasingly difficult to recognize complex patterns in high-resolution remote sensing images. Deep learning offers an efficient strategy to fill the gap between complex image patterns and their semantic labels. However, due to the hierarchical abstract nature of deep learning methods, it is difficult to capture the precise outline of different objects at the pixel level. To further reduce this problem, we propose an object-based deep learning method to accurately classify the high-resolution imagery without intensive human involvement. In this study, high-resolution images were used to accurately classify three different urban scenes: Beijing (China), Pavia (Italy), and Vaihingen (Germany). The proposed method is built on a combination of a deep feature learning strategy and an object-based classification for the interpretation of high-resolution images. Specifically, high-level feature representations extracted through the convolutional neural networks framework have been systematically investigated over five different layer configurations. Furthermore, to improve the classification accuracy, an object-based classification method also has been integrated with the deep learning strategy for more efficient image classification. Experimental results indicate that with the combination of deep learning and object-based classification, it is possible to discriminate different building types in Beijing Scene, such as commercial buildings and residential buildings with classification accuracies above 90%.",
"title": ""
},
{
"docid": "8abd03202f496de4bec6270946d53a9c",
"text": "In this paper, we use time-series modeling to forecast taxi travel demand, in the context of a mobile application-based taxi hailing service. In particular, we model the passenger demand density at various locations in the city of Bengaluru, India. Using the data, we first shortlist time-series models that suit our application. We then analyse the performance of these models by using Mean Absolute Percentage Error (MAPE) as the performance metric. In order to improve the model performance, we employ a multi-level clustering technique where we aggregate demand over neighboring cells/geohashes. We observe that the improved model based on clustering leads to a forecast accuracy of 80% per km2. In addition, our technique obtains an accuracy of 89% per km2 for the most frequently occurring use case.",
"title": ""
},
{
"docid": "6d8e78d8c48aab17aef0b9e608f13b99",
"text": "Optimal real-time distributed V2G and G2V management of electric vehicles Sonja Stüdli, Emanuele Crisostomi, Richard Middleton & Robert Shorten a Centre for Complex Dynamic Systems and Control, The University of Newcastle, New South Wales, Australia b Department of Energy, Systems, Territory and Constructions, University of Pisa, Pisa, Italy c IBM Research, Dublin, Ireland Accepted author version posted online: 10 Dec 2013. Published online: 05 Feb 2014.",
"title": ""
},
{
"docid": "383e569dcd1f0c648ad2274588f76961",
"text": "BACKGROUND\nOutcomes are poor for patients with previously treated, advanced or metastatic non-small-cell lung cancer (NSCLC). The anti-programmed death ligand 1 (PD-L1) antibody atezolizumab is clinically active against cancer, including NSCLC, especially cancers expressing PD-L1 on tumour cells, tumour-infiltrating immune cells, or both. We assessed efficacy and safety of atezolizumab versus docetaxel in previously treated NSCLC, analysed by PD-L1 expression levels on tumour cells and tumour-infiltrating immune cells and in the intention-to-treat population.\n\n\nMETHODS\nIn this open-label, phase 2 randomised controlled trial, patients with NSCLC who progressed on post-platinum chemotherapy were recruited in 61 academic medical centres and community oncology practices across 13 countries in Europe and North America. Key inclusion criteria were Eastern Cooperative Oncology Group performance status 0 or 1, measurable disease by Response Evaluation Criteria In Solid Tumors version 1.1 (RECIST v1.1), and adequate haematological and end-organ function. Patients were stratified by PD-L1 tumour-infiltrating immune cell status, histology, and previous lines of therapy, and randomly assigned (1:1) by permuted block randomisation (with a block size of four) using an interactive voice or web system to receive intravenous atezolizumab 1200 mg or docetaxel 75 mg/m(2) once every 3 weeks. Baseline PD-L1 expression was scored by immunohistochemistry in tumour cells (as percentage of PD-L1-expressing tumour cells TC3≥50%, TC2≥5% and <50%, TC1≥1% and <5%, and TC0<1%) and tumour-infiltrating immune cells (as percentage of tumour area: IC3≥10%, IC2≥5% and <10%, IC1≥1% and <5%, and IC0<1%). The primary endpoint was overall survival in the intention-to-treat population and PD-L1 subgroups at 173 deaths. Biomarkers were assessed in an exploratory analysis. We assessed safety in all patients who received at least one dose of study drug. 
This study is registered with ClinicalTrials.gov, number NCT01903993.\n\n\nFINDINGS\nPatients were enrolled between Aug 5, 2013, and March 31, 2014. 144 patients were randomly allocated to the atezolizumab group, and 143 to the docetaxel group. 142 patients received at least one dose of atezolizumab and 135 received docetaxel. Overall survival in the intention-to-treat population was 12·6 months (95% CI 9·7-16·4) for atezolizumab versus 9·7 months (8·6-12·0) for docetaxel (hazard ratio [HR] 0·73 [95% CI 0·53-0·99]; p=0·04). Increasing improvement in overall survival was associated with increasing PD-L1 expression (TC3 or IC3 HR 0·49 [0·22-1·07; p=0·068], TC2/3 or IC2/3 HR 0·54 [0·33-0·89; p=0·014], TC1/2/3 or IC1/2/3 HR 0·59 [0·40-0·85; p=0·005], TC0 and IC0 HR 1·04 [0·62-1·75; p=0·871]). In our exploratory analysis, patients with pre-existing immunity, defined by high T-effector-interferon-γ-associated gene expression, had improved overall survival with atezolizumab. 11 (8%) patients in the atezolizumab group discontinued because of adverse events versus 30 (22%) patients in the docetaxel group. 16 (11%) patients in the atezolizumab group versus 52 (39%) patients in the docetaxel group had treatment-related grade 3-4 adverse events, and one (<1%) patient in the atezolizumab group versus three (2%) patients in the docetaxel group died from a treatment-related adverse event.\n\n\nINTERPRETATION\nAtezolizumab significantly improved survival compared with docetaxel in patients with previously treated NSCLC. Improvement correlated with PD-L1 immunohistochemistry expression on tumour cells and tumour-infiltrating immune cells, suggesting that PD-L1 expression is predictive for atezolizumab benefit. Atezolizumab was well tolerated, with a safety profile distinct from chemotherapy.\n\n\nFUNDING\nF Hoffmann-La Roche/Genentech Inc.",
"title": ""
},
{
"docid": "e632dfe8a37846339ceb44ae4f406a1a",
"text": "Search engines are increasingly relying on large knowledge bases of facts to provide direct answers to users’ queries. However, the construction of these knowledge bases is largely manual and does not scale to the long and heavy tail of facts. Open information extraction tries to address this challenge, but typically assumes that facts are expressed with verb phrases, and therefore has had difficulty extracting facts for noun-based relations. We describe ReNoun, an open information extraction system that complements previous efforts by focusing on nominal attributes and on the long tail. ReNoun’s approach is based on leveraging a large ontology of noun attributes mined from a text corpus and from user queries. ReNoun creates a seed set of training data by using specialized patterns and requiring that the facts mention an attribute in the ontology. ReNoun then generalizes from this seed set to produce a much larger set of extractions that are then scored. We describe experiments that show that we extract facts with high precision and for attributes that cannot be extracted with verb-based techniques.",
"title": ""
},
{
"docid": "1ae3eb81ae75f6abfad4963ee0056be5",
"text": "Due to the shared responsibility model of clouds, tenants have to manage the security of their workloads and data. Developing security solutions using VMs or containers creates further problems as these resources also need to be secured. In this paper, we advocate for taking a serverless approach by proposing six serverless design patterns to build security services in the cloud. For each design pattern, we describe the key advantages and present applications and services utilizing the pattern. Using the proposed patterns as building blocks, we introduce a threat-intelligence platform that collects logs from various sources, alerts malicious activities, and takes actions against such behaviors. We also discuss the limitations of serverless design and how future implementations can overcome those limitations.",
"title": ""
},
{
"docid": "0e9c280e39dbad16cf7bbf961ed4bdb1",
"text": "This paper reviews the state-of-the-art research on multi-robot systems, with a focus on multi-robot cooperation and coordination. By primarily classifying multi-robot systems into active and passive cooperative systems, three main research topics of multi-robot systems are focused on: task allocation, multi-sensor fusion and localization. In addition, formation control and coordination methods for multi-robots are reviewed.",
"title": ""
},
{
"docid": "26c58183e71f916f37d67f1cf848f021",
"text": "With the increasing popularity of herbomineral preparations in healthcare, a new proprietary herbomineral formulation was formulated with ashwagandha root extract and three minerals viz. zinc, magnesium, and selenium. The aim of the study was to evaluate the immunomodulatory potential of Biofield Energy Healing (The Trivedi Effect ® ) on the herbomineral formulation using murine splenocyte cells. The test formulation was divided into two parts. One was the control without the Biofield Energy Treatment. The other part was labelled the Biofield Energy Treated sample, which received the Biofield Energy Healing Treatment remotely by twenty renowned Biofield Energy Healers. Through MTT assay, all the test formulation concentrations from 0.00001053 to 10.53 μg/mL were found to be safe with cell viability ranging from 102.61% to 194.57% using splenocyte cells. The Biofield Treated test formulation showed a significant (p≤0.01) inhibition of TNF-α expression by 15.87%, 20.64%, 18.65%, and 20.34% at 0.00001053, 0.0001053, 0.01053, and 0.1053, μg/mL, respectively as compared to the vehicle control (VC) group. The level of TNF-α was reduced by 8.73%, 19.54%, and 14.19% at 0.001053, 0.01053, and 0.1053 μg/mL, respectively in the Biofield Treated test formulation compared to the untreated test formulation. The expression of IL-1β reduced by 22.08%, 23.69%, 23.00%, 16.33%, 25.76%, 16.10%, and 23.69% at 0.00001053, 0.0001053, 0.001053, 0.01053, 0.1053, 1.053 and 10.53 μg/mL, respectively compared to the VC. Additionally, the expression of MIP-1α significantly (p≤0.001) reduced by 13.35%, 22.96%, 25.11%, 22.71%, and 21.83% at 0.00001053, 0.0001053, 0.01053, 1.053, and 10.53 μg/mL, respectively in the Biofield Treated test formulation compared to the VC. 
The Biofield Treated test formulation significantly down-regulated the MIP-1α expression by 10.75%, 9.53%, 9.57%, and 10.87% at 0.00001053, 0.01053, 0.1053 and 1.053 μg/mL, respectively compared to the untreated test formulation. The results showed the IFN-γ expression was also significantly (p≤0.001) reduced by 39.16%, 40.34%, 27.57%, 26.06%, 42.53%, and 48.91% at 0.0001053, 0.001053, 0.01053, 0.1053, 1.053, and 10.53 μg/mL, respectively in the Biofield Treated test formulation compared to the VC. The Biofield Treated test formulation showed better suppression of IFN-γ expression by 15.46%, 13.78%, 17.14%, and 13.11% at concentrations 0.001053, 0.01053, 0.1053, and 10.53 μg/mL, respectively compared to the untreated test formulation. Overall, the results demonstrated that The Trivedi Effect ® Biofield Energy Healing (TEBEH) has the capacity to potentiate the immunomodulatory and anti-inflammatory activity of the test formulation. Biofield Energy may also be useful in organ transplants, anti-aging, and stress management by improving overall health and quality of life.",
"title": ""
},
{
"docid": "1d3b0669eda182f312a0a77d4bccf373",
"text": "CONTEXT\nMedical issues are widely reported in the mass media. These reports influence the general public, policy makers and health-care professionals. This information should be valid, but is often criticized for being speculative, inaccurate and misleading. An understanding of the obstacles medical reporters meet in their work can guide strategies for improving the informative value of medical journalism.\n\n\nOBJECTIVE\nTo investigate constraints on improving the informative value of medical reports in the mass media and elucidate possible strategies for addressing these.\n\n\nDESIGN\nWe reviewed the literature and organized focus groups, a survey of medical journalists in 37 countries, and semi-structured telephone interviews.\n\n\nRESULTS\nWe identified nine barriers to improving the informative value of medical journalism: lack of time, space and knowledge; competition for space and audience; difficulties with terminology; problems finding and using sources; problems with editors and commercialism. Lack of time, space and knowledge were the most common obstacles. The importance of different obstacles varied with the type of media and experience. Many health reporters feel that it is difficult to find independent experts willing to assist journalists, and also think that editors need more education in critical appraisal of medical news. Almost all of the respondents agreed that the informative value of their reporting is important. Nearly everyone wanted access to short, reliable and up-to-date background information on various topics available on the Internet. A majority (79%) was interested in participating in a trial to evaluate strategies to overcome identified constraints.\n\n\nCONCLUSIONS\nMedical journalists agree that the validity of medical reporting in the mass media is important. A majority acknowledge many constraints. 
Mutual efforts of health-care professionals and journalists employing a variety of strategies will be needed to address these constraints.",
"title": ""
},
{
"docid": "c1978e4936ed5bda4e51863dea7e93ee",
"text": "In needle-based medical procedures, beveled-tip flexible needles are steered inside soft tissue with the aim of reaching pre-defined target locations. The efficiency of needle-based interventions depends on accurate control of the needle tip. This paper presents a comprehensive mechanics-based model for simulation of planar needle insertion in soft tissue. The proposed model for needle deflection is based on beam theory, works in real-time, and accepts the insertion velocity as an input that can later be used as a control command for needle steering. The model takes into account the effects of tissue deformation, needle-tissue friction, tissue cutting force, and needle bevel angle on needle deflection. Using a robot that inserts a flexible needle into a phantom tissue, various experiments are conducted to separately identify different subsets of the model parameters. The validity of the proposed model is verified by comparing the simulation results to the empirical data. The results demonstrate the accuracy of the proposed model in predicting the needle tip deflection for different insertion velocities.",
"title": ""
},
{
"docid": "99efebd647fa083fab4e0f091b0b471b",
"text": "This paper proposes a novel method to detect fire and/or flames in real-time by processing the video data generated by an ordinary camera monitoring a scene. In addition to ordinary motion and color clues, flame and fire flicker is detected by analyzing the video in the wavelet domain. Quasi-periodic behavior in flame boundaries is detected by performing temporal wavelet transform. Color variations in flame regions are detected by computing the spatial wavelet transform of moving fire-colored regions. Another clue used in the fire detection algorithm is the irregularity of the boundary of the fire-colored region. All of the above clues are combined to reach a final decision. Experimental results show that the proposed method is very successful in detecting fire and/or flames. In addition, it drastically reduces the false alarms issued to ordinary fire-colored moving objects as compared to the methods using only motion and color clues.",
"title": ""
},
{
"docid": "f90fcd27a0ac4a22dc5f229f826d64bf",
"text": "While deep reinforcement learning (deep RL) agents are effective at maximizing rewards, it is often unclear what strategies they use to do so. In this paper, we take a step toward explaining deep RL agents through a case study using Atari 2600 environments. In particular, we focus on using saliency maps to understand how an agent learns and executes a policy. We introduce a method for generating useful saliency maps and use it to show 1) what strong agents attend to, 2) whether agents are making decisions for the right or wrong reasons, and 3) how agents evolve during learning. We also test our method on non-expert human subjects and find that it improves their ability to reason about these agents. Overall, our results show that saliency information can provide significant insight into an RL agent’s decisions and learning behavior.",
"title": ""
},
{
"docid": "b12f1b1ff7618c1f54462c18c768dae8",
"text": "Retrieval is the key process for understanding learning and for promoting learning, yet retrieval is not often granted the central role it deserves. Learning is typically identified with the encoding or construction of knowledge, and retrieval is considered merely the assessment of learning that occurred in a prior experience. The retrieval-based learning perspective outlined here is grounded in the fact that all expressions of knowledge involve retrieval and depend on the retrieval cues available in a given context. Further, every time a person retrieves knowledge, that knowledge is changed, because retrieving knowledge improves one’s ability to retrieve it again in the future. Practicing retrieval does not merely produce rote, transient learning; it produces meaningful, long-term learning. Yet retrieval practice is a tool many students lack metacognitive awareness of and do not use as often as they should. Active retrieval is an effective but undervalued strategy for promoting meaningful learning.",
"title": ""
},
{
"docid": "9b1cf7cb855ba95693b90efacc34ac6d",
"text": "Cellular electron cryo-tomography enables the 3D visualization of cellular organization in the near-native state and at submolecular resolution. However, the contents of cellular tomograms are often complex, making it difficult to automatically isolate different in situ cellular components. In this paper, we propose a convolutional autoencoder-based unsupervised approach to provide a coarse grouping of 3D small subvolumes extracted from tomograms. We demonstrate that the autoencoder can be used for efficient and coarse characterization of features of macromolecular complexes and surfaces, such as membranes. In addition, the autoencoder can be used to detect non-cellular features related to sample preparation and data collection, such as carbon edges from the grid and tomogram boundaries. The autoencoder is also able to detect patterns that may indicate spatial interactions between cellular components. Furthermore, we demonstrate that our autoencoder can be used for weakly supervised semantic segmentation of cellular components, requiring a very small amount of manual annotation.",
"title": ""
},
{
"docid": "0d4ab4099b3293286cafaf260d5a8114",
"text": "This exploratory research investigates how students and professionals use social network sites (SNSs) in the setting of developing and emerging countries. Data collection included focus groups consisting of medical students and faculty as well as the analysis of a Facebook site centred on medical and clinical topics. The findings show how users, both students and professionals, appropriate social network sites from their mobile phones as rich educational tools in informal learning contexts. First, unlike in previous studies, the analysis revealed explicit forms of educational content embedded in Facebook, such as quizzes and case presentations and associated deliberate (e-)learning practices which are typically found in (more) formal educational settings. Second, from a socio-cultural learning perspective, it is shown how the participation in such virtual professional communities across national boundaries permits the announcement and negotiation of occupational status and professional identities. Introduction and background Technologies for development and health in \"resource-limited\" environments Technological innovations have given hope that new ICT tools will result in the overall progress and well-being of developing countries, in particular with respect to health and education services. Great expectations are attached to the spread of mobile communication technologies. The number of mobile cellular subscriptions worldwide is currently 4.7 billion and increasing. This includes people in remote and rural areas and \"resource-limited\" settings (The World Bank, 2011). To a much lesser extent there is also a discussion on affordances of social network sites (SNSs) in such contexts (Marcelo, Adejumo, & Luna, 2011). 
Discourses and projects on ICT(4)D (information technology for development) or mHealth (mobile technology for health) tend to be based on techno-centric and deterministic approaches where learning materials, either software or hardware, are distributed by central authorities or knowledge is \"delivered\" according to \"push-strategies\"; or, using the words of Traxler, information is pumped through the infrastructure, often in \"educationally naïve\" ways (in press). Similarly, the main direction of techno-centric and transmissional approaches appears to be from developed to \"developing\" countries, respectively from experts to novices. In spite of all efforts the situation is still problematic and ambitious visions have been only realised to a limited extent. For example, the goal of providing every person worldwide with access to an informed and educated healthcare provider by 2015 is unlikely to be realised. In particular, little progress has been made in meeting the information needs of frontline healthcare providers and ordinary citizens in low resource settings (Smith & Koehlmoos, 2011). Very often it is basic knowledge that is needed, related for example to the treatment of childhood pneumonia or diarrhoea, which cannot be accessed by healthcare providers such as family caregivers or health workers (HIFA Report, 2010). With this research we attempt to shed light on aspects of technology use, such as engagement with SNSs and mobile phones, in the context of health education in developing countries which, we would argue, have been widely neglected. In doing so, we hope to contribute to the academic discourses on SNSs and mobile learning. Since our approach follows the principles of case study research, the remainder of this paper is structured as follows. We continue with a brief and, admittedly, selective characterization of two underlying academic discourses that can inform this research, namely mobile learning and research on SNSs. 
After presenting our methodological approach and results we discuss the findings in the light of multiple theoretical concepts and empirical studies from these fields. We conclude with some practical considerations, limitations and directions for further research. Educational discourses on mobile learning and social network sites In the field of mobile learning, a small, yet rapidly growing research community, recent work has considered the (educational) use of mobile phones as an appropriation of cultural resources (Pachler, Cook, & Bachmair, 2010). In contrast to the classical binary and quantitative model of adoption, appropriation is centred on the question of how people use mobile phones once they have adopted them (Wirth, Von Pape, & Karnowski, 2008). Researchers define appropriation as the emerging \"processes of the internalization of the pre-given world of cultural products\" by the engagement of learners in the form of social practices with particular settings inside and outside of formal educational settings (Pachler, et al., 2010). While mobile learning research tend to focus on learning in schools, universities, workplaces or on life-long learning in industrialised countries (Frohberg, Göth, & Schwabe, 2009; Pachler, Pimmer, & Seipold, 2011; Pimmer, Pachler, & Attwell, 2010), some attention has also been paid to developing countries (see for example Traxler & Kukulska-Hulme, 2005). Research on SNSs is becoming increasingly popular not only in industrialised nations (boyd & Ellison, 2007) but, to a lesser extent, also in developing countries (Kolko, Rose, & Johnson, 2007). Increasing importance is attached to educational aspects of SNSs (Selwyn, 2009), though there is relatively little theoretical and empirical attention paid by social researchers to the form and nature of that learning in general (Merchant, 2011). 
Socio-cultural approaches to learning in general, and to social networks and mobile learning in particular are based on the notions of participation, belonging, communities and identity construction. It was suggested, for example, that such networks create a \"sense of place in a social world\" (Merchant, 2011) and can be considered as \"multi-audience identity production sites\" (Zhao, Grasmuck, & Martin, 2008). By documenting daily episodes by means of mobiles and social networks, such tools are said to contribute to the formation of (multiple) identities related to the life-worlds of users. In this sense, learning is considered as situated meaning-making and identity formation (Pachler, et al., 2010). The influence of SNSs on community practices was also discussed. An empirical study suggested, for example, that social network sites helped maintain relations as people move across different offline communities (Ellison, Steinfield, & Lampe, 2007). Also in formal educational environments, when social networks were deliberately used in order to support classroom-based teaching and learning, (unintended) community building was observed (Arnold & Paulus, 2010). However, research has little to say with respect to vocational and professional aspects of the use of SNSs. One study reported that a company's internal social network site supported professionals in building stronger relations with their weak ties and in getting in touch with professionals they did not know before (DiMicco et al., 2008). Another study that observed the use of mobiles and social software for the compilation of e-portfolios witnessed influences on identity trajectory according to the concepts of belonging to a workplace, becoming and then being a professional (Chan, 2011).",
"title": ""
},
{
"docid": "c56831d181d70ad663a5430092ee8978",
"text": "1Student, Department of Computer Science & Engineering, G.H.Raisoni Institute of Engineering & Management, Jalgaon, Maharashtra, India. 2Assistant Professor, Department of Information and Technology, G.H.Raisoni Institute of Engineering & Management, Jalgaon, Maharashtra, India. Abstract: As the deep web grows at a very fast pace, there has been increased interest in techniques that help efficiently locate deep-web interfaces. However, due to the large volume of web resources and the dynamic nature of the deep web, achieving wide coverage and high efficiency is a challenging issue. Therefore a two-stage enhanced web crawler framework is proposed for efficiently harvesting deep web interfaces. The proposed enhanced web crawler is divided into two stages. In the first stage, site locating is performed by using reverse searching, which finds relevant content. In the second stage, the enhanced web crawler achieves fast in-site searching by excavating the most relevant links of a site. It uses a novel deep web crawling framework based on reinforcement learning which is effective for crawling the deep web. The experimental results show that the method outperforms state-of-the-art methods in terms of crawling capability and achieves higher harvest rates than other crawlers.",
"title": ""
},
{
"docid": "1ca801ec3c0f5c0cbda2061ecd3cbfc0",
"text": "One objective of the French-funded (ANR-2006-SECU-006) ISyCri Project (ISyCri stands for Interoperability of Systems in Crisis situation) is to provide the crisis cell in charge of the situation management with an information system (IS) able to support the interoperability of partners involved in this collaborative situation. Such a system is called a Mediation Information System (MIS). This system must be in charge of (i) information exchange, (ii) services sharing and (iii) behavior orchestration. This paper presents the first step of MIS engineering, the deduction of a collaborative process used to coordinate actors of the crisis cell. In particular, this paper gives a formal definition of the deduction rules used to deduce the collaborative process.",
"title": ""
},
{
"docid": "4f52223cb3150b1b7a7079147bcb3bc2",
"text": "MAX NEUENDORF,1 AES Member, MARKUS MULTRUS,1 AES Member, NIKOLAUS RETTELBACH1, GUILLAUME FUCHS1, JULIEN ROBILLIARD1, JÉRÉMIE LECOMTE1, STEPHAN WILDE1, STEFAN BAYER,10 AES Member, SASCHA DISCH1, CHRISTIAN HELMRICH10, ROCH LEFEBVRE,2 AES Member, PHILIPPE GOURNAY2, BRUNO BESSETTE2, JIMMY LAPIERRE,2 AES Student Member, KRISTOFER KJÖRLING3, HEIKO PURNHAGEN,3 AES Member, LARS VILLEMOES,3 AES Associate Member, WERNER OOMEN,4 AES Member, ERIK SCHUIJERS4, KEI KIKUIRI5, TORU CHINEN6, TAKESHI NORIMATSU1, KOK SENG CHONG7, EUNMI OH,8 AES Member, MIYOUNG KIM8, SCHUYLER QUACKENBUSH,9 AES Fellow, AND BERNHARD GRILL1",
"title": ""
},
{
"docid": "fc2a0f6979c2520cee8f6e75c39790a8",
"text": "In this paper, we propose an effective face completion algorithm using a deep generative model. Different from well-studied background completion, the face completion task is more challenging as it often requires generating semantically new pixels for the missing key components (e.g., eyes and mouths) that contain large appearance variations. Unlike existing nonparametric algorithms that search for patches to synthesize, our algorithm directly generates content for missing regions based on a neural network. The model is trained with a combination of a reconstruction loss, two adversarial losses and a semantic parsing loss, which ensures pixel faithfulness and local-global content consistency. With extensive experimental results, we demonstrate qualitatively and quantitatively that our model is able to deal with a large area of missing pixels in arbitrary shapes and generate realistic face completion results.",
"title": ""
}
] | scidocsrr |
9a1fa0b7b8c2aef8ca0f36c7d5b5bc72 | Insights into deep neural networks for speaker recognition | [
{
"docid": "cd733cb756884a21cfcc9143e425f0f6",
"text": "We propose a novel framework for speaker recognition in which extraction of sufficient statistics for the state-of-the-art i-vector model is driven by a deep neural network (DNN) trained for automatic speech recognition (ASR). Specifically, the DNN replaces the standard Gaussian mixture model (GMM) to produce frame alignments. The use of an ASR-DNN system in the speaker recognition pipeline is attractive as it integrates the information from speech content directly into the statistics, allowing the standard backends to remain unchanged. Improvements from the proposed framework over a state-of-the-art system are 30% relative at the equal error rate when evaluated on the telephone conditions from the 2012 NIST speaker recognition evaluation (SRE). The proposed framework is a successful way to efficiently leverage transcribed data for speaker recognition, thus opening up a wide spectrum of research directions.",
"title": ""
},
{
"docid": "e64f1f11ed113ca91094ef36eaf794a7",
"text": "We describe the neural-network training framework used in the Kaldi speech recognition toolkit, which is geared towards training DNNs with large amounts of training data using multiple GPU-equipped or multicore machines. In order to be as hardware-agnostic as possible, we needed a way to use multiple machines without generating excessive network traffic. Our method is to average the neural network parameters periodically (typically every minute or two), and redistribute the averaged parameters to the machines for further training. Each machine sees different data. By itself, this method does not work very well. However, we have another method, an approximate and efficient implementation of Natural Gradient for Stochastic Gradient Descent (NG-SGD), which seems to allow our periodic-averaging method to work well, as well as substantially improving the convergence of SGD on a single machine.",
"title": ""
}
] | [
{
"docid": "c14eca26d1dc76a5e533583a56e4bd5d",
"text": "In restorative dentistry, the non-vital tooth and its restoration have been extensively studied from both its structural and esthetic aspects. The restoration of endodontically treated teeth has much in common with modern implantology: both must include multifaceted biological, biomechanical and esthetic considerations with a profound understanding of materials and techniques; both are technique sensitive and both require a multidisciplinary approach. And for both, two fundamental principles from team sports apply well: firstly, the weakest link determines the limits, and secondly, it is a very long way to the top, but a very short way to failure. Nevertheless, there is one major difference: if the tooth fails, there is the option of the implant, but if the implant fails, there is only another implant or nothing. The aim of this essay is to try to answer some clinically relevant conceptual questions and to give some clinical guidelines regarding the reconstructive aspects, based on scientific evidence and clinical expertise.",
"title": ""
},
{
"docid": "c4d1d0d636e23c377473fe631022bef1",
"text": "Electronic concept mapping tools provide a flexible vehicle for constructing concept maps, linking concept maps to other concept maps and related resources, and distributing concept maps to others. As electronic concept maps are constructed, it is often helpful for users to consult additional resources, in order to jog their memories or to locate resources to link to the map under construction. The World Wide Web provides a rich range of resources for these tasks—if the right resources can be found. This paper presents ongoing research on how to automatically generate Web queries from concept maps under construction, in order to proactively suggest related information to aid concept mapping. First, it examines how concept map structure and content can be exploited to automatically select terms to include in initial queries, based on studies of (1) how concept map structure influences human judgments of concept importance, and (2) the relative value of including information from concept labels and linking phrases. Second, it examines how a concept map can be used to refine future queries by reinforcing the weights of terms that have proven to be good discriminators for the topic of the concept map. The described methods are being applied to developing “intelligent suggesters” to support the concept mapping process.",
"title": ""
},
{
"docid": "5a7b68c341e20d5d788e46c089cfd855",
"text": "This study aims at investigating alcoholic inpatients' attachment system by combining a measurement of adult attachment style (AAQ, Hazan and Shaver, 1987. Journal of Personality and Social Psychology, 52(3): 511-524) and the degree of alexithymia (BVAQ, Bermond and Vorst, 1998. Bermond-Vorst Alexithymia Questionnaire, Unpublished data). Data were collected from 101 patients (71 men, 30 women) admitted to a psychiatric hospital in Belgium for alcohol use-related problems, between September 2003 and December 2004. To investigate the research question, cluster analyses and regression analyses are performed. We found that it makes sense to distinguish three subgroups of alcoholic inpatients with different degrees of impairment of the attachment system. Our results also reveal a pattern of correspondence between the severity of psychiatric symptoms-personality disorder traits (ADP-IV), anxiety (STAI), and depression (BDI-II-Nl)-and the severity of the attachment system's impairment. Limitations of the study and suggestions for further research are highlighted and implications for diagnosis and treatment are discussed.",
"title": ""
},
{
"docid": "e85b761664a01273a10819566699bf4f",
"text": "Julius Bernstein belonged to the Berlin school of “organic physicists” who played a prominent role in creating modern physiology and biophysics during the second half of the nineteenth century. He trained under du Bois-Reymond in Berlin, worked with von Helmholtz in Heidelberg, and finally became Professor of Physiology at the University of Halle. Nowadays his name is primarily associated with two discoveries: (1) The first accurate description of the action potential in 1868. He developed a new instrument, a differential rheotome (= current slicer) that allowed him to resolve the exact time course of electrical activity in nerve and muscle and to measure its conduction velocity. (2) His ‘Membrane Theory of Electrical Potentials’ in biological cells and tissues. This theory, published by Bernstein in 1902, provided the first plausible physico-chemical model of bioelectric events; its fundamental concepts remain valid to this day. Bernstein pursued an intense and long-range program of research in which he achieved a new level of precision and refinement by formulating quantitative theories supported by exact measurements. The innovative design and application of his electromechanical instruments were milestones in the development of biomedical engineering techniques. His seminal work prepared the ground for hypotheses and experiments on the conduction of the nervous impulse and ultimately the transmission of information in the nervous system. Shortly after his retirement, Bernstein (1912) summarized his electrophysiological work and extended his theoretical concepts in a book Elektrobiologie that became a classic in its field. The Bernstein Centers for Computational Neuroscience recently established at several universities in Germany were named to honor the person and his work.",
"title": ""
},
{
"docid": "78d00cb1af094c91cc7877ba051f925e",
"text": "Neuropathic pain refers to pain that originates from pathology of the nervous system. Diabetes, infection (herpes zoster), nerve compression, nerve trauma, \"channelopathies,\" and autoimmune disease are examples of diseases that may cause neuropathic pain. The development of both animal models and newer pharmacological strategies has led to an explosion of interest in the underlying mechanisms. Neuropathic pain reflects both peripheral and central sensitization mechanisms. Abnormal signals arise not only from injured axons but also from the intact nociceptors that share the innervation territory of the injured nerve. This review focuses on how both human studies and animal models are helping to elucidate the mechanisms underlying these surprisingly common disorders. The rapid gain in knowledge about abnormal signaling promises breakthroughs in the treatment of these often debilitating disorders.",
"title": ""
},
{
"docid": "30e47a275e7e00f80c8f12061575ee82",
"text": "Spliddit is a first-of-its-kind fair division website, which offers provably fair solutions for the division of rent, goods, and credit. In this note, we discuss Spliddit's goals, methods, and implementation.",
"title": ""
},
{
"docid": "3a5d43d86d39966aca2d93d1cf66b13d",
"text": "In the current context of increased surveillance and security, more sophisticated and robust surveillance systems are needed. One idea relies on the use of pairs of video (visible spectrum) and thermal infrared (IR) cameras located around premises of interest. To automate the system, a robust person detection algorithm and an efficient technique enabling the fusion of the information provided by the two sensors become necessary, and these are described in this chapter. Multi-sensor image fusion is a challenging task that is fundamental to several modern-day image processing applications, such as security systems, defence applications, and intelligent machines. Image fusion techniques have been actively investigated and have wide application in various fields. Image fusion is often a vital pre-processing procedure for many computer vision and image processing tasks which depend on the acquisition of imaging data via sensors, such as IR and visible. One such task is that of human detection. To detect humans with an artificial system is difficult for a number of reasons, as shown in Figure 1 (Gavrila, 2001). The main challenge for a vision-based pedestrian detector is the high degree of variability in human appearance due to articulated motion, body size, partial occlusion, inconsistent clothing texture, highly cluttered backgrounds and changing lighting conditions.",
"title": ""
},
{
"docid": "6a1fa32d9a716b57a321561dfce83879",
"text": "Most successful computational approaches for protein function prediction integrate multiple genomics and proteomics data sources to make inferences about the function of unknown proteins. The most accurate of these algorithms have long running times, making them unsuitable for real-time protein function prediction in large genomes. As a result, the predictions of these algorithms are stored in static databases that can easily become outdated. We propose a new algorithm, GeneMANIA, that is as accurate as the leading methods, while capable of predicting protein function in real-time. We use a fast heuristic algorithm, derived from ridge regression, to integrate multiple functional association networks and predict gene function from a single process-specific network using label propagation. Our algorithm is efficient enough to be deployed on a modern webserver and is as accurate as, or more so than, the leading methods on the MouseFunc I benchmark and a new yeast function prediction benchmark; it is robust to redundant and irrelevant data and requires, on average, less than ten seconds of computation time on tasks from these benchmarks. GeneMANIA is fast enough to predict gene function on-the-fly while achieving state-of-the-art accuracy. A prototype version of a GeneMANIA-based webserver is available at http://morrislab.med.utoronto.ca/prototype .",
"title": ""
},
{
"docid": "9d3e0a8af748c9addf598a27f414e0b2",
"text": "Although insecticide resistance is a widespread problem for most insect pests, frequently the assessment of resistance occurs over a limited geographic range. Herein, we report the first widespread survey of insecticide resistance in the USA ever undertaken for the house fly, Musca domestica, a major pest in animal production facilities. The levels of resistance to six different insecticides were determined (using discriminating concentration bioassays) in 10 collections of house flies from dairies in nine different states. In addition, the frequencies of Vssc and CYP6D1 alleles that confer resistance to pyrethroid insecticides were determined for each fly population. Levels of resistance to the six insecticides varied among states and insecticides. Resistance to permethrin was highest overall and most consistent across the states. Resistance to methomyl was relatively consistent, with 65-91% survival in nine of the ten collections. In contrast, resistance to cyfluthrin and pyrethrins + piperonyl butoxide varied considerably (2.9-76% survival). Resistance to imidacloprid was overall modest and showed no signs of increasing relative to collections made in 2004, despite increasing use of this insecticide. The frequency of Vssc alleles that confer pyrethroid resistance was variable between locations. The highest frequencies of kdr, kdr-his and super-kdr were found in Minnesota, North Carolina and Kansas, respectively. In contrast, the New Mexico population had the highest frequency (0.67) of the susceptible allele. The implications of these results to resistance management and to the understanding of the evolution of insecticide resistance are discussed.",
"title": ""
},
{
"docid": "5064d758b361171310ac31c323aa734b",
"text": "The problem of assessing the significance of data mining results on high-dimensional 0-1 data sets has been studied extensively in the literature. For problems such as mining frequent sets and finding correlations, significance testing can be done by, e.g., chi-square tests, or many other methods. However, the results of such tests depend only on the specific attributes and not on the dataset as a whole. Moreover, the tests are more difficult to apply to sets of patterns or other complex results of data mining. In this paper, we consider a simple randomization technique that deals with this shortcoming. The approach consists of producing random datasets that have the same row and column margins with the given dataset, computing the results of interest on the randomized instances, and comparing them against the results on the actual data. This randomization technique can be used to assess the results of many different types of data mining algorithms, such as frequent sets, clustering, and rankings. To generate random datasets with given margins, we use variations of a Markov chain approach, which is based on a simple swap operation. We give theoretical results on the efficiency of different randomization methods, and apply the swap randomization method to several well-known datasets. Our results indicate that for some datasets the structure discovered by the data mining algorithms is a random artifact, while for other datasets the discovered structure conveys meaningful information.",
"title": ""
},
{
"docid": "ffbab4b090448de06ff5237d43c5e293",
"text": "Motivated by a project to create a system for people who are deaf or hard-of-hearing that would use automatic speech recognition (ASR) to produce real-time text captions of spoken English during in-person meetings with hearing individuals, we have augmented a transcript of the Switchboard conversational dialogue corpus with an overlay of word-importance annotations, with a numeric score for each word, to indicate its importance to the meaning of each dialogue turn. Further, we demonstrate the utility of this corpus by training an automatic word importance labeling model; our best performing model has an F-score of 0.60 in an ordinal 6-class word-importance classification task with an agreement (concordance correlation coefficient) of 0.839 with the human annotators (agreement score between annotators is 0.89). Finally, we discuss our intended future applications of this resource, particularly for the task of evaluating ASR performance, i.e. creating metrics that predict ASR-output caption text usability for DHH users better than Word Error Rate (WER).",
"title": ""
},
{
"docid": "471db984564becfea70fb2946ef4871e",
"text": "We propose a novel group regularization which we call exclusive lasso. Unlike the group lasso regularizer that assumes covarying variables in groups, the proposed exclusive lasso regularizer models the scenario when variables in the same group compete with each other. Analysis is presented to illustrate the properties of the proposed regularizer. We present a framework of kernel based multi-task feature selection algorithm based on the proposed exclusive lasso regularizer. An efficient algorithm is derived to solve the related optimization problem. Experiments with document categorization show that our approach outperforms state-of-theart algorithms for multi-task feature selection.",
"title": ""
},
{
"docid": "9cdc0646b8c057ead7000ec14736fc12",
"text": "This paper presents a multilayer aperture coupled microstrip antenna with a non symmetric U-shaped feed line. The antenna structure consists of a rectangular patch which is excited through two slots on the ground plane. A parametric study is presented on the effects of the position and dimensions of the slots. Results show that the antenna has VSWR < 2 from 2.6 GHz to 5.4 GHz (70%) and the gain of the structure is more than 7 dB from 2.7 GHz to 4.4 GHz (48%).",
"title": ""
},
{
"docid": "f3f70e5ba87399e9d44bda293a231399",
"text": "During natural disasters or crises, users on social media tend to readily believe the content of postings related to the events, and retweet those postings in the hope that they will reach many other users. Unfortunately, there are malicious users who understand this tendency and post misinformation such as spam and fake messages, expecting wider propagation. To address the problem, in this paper we conduct a case study of the 2013 Moore Tornado and Hurricane Sandy. Concretely, we (i) examine the behaviors of these malicious users, (ii) analyze properties of spam, fake and legitimate messages, (iii) propose flat and hierarchical classification approaches, and (iv) detect both fake and spam messages while also distinguishing between them. Our experimental results show that our proposed approaches identify spam and fake messages with 96.43% accuracy and 0.961 F-measure.",
"title": ""
},
{
"docid": "10ef865d0c70369d64c900fb46a1399d",
"text": "This work introduces a set of scalable algorithms to identify patterns of human daily behaviors. These patterns are extracted from multivariate temporal data that have been collected from smartphones. We have exploited sensors that are available on these devices, and have identified frequent behavioral patterns with a temporal granularity, which has been inspired by the way individuals segment time into events. These patterns are helpful to both end-users and third parties who provide services based on this information. We have demonstrated our approach on two real-world datasets and showed that our pattern identification algorithms are scalable. This scalability makes analysis on resource constrained and small devices such as smartwatches feasible. Traditional data analysis systems are usually operated in a remote system outside the device. This is largely due to the lack of scalability originating from software and hardware restrictions of mobile/wearable devices. By analyzing the data on the device, the user has the control over the data, i.e., privacy, and the network costs will also be removed.",
"title": ""
},
{
"docid": "c5f0155b2f6ce35a9cbfa38773042833",
"text": "Leishmaniasis is caused by protozoa of the genus Leishmania; presentation restricted to the mucosa is infrequent. Although the nasal mucosa is the main site affected in this form of the disease, involvement of the lips, mouth, pharynx and larynx is also possible. The lesions are characteristically ulcerative-vegetative, with granulation tissue formation. Patients usually complain of pain, dysphagia and odynophagia. Differential diagnosis should include cancer, infectious diseases and granulomatous diseases. We present the case of a 64-year-old male patient from an endemic area for American Tegumentary Leishmaniasis (ATL), with a chief complaint of persistent dysphagia and nasal obstruction for 6 months. The lesion was ulcerative, with purulent infiltration into the soft palate and uvula. After excluding other diseases, ATL was suggested as a hypothesis, and serology and biopsy of the lesions were requested. Treatment with pentavalent antimony was started, and the patient presented regression of the lesions in 30 days, with no other complications.",
"title": ""
},
{
"docid": "362c41e8f90c097160c7785e8b4c9053",
"text": "This paper focuses on biomimetic design in the field of technical textiles / smart fabrics. Biologically inspired design is a very promising approach that has provided many elegant solutions. Firstly, a few bio-inspired innovations are presented, followed by the introduction of trans-disciplinary research as a useful tool for defining the design problem and giving solutions. Furthermore, the required methods for identifying and applying biological analogies are analysed. Finally, the biomimetic approach is questioned and the difficulties, limitations and errors that a designer might face when adopting it are discussed. Researchers and product developers that use this approach should also problematize the role of biomimetic design: is it a practice that redirects us towards a new model of sustainable development or is it just another tool for generating product ideas in order to increase a company’s competitiveness in the global market?",
"title": ""
},
{
"docid": "98e392ace28d496dafd83ec962ce00af",
"text": "Continuous-time Markov chains (CTMCs) have been widely used to determine system performance and dependability characteristics. Their analysis most often concerns the computation of steady-state and transient-state probabilities. This paper introduces a branching temporal logic for expressing real-time probabilistic properties on CTMCs and presents approximate model checking algorithms for this logic. The logic, an extension of the continuous stochastic logic CSL of Aziz et al., contains a time-bounded until operator to express probabilistic timing properties over paths as well as an operator to express steady-state probabilities. We show that the model checking problem for this logic reduces to a system of linear equations (for unbounded until and the steady-state operator) and a Volterra integral equation system (for time-bounded until). We then show that the problem of model-checking timebounded until properties can be reduced to the problem of computing transient state probabilities for CTMCs. This allows the verification of probabilistic timing properties by efficient techniques for transient analysis for CTMCs such as uniformization. Finally, we show that a variant of lumping equivalence (bisimulation), a well-known notion for aggregating CTMCs, preserves the validity of all formulas in the logic.",
"title": ""
},
{
"docid": "0512987d091d29681eb8ba38a1079cff",
"text": "Deep convolutional neural networks (CNNs) have shown excellent performance in object recognition tasks and dense classification problems such as semantic segmentation. However, training deep neural networks on large and sparse datasets is still challenging and can require large amounts of computation and memory. In this work, we address the task of performing semantic segmentation on large data sets, such as three-dimensional medical images. We propose an adaptive sampling scheme that uses a-posteriori error maps, generated throughout training, to focus sampling on difficult regions, resulting in improved learning. Our contribution is threefold: 1) We give a detailed description of the proposed sampling algorithm to speed up and improve learning performance on large images. 2) We propose a deep dual path CNN that captures information at fine and coarse scales, resulting in a network with a large field of view and high resolution outputs. 3) We show that our method is able to attain new state-of-the-art results on the VISCERAL Anatomy benchmark.",
"title": ""
}
] | scidocsrr |
a0e24031f03b66cf7151caa726854b22 | Individual differences in executive control relate to metaphor processing: an eye movement study of sentence reading | [
{
"docid": "8230ddd7174a2562c0fe0f83b1bf7cf7",
"text": "Metaphors are fundamental to creative thought and expression. Newly coined metaphors regularly infiltrate our collective vocabulary and gradually become familiar, but it is unclear how this shift from novel to conventionalized meaning happens in the brain. We investigated the neural career of metaphors in a functional magnetic resonance imaging study using extensively normed new metaphors and simulated the ordinary, gradual experience of metaphor conventionalization by manipulating participants' exposure to these metaphors. Results showed that the conventionalization of novel metaphors specifically tunes activity within bilateral inferior prefrontal cortex, left posterior middle temporal gyrus, and right postero-lateral occipital cortex. These results support theoretical accounts attributing a role for the right hemisphere in processing novel, low salience figurative meanings, but also show that conventionalization of metaphoric meaning is a bilaterally-mediated process. Metaphor conventionalization entails a decreased neural load within semantic networks rather than a hemispheric or regional shift across brain areas.",
"title": ""
},
{
"docid": "8feb5dce809acf0efb63d322f0526fcf",
"text": "Recent studies of eye movements in reading and other information processing tasks, such as music reading, typing, visual search, and scene perception, are reviewed. The major emphasis of the review is on reading as a specific example of cognitive processing. Basic topics discussed with respect to reading are (a) the characteristics of eye movements, (b) the perceptual span, (c) integration of information across saccades, (d) eye movement control, and (e) individual differences (including dyslexia). Similar topics are discussed with respect to the other tasks examined. The basic theme of the review is that eye movement data reflect moment-to-moment cognitive processes in the various tasks examined. Theoretical and practical considerations concerning the use of eye movement data are also discussed.",
"title": ""
}
] | [
{
"docid": "b403f37f0c27d4fe2b0f398c4c72f7a6",
"text": "In this work we present a novel approach to predict the function of proteins in protein-protein interaction (PPI) networks. We classify existing approaches into inductive and transductive approaches, and into local and global approaches. As of yet, among the group of inductive approaches, only local ones have been proposed for protein function prediction. We here introduce a protein description formalism that also includes global information, namely information that locates a protein relative to specific important proteins in the network. We analyze the effect on function prediction accuracy of selecting a different number of important proteins. With around 70 important proteins, even in large graphs, our method makes good and stable predictions. Furthermore, we investigate whether our method also classifies proteins accurately on more detailed function levels. We examined up to five different function levels. The method is benchmarked on four datasets where we found classification performance according to F-measure values indeed improves by 9 percent over the benchmark methods employed.",
"title": ""
},
{
"docid": "bdc8cf5c66c4e0c29de33d3d1fcb5234",
"text": "In order to fully understand the sensory, perceptual, and cognitive issues associated with helmet-/head-mounted displays (HMDs), it is essential to possess an understanding of exactly what constitutes an HMD, the various design types, their advantages and limitations, and their applications. It also is useful to explore the developmental history of these systems. Such an exploration can reveal the major engineering, human factors, and ergonomic issues encountered in the development cycle. These identified issues usually are indicators of where the most attention needs to be placed when evaluating the usefulness of such systems. New HMD systems are implemented because they are intended to provide some specific capability or performance enhancement. However, these improvements always come at a cost. In reality, the introduction of technology is a tradeoff endeavor. It is necessary to identify and assess the tradeoffs that impact overall system and user sensory systems performance. HMD developers have often and incorrectly assumed that the human visual and auditory systems are fully capable of accepting the added sensory and cognitive demands of an HMD system without incurring performance degradation or introducing perceptual illusions. Situation awareness (SA), essential in preventing actions or inactions that lead to catastrophic outcomes, may be degraded if the HMD interferes with normal perceptual processes, resulting in misinterpretations or misperceptions (illusions). As HMD applications increase, it is important to maintain an awareness of both current and future programs. Unfortunately, in these developmental programs, one factor still is often minimized. This factor is how the user accepts and eventually uses the HMD. In the demanding rigors of warfare, the user rapidly decides whether using a new HMD, intended to provide tactical and other information, outweighs the impact the HMD has on survival and immediate mission success. 
If the system requires an unacceptable compromise in any aspect of mission completion deemed critical to the Warfighter, the HMD will not be used. Technology in which the Warfighter does have confidence or determines to be a liability will go unused.",
"title": ""
},
{
"docid": "095dd4efbb23bc91b72dea1cd1c627ab",
"text": "Cell-cell communication is critical across an assortment of physiological and pathological processes. Extracellular vesicles (EVs) represent an integral facet of intercellular communication largely through the transfer of functional cargo such as proteins, messenger RNAs (mRNAs), microRNA (miRNAs), DNAs and lipids. EVs, especially exosomes and shed microvesicles, represent an important delivery medium in the tumour micro-environment through the reciprocal dissemination of signals between cancer and resident stromal cells to facilitate tumorigenesis and metastasis. An important step of the metastatic cascade is the reprogramming of cancer cells from an epithelial to mesenchymal phenotype (epithelial-mesenchymal transition, EMT), which is associated with increased aggressiveness, invasiveness and metastatic potential. There is now increasing evidence demonstrating that EVs released by cells undergoing EMT are reprogrammed (protein and RNA content) during this process. This review summarises current knowledge of EV-mediated functional transfer of proteins and RNA species (mRNA, miRNA, long non-coding RNA) between cells in cancer biology and the EMT process. An in-depth understanding of EVs associated with EMT, with emphasis on molecular composition (proteins and RNA species), will provide fundamental insights into cancer biology.",
"title": ""
},
{
"docid": "f1df8b69dfec944b474b9b26de135f55",
"text": "Background:There are currently two million cancer survivors in the United Kingdom, and in recent years this number has grown by 3% per annum. The aim of this paper is to provide long-term projections of cancer prevalence in the United Kingdom.Methods:National cancer registry data for England were used to estimate cancer prevalence in the United Kingdom in 2009. Using a model of prevalence as a function of incidence, survival and population demographics, projections were made to 2040. Different scenarios of future incidence and survival, and their effects on cancer prevalence, were also considered. Colorectal, lung, prostate, female breast and all cancers combined (excluding non-melanoma skin cancer) were analysed separately.Results:Assuming that existing trends in incidence and survival continue, the number of cancer survivors in the United Kingdom is projected to increase by approximately one million per decade from 2010 to 2040. Particularly large increases are anticipated in the oldest age groups, and in the number of long-term survivors. By 2040, almost a quarter of people aged at least 65 will be cancer survivors.Conclusion:Increasing cancer survival and the growing/ageing population of the United Kingdom mean that the population of survivors is likely to grow substantially in the coming decades, as are the related demands upon the health service. Plans must, therefore, be laid to ensure that the varied needs of cancer survivors can be met in the future.",
"title": ""
},
{
"docid": "56179ddce0ba91184cca226d482a2da4",
"text": "An original differential structure using exclusively MOS devices working in the saturation region will be further presented. Performing the great advantage of an excellent linearity, obtained by a proper biasing of the differential core (using original translation and arithmetical mean blocks), the proposed circuit is designed for low-voltage low- power operation. The estimated linearity is obtained for an extended range of the differential input voltage and in the worst case of considering second-order effects that affect MOS transistors operation. The frequency response of the new differential structure is strongly increased by operating all MOS devices in the saturation region. The circuit is implemented in 0.35 mum CMOS technology, SPICE simulations confirming the theoretical estimated results.",
"title": ""
},
{
"docid": "e5f2a33ef8952e1b8c5129e8aa65045c",
"text": "This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections. We formulate this problem in terms of finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube, and propose a simple and efficient alternating minimization algorithm to accomplish this task. This algorithm, dubbed iterative quantization (ITQ), has connections to multiclass spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). The resulting binary codes significantly outperform several other state-of-the-art methods. We also show that further performance improvements can result from transforming the data with a nonlinear kernel mapping prior to PCA or CCA. Finally, we demonstrate an application of ITQ to learning binary attributes or \"classemes\" on the ImageNet data set.",
"title": ""
},
{
"docid": "dfa51004b99bce29e644fbcca4b833a5",
"text": "This paper presents Latent Sampling-based Motion Planning (L-SBMP), a methodology towards computing motion plans for complex robotic systems by learning a plannable latent representation. Recent works in control of robotic systems have effectively leveraged local, low-dimensional embeddings of high-dimensional dynamics. In this paper we combine these recent advances with techniques from samplingbased motion planning (SBMP) in order to design a methodology capable of planning for high-dimensional robotic systems beyond the reach of traditional approaches (e.g., humanoids, or even systems where planning occurs in the visual space). Specifically, the learned latent space is constructed through an autoencoding network, a dynamics network, and a collision checking network, which mirror the three main algorithmic primitives of SBMP, namely state sampling, local steering, and collision checking. Notably, these networks can be trained through only raw data of the system’s states and actions along with a supervising collision checker. Building upon these networks, an RRT-based algorithm is used to plan motions directly in the latent space – we refer to this exploration algorithm as Learned Latent RRT (L2RRT). This algorithm globally explores the latent space and is capable of generalizing to new environments. The overall methodology is demonstrated on two planning problems, namely a visual planning problem, whereby planning happens in the visual (pixel) space, and a humanoid robot planning problem.",
"title": ""
},
{
"docid": "d06dc916942498014f9d00498c1d1d1f",
"text": "In this paper we propose a state space modeling approach for trust evaluation in wireless sensor networks. In our state space trust model (SSTM), each sensor node is associated with a trust metric, which measures to what extent the data transmitted from this node would better be trusted by the server node. Given the SSTM, we translate the trust evaluation problem to be a nonlinear state filtering problem. To estimate the state based on the SSTM, a component-wise iterative state inference procedure is proposed to work in tandem with the particle filter, and thus the resulting algorithm is termed as iterative particle filter (IPF). The computational complexity of the IPF algorithm is theoretically linearly related with the dimension of the state. This property is desirable especially for high dimensional trust evaluation and state filtering problems. The performance of the proposed algorithm is evaluated by both simulations and real data analysis. Index Terms state space trust model, wireless sensor network, trust evaluation, particle filter, high dimensional. ✦",
"title": ""
},
{
"docid": "e1096df0a86d37c11ed4a31d9e67ac6e",
"text": "............................................................................................................................................... 4",
"title": ""
},
{
"docid": "0be273eb8dfec6a6f71a44f38e8207ba",
"text": "Clustering is a powerful tool which has been used in several forecasting works, such as time series forecasting, real time storm detection, flood forecasting and so on. In this paper, a generic methodology for weather forecasting is proposed by the help of incremental K-means clustering algorithm. Weather forecasting plays an important role in day to day applications.Weather forecasting of this paper is done based on the incremental air pollution database of west Bengal in the years of 2009 and 2010. This paper generally uses typical Kmeans clustering on the main air pollution database and a list of weather category will be developed based on the maximum mean values of the clusters.Now when the new data are coming, the incremental K-means is used to group those data into those clusters whose weather category has been already defined. Thus it builds up a strategy to predict the weather of the upcoming data of the upcoming days. This forecasting database is totally based on the weather of west Bengal and this forecasting methodology is developed to mitigating the impacts of air pollutions and launch focused modeling computations for prediction and forecasts of weather events. Here accuracy of this approach is also measured.",
"title": ""
},
{
"docid": "8c007238a61730cc2fb20d091d561aea",
"text": "The Class II division 2 (Class II/2) malocclusion as originally defined by E.H. Angle is relatively rare. The orthodontic literature does not agree on the skeletal characteristics of this malocclusion. Several researchers claim that it is characterized by an orthognathic facial pattern and that the malocclusion is dentoalveolar per se. Others claim that the Class II/2 malocclusion has unique skeletal and dentoalveolar characteristics. The present study describes the skeletal and dentoalveolar cephalometric characteristics of 50 patients clinically diagnosed as having Class II/2 malocclusion according to Angle's original criteria. The study compares the findings with those of both a control group of 54 subjects with Class II division I (Class II/1) malocclusion and a second control group of 34 subjects with Class I (Class I) malocclusion. The findings demonstrate definite skeletal and dentoalveolar patterns with the following characteristics: (1) the maxilla is orthognathic, (2) the mandible has relatively short and retrognathic parameters, (3) the chin is relatively prominent, (4) the facial pattern is hypodivergent, (5) the upper central incisors are retroclined, and (6) the overbite is deep. The results demonstrate that, in a sagittal direction, the entity of Angle Class II/2 malocclusion might actually be located between the Angle Class I and the Angle Class II/1 malocclusions. with unique vertical skeletal characteristics.",
"title": ""
},
{
"docid": "29c32c8c447b498f43ec215633305923",
"text": "A growing body of evidence suggests that empathy for pain is underpinned by neural structures that are also involved in the direct experience of pain. In order to assess the consistency of this finding, an image-based meta-analysis of nine independent functional magnetic resonance imaging (fMRI) investigations and a coordinate-based meta-analysis of 32 studies that had investigated empathy for pain using fMRI were conducted. The results indicate that a core network consisting of bilateral anterior insular cortex and medial/anterior cingulate cortex is associated with empathy for pain. Activation in these areas overlaps with activation during directly experienced pain, and we link their involvement to representing global feeling states and the guidance of adaptive behavior for both self- and other-related experiences. Moreover, the image-based analysis demonstrates that depending on the type of experimental paradigm this core network was co-activated with distinct brain regions: While viewing pictures of body parts in painful situations recruited areas underpinning action understanding (inferior parietal/ventral premotor cortices) to a stronger extent, eliciting empathy by means of abstract visual information about the other's affective state more strongly engaged areas associated with inferring and representing mental states of self and other (precuneus, ventral medial prefrontal cortex, superior temporal cortex, and temporo-parietal junction). In addition, only the picture-based paradigms activated somatosensory areas, indicating that previous discrepancies concerning somatosensory activity during empathy for pain might have resulted from differences in experimental paradigms. We conclude that social neuroscience paradigms provide reliable and accurate insights into complex social phenomena such as empathy and that meta-analyses of previous studies are a valuable tool in this endeavor.",
"title": ""
},
{
"docid": "ed13193df5db458d0673ccee69700bc0",
"text": "Interest in meat fatty acid composition stems mainly from the need to find ways to produce healthier meat, i.e. with a higher ratio of polyunsaturated (PUFA) to saturated fatty acids and a more favourable balance between n-6 and n-3 PUFA. In pigs, the drive has been to increase n-3 PUFA in meat and this can be achieved by feeding sources such as linseed in the diet. Only when concentrations of α-linolenic acid (18:3) approach 3% of neutral lipids or phospholipids are there any adverse effects on meat quality, defined in terms of shelf life (lipid and myoglobin oxidation) and flavour. Ruminant meats are a relatively good source of n-3 PUFA due to the presence of 18:3 in grass. Further increases can be achieved with animals fed grain-based diets by including whole linseed or linseed oil, especially if this is \"protected\" from rumen biohydrogenation. Long-chain (C20-C22) n-3 PUFA are synthesised from 18:3 in the animal although docosahexaenoic acid (DHA, 22:6) is not increased when diets are supplemented with 18:3. DHA can be increased by feeding sources such as fish oil although too-high levels cause adverse flavour and colour changes. Grass-fed beef and lamb have naturally high levels of 18:3 and long chain n-3 PUFA. These impact on flavour to produce a 'grass fed' taste in which other components of grass are also involved. Grazing also provides antioxidants including vitamin E which maintain PUFA levels in meat and prevent quality deterioration during processing and display. In pork, beef and lamb the melting point of lipid and the firmness/hardness of carcass fat is closely related to the concentration of stearic acid (18:0).",
"title": ""
},
{
"docid": "54fc5bc85ef8022d099fff14ab1b7ce0",
"text": "Automatic inspection of Mura defects is a challenging task in thin-film transistor liquid crystal display (TFT-LCD) defect detection, which is critical for LCD manufacturers to guarantee high standard quality control. In this paper, we propose a set of automatic procedures to detect mura defects by using image processing and computer vision techniques. Singular Value Decomposition (SVD) and Discrete Cosine Transformation(DCT) techniques are employed to conduct image reconstruction, based on which we are able to obtain the differential image of LCD Cells. In order to detect different types of mura defects accurately, we then design a method that employs different detection modules adaptively, which can overcome the disadvantage of simply using a single threshold value. Finally, we provide the experimental results to validate the effectiveness of the proposed method in mura detection.",
"title": ""
},
{
"docid": "e964d88be0270bc6ee7eb7748868dd3c",
"text": "The standard serial algorithm for strongly connected components is based on depth rst search, which is di cult to parallelize. We describe a divide-and-conquer algorithm for this problem which has signi cantly greater potential for parallelization. For a graph with n vertices in which degrees are bounded by a constant, we show the expected serial running time of our algorithm to be O(n log n).",
"title": ""
},
{
"docid": "18ffa160ffce386993b5c2da5070b364",
"text": "This paper presents a new approach for facial attribute classification using a multi-task learning approach. Unlike other approaches that uses hand engineered features, our model learns a shared feature representation that is wellsuited for multiple attribute classification. Learning a joint feature representation enables interaction between different tasks. For learning this shared feature representation we use a Restricted Boltzmann Machine (RBM) based model, enhanced with a factored multi-task component to become Multi-Task Restricted Boltzmann Machine (MT-RBM). Our approach operates directly on faces and facial landmark points to learn a joint feature representation over all the available attributes. We use an iterative learning approach consisting of a bottom-up/top-down pass to learn the shared representation of our multi-task model and at inference we use a bottom-up pass to predict the different tasks. Our approach is not restricted to any type of attributes, however, for this paper we focus only on facial attributes. We evaluate our approach on three publicly available datasets, the Celebrity Faces (CelebA), the Multi-task Facial Landmarks (MTFL), and the ChaLearn challenge dataset. We show superior classification performance improvement over the state-of-the-art.",
"title": ""
},
{
"docid": "16b64bf865bae192b604faaf6f916ff1",
"text": "Recurrent Neural Networks (RNNs) have obtained excellent result in many natural language processing (NLP) tasks. However, understanding and interpreting the source of this success remains a challenge. In this paper, we propose Recurrent Memory Network (RMN), a novel RNN architecture, that not only amplifies the power of RNN but also facilitates our understanding of its internal functioning and allows us to discover underlying patterns in data. We demonstrate the power of RMN on language modeling and sentence completion tasks. On language modeling, RMN outperforms Long Short-Term Memory (LSTM) network on three large German, Italian, and English dataset. Additionally we perform indepth analysis of various linguistic dimensions that RMN captures. On Sentence Completion Challenge, for which it is essential to capture sentence coherence, our RMN obtains 69.2% accuracy, surpassing the previous state of the art by a large margin.1",
"title": ""
},
{
"docid": "b9300a58c4b55bfb0f57b36e5054e5c6",
"text": "The problem of designing, coordinating, and managing complex systems has been central to the management and organizations literature. Recent writings have tended to offer modularity as, at least, a partial solution to this design problem. However, little attention has been paid to the problem of identifying what constitutes an appropriate modularization of a complex system. We develop a formal simulation model that allows us to carefully examine the dynamics of innovation and performance in complex systems. The model points to the trade-off between the destabilizing effects of overly refined modularization and the modest levels of search and a premature fixation on inferior designs that can result from excessive levels of integration. The analysis highlights an asymmetry in this trade-off, with excessively refined modules leading to cycling behavior and a lack of performance improvement. We discuss the implications of these arguments for product and organization design.",
"title": ""
},
{
"docid": "5ca886592c6bb484bf04847ecfb3469d",
"text": "In power transistor switching circuits, shunt snubbers (dv/dt limiting capacitors) are often used to reduce the turn-off switching loss or prevent reverse-biased second breakdown. Similarly, series snubbers (di/dt limiting inductors) are used to reduce the turn-on switching loss or prevent forward-biased second breakdown. In both cases energy is stored in the reactive element of the snubber and is dissipated during its discharge. If the circuit includes a transformer, a voltage clamp across the transistor may be needed to absorb the energy trapped in the leakage inductance. The action of these typical snubber and clamp arrangements is analyzed and applied to optimize the design of a flyback converter used as a battery charger.",
"title": ""
},
{
"docid": "fee50f8ab87f2b97b83ca4ef92f57410",
"text": "Ontologies now play an important role for many knowledge-intensive applications for which they provide a source of precisely defined terms. However, with their wide-spread usage there come problems concerning their proliferation. Ontology engineers or users frequently have a core ontology that they use, e.g., for browsing or querying data, but they need to extend it with, adapt it to, or compare it with the large set of other ontologies. For the task of detecting and retrieving relevant ontologies, one needs means for measuring the similarity between ontologies. We present a set of ontology similarity measures and a multiple-phase empirical evaluation.",
"title": ""
}
] | scidocsrr |
9b439b4dd326e5392be3351868cd1645 | Swing-up of the double pendulum on a cart by feedforward and feedback control with experimental validation | [
{
"docid": "d61ff7159a1559ec2c4be9450c1ad3b6",
"text": "This paper presents the control of an underactuated two-link robot called the Pendubot. We propose a controller for swinging the linkage and rise it to its uppermost unstable equilibrium position. The balancing control is based on an energy approach and the passivity properties of the system.",
"title": ""
}
] | [
{
"docid": "caa30379a2d0b8be2e1b4ddf6e6602c2",
"text": "Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular in embedded systems. Due to their complexity and huge design space to explore for such systems, CAD tools and frameworks to customize MPSoCs are mandatory. Some academic and industrial frameworks are available to support bus-based MPSoCs, but few works target NoCs as underlying communication architecture. A framework targeting MPSoC customization must provide abstract models to enable fast design space exploration, flexible application mapping strategies, all coupled to features to evaluate the performance of running applications. This paper proposes a framework to customize NoC-based MPSoCs with support to static and dynamic task mapping and C/SystemC simulation models for processors and memories. A simple, specifically designed microkernel executes in each processor, enabling multitasking at the processor level. Graphical tools enable debug and system verification, individualizing data for each task. Practical results highlight the benefit of using dynamic mapping strategies (total execution time reduction) and abstract models (total simulation time reduction without losing accuracy).",
"title": ""
},
{
"docid": "9244b687b0031e895cea1fcf5a0b11da",
"text": "Bacopa monnieri (L.) Wettst., a traditional Indian medicinal plant with high commercial potential, is used as a potent nervine tonic. A slow growth protocol was developed for medium-term conservation using mineral oil (MO) overlay. Nodal segments of B. monnieri (two genotypes; IC249250, IC468878) were conserved using MO for 24 months. Single node explants were implanted on MS medium supplemented with 0.2 mg l−1 BA and were covered with MO. Subculture duration could be significantly enhanced from 6 to 24 months, on the above medium. Normal plants regenerated from conserved cultures were successfully established in soil. On the basis of 20 random amplified polymorphic DNA and 5 inter-simple sequence repeat primers analyses and bacoside A content using HPLC, no significant reproducible variation was observed between the controls and in vitro-conserved plants. The results demonstrate the feasibility of using MO for medium-term conservation of B. monnieri germplasm without any adverse genetical and biochemical effects.",
"title": ""
},
{
"docid": "15205e074804764a6df0bdb7186c0d8c",
"text": "Lactose (milk sugar) is a fermentable substrate. It can be fermented outside of the body to produce cheeses, yoghurts and acidified milks. It can be fermented within the large intestine in those people who have insufficient expression of lactase enzyme on the intestinal mucosa to ferment this disaccharide to its absorbable, simple hexose sugars: glucose and galactose. In this way, the issues of lactose intolerance and of fermented foods are joined. It is only at the extremes of life, in infancy and old age, in which severe and life-threatening consequences from lactose maldigestion may occur. Fermentation as part of food processing can be used for preservation, for liberation of pre-digested nutrients, or to create ethanolic beverages. Almost all cultures and ethnic groups have developed some typical forms of fermented foods. Lessons from fermentation of non-dairy items may be applicable to fermentation of milk, and vice versa.",
"title": ""
},
{
"docid": "11d551da8299c7da76fbeb22b533c7f1",
"text": "The use of brushless permanent magnet DC drive motors in racing motorcycles is discussed in this paper. The application requirements are highlighted and the characteristics of the load demand and drive converter outlined. The possible topologies of the machine are investigated and a design for a internal permanent magnet is developed. This is a 6-pole machine with 18 stator slots and coils of one stator tooth pitch. The performance predictions are put forward and these are obtained from design software. Cooling is vital for these machines and this is briefly discussed.",
"title": ""
},
{
"docid": "5ba3baabc84d02f0039748a4626ace36",
"text": "BACKGROUND\nGreen tea (GT) extract may play a role in body weight regulation. Suggested mechanisms are decreased fat absorption and increased energy expenditure.\n\n\nOBJECTIVE\nWe examined whether GT supplementation for 12 wk has beneficial effects on weight control via a reduction in dietary lipid absorption as well as an increase in resting energy expenditure (REE).\n\n\nMETHODS\nSixty Caucasian men and women [BMI (in kg/m²): 18-25 or >25; age: 18-50 y] were included in a randomized placebo-controlled study in which fecal energy content (FEC), fecal fat content (FFC), resting energy expenditure, respiratory quotient (RQ), body composition, and physical activity were measured twice (baseline vs. week 12). For 12 wk, subjects consumed either GT (>0.56 g/d epigallocatechin gallate + 0.28-0.45 g/d caffeine) or placebo capsules. Before the measurements, subjects recorded energy intake for 4 consecutive days and collected feces for 3 consecutive days.\n\n\nRESULTS\nNo significant differences between groups and no significant changes over time were observed for the measured variables. Overall means ± SDs were 7.2 ± 3.8 g/d, 6.1 ± 1.2 MJ/d, 67.3 ± 14.3 kg, and 29.8 ± 8.6% for FFC, REE, body weight, and body fat percentage, respectively.\n\n\nCONCLUSION\nGT supplementation for 12 wk in 60 men and women did not have a significant effect on FEC, FFC, REE, RQ, and body composition.",
"title": ""
},
{
"docid": "ab3dd1f92c09e15ee05ab7f65f676afe",
"text": "We introduce a novel learning method for 3D pose estimation from color images. While acquiring annotations for color images is a difficult task, our approach circumvents this problem by learning a mapping from paired color and depth images captured with an RGB-D camera. We jointly learn the pose from synthetic depth images that are easy to generate, and learn to align these synthetic depth images with the real depth images. We show our approach for the task of 3D hand pose estimation and 3D object pose estimation, both from color images only. Our method achieves performances comparable to state-of-the-art methods on popular benchmark datasets, without requiring any annotations for the color images.",
"title": ""
},
{
"docid": "0c34e8355f1635b3679159abd0a82806",
"text": "Bar charts are an effective way to convey numeric information, but today's algorithms cannot parse them. Existing methods fail when faced with even minor variations in appearance. Here, we present DVQA, a dataset that tests many aspects of bar chart understanding in a question answering framework. Unlike visual question answering (VQA), DVQA requires processing words and answers that are unique to a particular bar chart. State-of-the-art VQA algorithms perform poorly on DVQA, and we propose two strong baselines that perform considerably better. Our work will enable algorithms to automatically extract numeric and semantic information from vast quantities of bar charts found in scientific publications, Internet articles, business reports, and many other areas.",
"title": ""
},
{
"docid": "769c1933f833cbe0c79422e3e15a6ff3",
"text": "The concept of presortedness and its use in sorting are studied. Natural ways to measure presortedness are given and some general properties necessary for a measure are proposed. A concept of a sorting algorithm optimal with respect to a measure of presortedness is defined, and examples of such algorithms are given. A new insertion sort algorithm is shown to be optimal with respect to three natural measures. The problem of finding an optimal algorithm for an arbitrary measure is studied, and partial results are proven.",
"title": ""
},
{
"docid": "f3a253dcae5127fcd4e62fd2508eef09",
"text": "ACC: allergic contact cheilitis Bronopol: 2-Bromo-2-nitropropane-1,3-diol MI: methylisothiazolinone MCI: methylchloroisothiazolinone INTRODUCTION Pediatric cheilitis can be a debilitating condition for the child and parents. Patch testing can help isolate allergens to avoid. Here we describe a 2-yearold boy with allergic contact cheilitis improving remarkably after prudent avoidance of contactants and food avoidance.",
"title": ""
},
{
"docid": "dc693ab2e8991630f62caf0f62eb0dc6",
"text": "The paper presents the power amplifier design. The introduction of a practical harmonic balance capability at the device measurement stage brings a number of advantages and challenges. Breaking down this traditional barrier means that the test-bench engineer needs to become more aware of the design process and requirements. The inverse is also true, as the measurement specifications for a harmonically tuned amplifier are a bit more complex than just the measurement of load-pull contours. We hope that the new level of integration between both will also result in better exchanges between both sides and go beyond showing either very accurate, highly tuned device models, or using the device model as the traditional scapegoat for unsuccessful PA designs. A nonlinear model and its quality can now be diagnosed through direct comparison of simulated and measured wave forms. The quality of a PA design can be verified by placing the device within the measurement system, practical harmonic balance emulator into the same impedance state in which it will operate in the actual realized design.",
"title": ""
},
{
"docid": "a161b0fe0b38381a96f02694fd84c3bf",
"text": "We have been developing human mimetic musculoskeletal humanoids from the view point of human-inspired design approach. Kengoro is our latest version of musculoskeletal humanoid designed to achieve physically interactive actions in real world. This study presents the design concept, body characteristics, and motion achievements of Kengoro. In the design process of Kengoro, we adopted the novel idea of multifunctional skeletal structures to achieve both humanoid performance and humanlike proportions. We adopted the sensor-driver integrated muscle modules for improved muscle control. In order to demonstrate the effectiveness of these body structures, we conducted several preliminary movements using Kengoro.",
"title": ""
},
{
"docid": "1c16fa259b56e3d64f2468fdf758693a",
"text": "Dysregulated expression of microRNAs (miRNAs) in various tissues has been associated with a variety of diseases, including cancers. Here we demonstrate that miRNAs are present in the serum and plasma of humans and other animals such as mice, rats, bovine fetuses, calves, and horses. The levels of miRNAs in serum are stable, reproducible, and consistent among individuals of the same species. Employing Solexa, we sequenced all serum miRNAs of healthy Chinese subjects and found over 100 and 91 serum miRNAs in male and female subjects, respectively. We also identified specific expression patterns of serum miRNAs for lung cancer, colorectal cancer, and diabetes, providing evidence that serum miRNAs contain fingerprints for various diseases. Two non-small cell lung cancer-specific serum miRNAs obtained by Solexa were further validated in an independent trial of 75 healthy donors and 152 cancer patients, using quantitative reverse transcription polymerase chain reaction assays. Through these analyses, we conclude that serum miRNAs can serve as potential biomarkers for the detection of various cancers and other diseases.",
"title": ""
},
{
"docid": "ccc70871f57f25da6141a7083bdf5174",
"text": "This paper outlines and tests two agency models of dividends. According to the “outcome” model, dividends are the result of effective pressure by minority shareholders to force corporate insiders to disgorge cash. According to the “substitute” model, insiders interested in issuing equity in the future choose to pay dividends to establish a reputation for decent treatment of minority shareholders. The first model predicts that stronger minority shareholder rights should be associated with higher dividend payouts; the second model predicts the opposite. Tests on a cross-section of 4,000 companies from 33 countries with different levels of minority shareholder rights support the outcome agency model of dividends. The authors are from Harvard University, Harvard University, Harvard University and University of Chicago, respectively. They are grateful to Alexander Aganin for excellent research assistance, and to Lucian Bebchuk, Mihir Desai, Edward Glaeser, Denis Gromb, Oliver Hart, James Hines, Kose John, James Poterba, Roberta Romano, Raghu Rajan, Lemma Senbet, René Stulz, Daniel Wolfenzohn, Luigi Zingales, and two anonymous referees for helpful comments. 2 The so-called dividend puzzle (Black 1976) has preoccupied the attention of financial economists at least since Modigliani and Miller’s (1958, 1961) seminal work. This work established that, in a frictionless world, when the investment policy of a firm is held constant, its dividend payout policy has no consequences for shareholder wealth. Higher dividend payouts lead to lower retained earnings and capital gains, and vice versa, leaving total wealth of the shareholders unchanged. Contrary to this prediction, however, corporations follow extremely deliberate dividend payout strategies (Lintner (1956)). This evidence raises a puzzle: how do firms choose their dividend policies? 
In the United States and other countries, the puzzle is even deeper since many shareholders are taxed more heavily on their dividend receipts than on capital gains. The actual magnitude of this tax burden is debated (see Poterba and Summers (1985) and Allen and Michaely (1997)), but taxes generally make it even harder to explain dividend policies of firms. Economists have proposed a number of explanations of the dividend puzzle. Of these, particularly popular is the idea that firms can signal future profitability by paying dividends (Bhattacharya (1979), John and Williams (1985), Miller and Rock (1985), Ambarish, John, and Williams (1987)). Empirically, this theory had considerable initial success, since firms that initiate (or raise) dividends experience share price increases, and the converse is true for firms that eliminate (or cut) dividends (Aharony and Swary (1980), Asquith and Mullins (1983)). Recent results are more mixed, since current dividend changes do not help predict firms’ future earnings growth (DeAngelo, DeAngelo, and Skinner (1996) and Benartzi, Michaely, and Thaler (1997)). Another idea, which has received only limited attention until recently (e.g., Easterbrook (1984), Jensen (1986), Fluck (1998a, 1998b), Myers (1998), Gomes (1998), Zwiebel (1996)), is 3 that dividend policies address agency problems between corporate insiders and outside shareholders. According to these theories, unless profits are paid out to shareholders, they may be diverted by the insiders for personal use or committed to unprofitable projects that provide private benefits for the insiders. As a consequence, outside shareholders have a preference for dividends over retained earnings. Theories differ on how outside shareholders actually get firms to disgorge cash. The key point, however, is that failure to disgorge cash leads to its diversion or waste, which is detrimental to outside shareholders’ interest. 
The agency approach moves away from the assumptions of the Modigliani-Miller theorem by recognizing two points. First, the investment policy of the firm cannot be taken as independent of its dividend policy, and, in particular, paying out dividends may reduce the inefficiency of marginal investments. Second, and more subtly, the allocation of all the profits of the firm to shareholders on a pro-rata basis cannot be taken for granted, and in particular the insiders may get preferential treatment through asset diversion, transfer prices and theft, even holding the investment policy constant. In so far as dividends are paid on a pro-rata basis, they benefit outside shareholders relative to the alternative of expropriation of retained earnings. In this paper, we attempt to identify some of the basic elements of the agency approach to dividends, to understand its key implications, and to evaluate them on a cross-section of over 4,000 firms from 33 countries around the world. The reason for looking around the world is that the severity of agency problems to which minority shareholders are exposed differs greatly across countries, in part because legal protection of these shareholders vary (La Porta et al. (1997, 1998)). Empirically, we find that dividend policies vary across legal regimes in ways consistent with a particular version of the agency theory of dividends. Specifically, firms in common law 4 countries, where investor protection is typically better, make higher dividend payouts than firms in civil law countries do. Moreover, in common but not civil law countries, high growth firms make lower dividend payouts than low growth firms. These results support the version of the agency theory in which investors in good legal protection countries use their legal powers to extract dividends from firms, especially when reinvestment opportunities are poor. Section I of the paper summarizes some of the theoretical arguments. Section II describes the data. 
Section III presents our empirical findings. Section IV concludes. I. Theoretical Issues. A. Agency Problems and Legal Regimes Conflicts of interest between corporate insiders, such as managers and controlling shareholders, on the one hand, and outside investors, such as minority shareholders, on the other hand, are central to the analysis of the modern corporation (Berle and Means (1932), Jensen and Meckling (1976)). The insiders who control corporate assets can use these assets for a range of purposes that are detrimental to the interests of the outside investors. Most simply, they can divert corporate assets to themselves, through outright theft, dilution of outside investors through share issues to the insiders, excessive salaries, asset sales to themselves or other corporations they control at favorable prices, or transfer pricing with other entities they control (see Shleifer and Vishny (1997) for a discussion). Alternatively, insiders can use corporate assets to pursue investment strategies that yield them personal benefits of control, such as growth or diversification, without benefitting outside investors (e.g., Baumol (1959), Jensen (1986)). What is meant by insiders varies from country to country. In the United States, U.K., 5 Canada, and Australia, where ownership in large corporations is relatively dispersed, most large corporations are to a significant extent controlled by their managers. In most other countries, large firms typically have shareholders that own a significant fraction of equity, such as the founding families (La Porta, Lopez-de-Silanes, and Shleifer (1999)). The controlling shareholders can effectively determine the decisions of the managers (indeed, managers typically come from the controlling family), and hence the problem of managerial control per se is not as severe as it is in the rich common law countries. 
On the other hand, the controlling shareholders can implement policies that benefit themselves at the expense of minority shareholders. Regardless of the identity of the insiders, the victims of insider control are minority shareholders. It is these minority shareholders that would typically have a taste for dividends. One of the principal remedies to agency problems is the law. Corporate and other law gives outside investors, including shareholders, certain powers to protect their investment against expropriation by insiders. These powers in the case of shareholders range from the right to receive the same per share dividends as the insiders, to the right to vote on important corporate matters, including the election of directors, to the right to sue the company for damages. The very fact that this legal protection exists probably explains why becoming a minority shareholder is a viable investment strategy, as opposed to just being an outright giveaway of money to strangers who are under few if any obligations to give it back. As pointed out by La Porta et al. (1998), the extent of legal protection of outside investors differs enormously across countries. Legal protection consists of both the content of the laws and the quality of their enforcement. Some countries, including most notably the wealthy common law countries such as the U.S. and the U.K., provide effective protection of minority shareholders 6 so that the outright expropriation of corporate assets by the insiders is rare. Agency problems manifest themselves primarily through non-value-maximizing investment choices. In many other countries, the condition of outside investors is a good deal more precarious, but even there some protection does exist. La Porta et al. (1998) show in particular that common law countries appear to have the best legal protection of minority shareholders, whereas civil law countries, and most conspicuously the French civil law countries, have the weakest protection. 
The quality of investor protection, viewed as a proxy for lower agency costs, has been shown to matter for a number of important issues in corporate finance. For example, corporate ownership is more concentrated in countries with inferior shareholder protection (La Porta et al. (1998), La Porta, Lopez-de-Silanes, and Shleifer (1999)). The valuation and breadth of cap",
"title": ""
},
{
"docid": "f4bd8831ff5bf3372b2ab11d7c53a64b",
"text": "The demonstration that dopamine loss is the key pathological feature of Parkinson's disease (PD), and the subsequent introduction of levodopa have revolutionalized the field of PD therapeutics. This review will discuss the significant progress that has been made in the development of new pharmacological and surgical tools to treat PD motor symptoms since this major breakthrough in the 1960s. However, we will also highlight some of the challenges the field of PD therapeutics has been struggling with during the past decades. The lack of neuroprotective therapies and the limited treatment strategies for the nonmotor symptoms of the disease (ie, cognitive impairments, autonomic dysfunctions, psychiatric disorders, etc.) are among the most pressing issues to be addressed in the years to come. It appears that the combination of early PD nonmotor symptoms with imaging of the nigrostriatal dopaminergic system offers a promising path toward the identification of PD biomarkers, which, once characterized, will set the stage for efficient use of neuroprotective agents that could slow down and alter the course of the disease.",
"title": ""
},
{
"docid": "f5f1300baf7ed92626c912b98b6308c9",
"text": "The constant increase in global energy demand, together with the awareness of the finite supply of fossil fuels, has brought about an imperious need to take advantage of renewable energy sources. At the same time, concern over CO(2) emissions and future rises in the cost of gasoline has boosted technological efforts to make hybrid and electric vehicles available to the general public. Energy storage is a vital issue to be addressed within this scenario, and batteries are certainly a key player. In this tutorial review, the most recent and significant scientific advances in the field of rechargeable batteries, whose performance is dependent on their underlying chemistry, are covered. In view of its utmost current significance and future prospects, special emphasis is given to progress in lithium-based technologies.",
"title": ""
},
{
"docid": "4f58172c8101b67b9cd544b25d09f2e2",
"text": "For years, researchers in face recognition area have been representing and recognizing faces based on subspace discriminant analysis or statistical learning. Nevertheless, these approaches are always suffering from the generalizability problem. This paper proposes a novel non-statistics based face representation approach, local Gabor binary pattern histogram sequence (LGBPHS), in which training procedure is unnecessary to construct the face model, so that the generalizability problem is naturally avoided. In this approach, a face image is modeled as a \"histogram sequence\" by concatenating the histograms of all the local regions of all the local Gabor magnitude binary pattern maps. For recognition, histogram intersection is used to measure the similarity of different LGBPHSs and the nearest neighborhood is exploited for final classification. Additionally, we have further proposed to assign different weights for each histogram piece when measuring two LGBPHSes. Our experimental results on AR and FERET face database show the validity of the proposed approach especially for partially occluded face images, and more impressively, we have achieved the best result on FERET face database.",
"title": ""
},
{
"docid": "48019a3106c6d74e4cfcc5ac596d4617",
"text": "Despite a variety of new communication technologies, loneliness is prevalent in Western countries. Boosting emotional communication through intimate connections has the potential to reduce loneliness. New technologies might exploit biosignals as intimate emotional cues because of their strong relationship to emotions. Through two studies, we investigate the possibilities of heartbeat communication as an intimate cue. In the first study (N = 32), we demonstrate, using self-report and behavioral tracking in an immersive virtual environment, that heartbeat perception influences social behavior in a similar manner as traditional intimate signals such as gaze and interpersonal distance. In the second study (N = 34), we demonstrate that a sound of the heartbeat is not sufficient to cause the effect; the stimulus must be attributed to the conversational partner in order to have influence. Together, these results show that heartbeat communication is a promising way to increase intimacy. Implications and possibilities for applications are discussed.",
"title": ""
},
{
"docid": "91ed0637e0533801be8b03d5ad21d586",
"text": "With the rapid development of modern wireless communication systems, the desirable miniaturization, multifunctionality strong harmonic suppression, and enhanced bandwidth of the rat-race coupler has generated much interest and continues to be a focus of research. Whether the current rat-race coupler is sufficient to adapt to the future development of microwave systems has become a heated topic.",
"title": ""
},
{
"docid": "9a12ec03e4521a33a7e76c0c538b6b43",
"text": "Sparse representation of information provides a powerful means to perform feature extraction on high-dimensional data and is of broad interest for applications in signal processing, computer vision, object recognition and neurobiology. Sparse coding is also believed to be a key mechanism by which biological neural systems can efficiently process a large amount of complex sensory data while consuming very little power. Here, we report the experimental implementation of sparse coding algorithms in a bio-inspired approach using a 32 × 32 crossbar array of analog memristors. This network enables efficient implementation of pattern matching and lateral neuron inhibition and allows input data to be sparsely encoded using neuron activities and stored dictionary elements. Different dictionary sets can be trained and stored in the same system, depending on the nature of the input signals. Using the sparse coding algorithm, we also perform natural image processing based on a learned dictionary.",
"title": ""
},
{
"docid": "c72dc472d12c9c822ae240bec5d57c37",
"text": "The cognitive processes in a widely used, nonverbal test of analytic intelligence, the Raven Progressive Matrices Test (Raven, 1962), are analyzed in terms of which processes distinguish between higher scoring and lower scoring subjects and which processes are common to all subjects and all items on the test. The analysis is based on detailed performance characteristics, such as verbal protocols, eye-fixation patterns, and errors. The theory is expressed as a pair of computer simulation models that perform like the median or best college students in the sample. The processing characteristic common to all subjects is an incremental, reiterative strategy for encoding and inducing the regularities in each problem. The processes that distinguish among individuals are primarily the ability to induce abstract relations and the ability to dynamically manage a large set of problem-solving goals in working memory.",
"title": ""
}
] | scidocsrr |
05c93893f503dc646716fb23d52ebad1 | 3D Printing Your Wireless Coverage | [
{
"docid": "1f39815e008e895632403bbe9456acad",
"text": "Information on site-specific spectrum characteristics is essential to evaluate and improve the performance of wireless networks. However, it is usually very costly to obtain accurate spectrum-condition information in heterogeneous wireless environments. This paper presents a novel spectrum-survey system, called Sybot (Spectrum survey robot), that guides network engineers to efficiently monitor the spectrum condition (e.g., RSS) of WiFi networks. Sybot effectively controls mobility and employs three disparate monitoring techniques - complete, selective, and diagnostic - that help produce and maintain an accurate spectrum-condition map for challenging indoor WiFi networks. By adaptively triggering the most suitable of the three techniques, Sybot captures spatio-temporal changes in spectrum condition. Moreover, based on the monitoring results, Sybot automatically determines several key survey parameters, such as site-specific measurement time and space granularities. Sybot has been prototyped with a commodity IEEE 802.11 router and Linux OS, and experimentally evaluated, demonstrating its ability to generate accurate spectrum-condition maps while reducing the measurement effort (space, time) by more than 56%.",
"title": ""
},
{
"docid": "080dbf49eca85711f26d4e0d8386937a",
"text": "In this work, we investigate the use of directional antennas and beam steering techniques to improve performance of 802.11 links in the context of communication between amoving vehicle and roadside APs. To this end, we develop a framework called MobiSteer that provides practical approaches to perform beam steering. MobiSteer can operate in two modes - cached mode - where it uses prior radiosurvey data collected during \"idle\" drives, and online mode, where it uses probing. The goal is to select the best AP and beam combination at each point along the drive given the available information, so that the throughput can be maximized. For the cached mode, an optimal algorithm for AP and beam selection is developed that factors in all overheads.\n We provide extensive experimental results using a commercially available eight element phased-array antenna. In the experiments, we use controlled scenarios with our own APs, in two different multipath environments, as well as in situ scenarios, where we use APs already deployed in an urban region - to demonstrate the performance advantage of using MobiSteer over using an equivalent omni-directional antenna. We show that MobiSteer improves the connectivity duration as well as PHY-layer data rate due to better SNR provisioning. In particular, MobiSteer improves the throughput in the controlled experiments by a factor of 2 - 4. In in situ experiments, it improves the connectivity duration by more than a factor of 2 and average SNR by about 15 dB.",
"title": ""
}
] | [
{
"docid": "ff56bae298b25accf6cd8c2710160bad",
"text": "An important difference between traditional AI systems and human intelligence is the human ability to harness commonsense knowledge gleaned from a lifetime of learning and experience to make informed decisions. This allows humans to adapt easily to novel situations where AI fails catastrophically due to a lack of situation-specific rules and generalization capabilities. Commonsense knowledge also provides background information that enables humans to successfully operate in social situations where such knowledge is typically assumed. Since commonsense consists of information that humans take for granted, gathering it is an extremely difficult task. Previous versions of SenticNet were focused on collecting this kind of knowledge for sentiment analysis but they were heavily limited by their inability to generalize. SenticNet 4 overcomes such limitations by leveraging on conceptual primitives automatically generated by means of hierarchical clustering and dimensionality reduction.",
"title": ""
},
{
"docid": "b1d61ca503702f950ef1275b904850e7",
"text": "Prior research has demonstrated a clear relationship between experiences of racial microaggressions and various indicators of psychological unwellness. One concern with these findings is that the role of negative affectivity, considered a marker of neuroticism, has not been considered. Negative affectivity has previously been correlated to experiences of racial discrimination and psychological unwellness and has been suggested as a cause of the observed relationship between microaggressions and psychopathology. We examined the relationships between self-reported frequency of experiences of microaggressions and several mental health outcomes (i.e., anxiety [Beck Anxiety Inventory], stress [General Ethnic and Discrimination Scale], and trauma symptoms [Trauma Symptoms of Discrimination Scale]) in 177 African American and European American college students, controlling for negative affectivity (the Positive and Negative Affect Schedule) and gender. Results indicated that African Americans experience more racial discrimination than European Americans. Negative affectivity in African Americans appears to be significantly related to some but not all perceptions of the experience of discrimination. A strong relationship between racial mistreatment and symptoms of psychopathology was evident, even after controlling for negative affectivity. In summary, African Americans experience clinically measurable anxiety, stress, and trauma symptoms as a result of racial mistreatment, which cannot be wholly explained by individual differences in negative affectivity. Future work should examine additional factors in these relationships, and targeted interventions should be developed to help those suffering as a result of racial mistreatment and to reduce microaggressions.",
"title": ""
},
{
"docid": "9746a126b884fe5e542ebb31f814c281",
"text": "LLC resonant DC/DC converters are becoming popular in computing applications, such as telecom, server systems. For these applications, it is required to meet the EMI standard. In this paper, novel EMI noise transferring path and EMI model for LLC resonant DC/DC converters are proposed. DM and CM noise of LLC resonant converter are analyzed. Several EMI noise reduction approaches are proposed. Shield layers are applied to reduce CM noise. By properly choosing the ground point of shield layer, significant noise reduction can be obtained. With extra EMI balance capacitor, CM noise can be reduced further. Two channel interleaving LLC resonant converters are proposed to cancel the CM current. Conceptually, when two channels operate with 180 degree phase shift, CM current can be canceled. Therefore, the significant EMI noise reduction can be achieved.",
"title": ""
},
{
"docid": "7d1a7bc7809a578cd317dfb8ba5b7678",
"text": "In this paper, we introduce a new technology, which allows people to share taste and smell sensations digitally with a remote person through existing networking technologies such as the Internet. By introducing this technology, we expect people to share their smell and taste experiences with their family and friends remotely. Sharing these senses are immensely beneficial since those are strongly associated with individual memories, emotions, and everyday experiences. As the initial step, we developed a control system, an actuator, which could digitally stimulate the sense of taste remotely. The system uses two approaches to stimulate taste sensations digitally: the electrical and thermal stimulations on tongue. Primary results suggested that sourness and saltiness are the main sensations that could be evoked through this device. Furthermore, this paper focuses on future aspects of such technology for remote smell actuation followed by applications and possibilities for further developments.",
"title": ""
},
{
"docid": "a79424d0ec38c2355b288364f45f90de",
"text": "This paper mainly deals with various classification algorithms namely, Bayes. NaiveBayes, Bayes. BayesNet, Bayes. NaiveBayesUpdatable, J48, Randomforest, and Multi Layer Perceptron. It analyzes the hepatitis patients from the UC Irvine machine learning repository. The results of the classification model are accuracy and time. Finally, it concludes that the Naive Bayes performance is better than other classification techniques for hepatitis patients.",
"title": ""
},
{
"docid": "a04e2df0d6ca5eae1db6569b43b897bd",
"text": "Workflow technologies have become a major vehicle for easy and efficient development of scientific applications. In the meantime, state-of-the-art resource provisioning technologies such as cloud computing enable users to acquire computing resources dynamically and elastically. A critical challenge in integrating workflow technologies with resource provisioning technologies is to determine the right amount of resources required for the execution of workflows in order to minimize the financial cost from the perspective of users and to maximize the resource utilization from the perspective of resource providers. This paper suggests an architecture for the automatic execution of large-scale workflow-based applications on dynamically and elastically provisioned computing resources. Especially, we focus on its core algorithm named PBTS (Partitioned Balanced Time Scheduling), which estimates the minimum number of computing hosts required to execute a workflow within a user-specified finish time. The PBTS algorithm is designed to fit both elastic resource provisioning models such as Amazon EC2 and malleable parallel application models such as MapReduce. The experimental results with a number of synthetic workflows and several real science workflows demonstrate that PBTS estimates the resource capacity close to the theoretical low bound. © 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "a5e01cfeb798d091dd3f2af1a738885b",
"text": "It is shown by an extensive benchmark on molecular energy data that the mathematical form of the damping function in DFT-D methods has only a minor impact on the quality of the results. For 12 different functionals, a standard \"zero-damping\" formula and rational damping to finite values for small interatomic distances according to Becke and Johnson (BJ-damping) has been tested. The same (DFT-D3) scheme for the computation of the dispersion coefficients is used. The BJ-damping requires one fit parameter more for each functional (three instead of two) but has the advantage of avoiding repulsive interatomic forces at shorter distances. With BJ-damping better results for nonbonded distances and more clear effects of intramolecular dispersion in four representative molecular structures are found. For the noncovalently-bonded structures in the S22 set, both schemes lead to very similar intermolecular distances. For noncovalent interaction energies BJ-damping performs slightly better but both variants can be recommended in general. The exception to this is Hartree-Fock that can be recommended only in the BJ-variant and which is then close to the accuracy of corrected GGAs for non-covalent interactions. According to the thermodynamic benchmarks BJ-damping is more accurate especially for medium-range electron correlation problems and only small and practically insignificant double-counting effects are observed. It seems to provide a physically correct short-range behavior of correlation/dispersion even with unmodified standard functionals. In any case, the differences between the two methods are much smaller than the overall dispersion effect and often also smaller than the influence of the underlying density functional.",
"title": ""
},
{
"docid": "758978c4b8f3bdd0a57fe9865892fbc3",
"text": "The foundation of a process model lies in its structural specifications. Using a generic process modeling language for workflows, we show how a structural specification may contain deadlock and lack of synchronization conflicts that could compromise the correct execution of workflows. In general, identification of such conflicts is a computationally complex problem and requires development of effective algorithms specific for the target modeling language. We present a visual verification approach and algorithm that employs a set of graph reduction rules to identify structural conflicts in process models for the given workflow modeling language. We also provide insights into the correctness and complexity of the reduction process. Finally, we show how the reduction algorithm may be used to count possible instance subgraphs of a correct process model. The main contribution of the paper is a new technique for satisfying well-defined correctness criteria in process models.",
"title": ""
},
{
"docid": "12a5fb7867cddaca43c3508b0c1a1ed2",
"text": "The class scheduling problem can be modeled by a graph where the vertices and edges represent the courses and the common students, respectively. The problem is to assign the courses a given number of time slots (colors), where each time slot can be used for a given number of class rooms. The Vertex Coloring (VC) algorithm is a polynomial time algorithm which produces a conflict free solution using the least number of colors [9]. However, the VC solution may not be implementable because it uses a number of time slots that exceed the available ones with unbalanced use of class rooms. We propose a heuristic approach VC* to (1) promote uniform distribution of courses over the colors and to (2) balance course load for each time slot over the available class rooms. The performance function represents the percentage of students in all courses that could not be mapped to time slots or to class rooms. A randomized simulation of registration of four departments with up to 1200 students is used to evaluate the performance of proposed heuristic.",
"title": ""
},
{
"docid": "746f77aad26e3e3492ef021ac0d7da6a",
"text": "The proliferation of mobile computing and smartphone technologies has resulted in an increasing number and range of services from myriad service providers. These mobile service providers support numerous emerging services with differing quality metrics but similar functionality. Facilitating an automated service workflow requires fast selection and composition of services from the services pool. The mobile environment is ambient and dynamic in nature, requiring more efficient techniques to deliver the required service composition promptly to users. Selecting the optimum required services in a minimal time from the numerous sets of dynamic services is a challenge. This work addresses the challenge as an optimization problem. An algorithm is developed by combining particle swarm optimization and k-means clustering. It runs in parallel using MapReduce in the Hadoop platform. By using parallel processing, the optimum service composition is obtained in significantly less time than alternative algorithms. This is essential for handling large amounts of heterogeneous data and services from various sources in the mobile environment. The suitability of this proposed approach for big data-driven service composition is validated through modeling and simulation.",
"title": ""
},
{
"docid": "7ebbb9ebc94c72997895b4141de6f67a",
"text": "Purpose – The purpose of this paper is to highlight the potential role that the so-called “toxic triangle” (Padilla et al., 2007) can play in undermining the processes around effectiveness. It is the interaction between leaders, organisational members, and the environmental context in which those interactions occur that has the potential to generate dysfunctional behaviours and processes. The paper seeks to set out a set of issues that would seem to be worthy of further consideration within the Journal and which deal with the relationships between organisational effectiveness and the threats from insiders. Design/methodology/approach – The paper adopts a systems approach to the threats from insiders and the manner in which it impacts on organisation effectiveness. The ultimate goal of the paper is to stimulate further debate and discussion around the issues. Findings – The paper adds to the discussions around effectiveness by highlighting how senior managers can create the conditions in which failure can occur through the erosion of controls, poor decision making, and the creation of a culture that has the potential to generate failure. Within this setting, insiders can serve to trigger a series of failures by their actions and for which the controls in place are either ineffective or have been by-passed as a result of insider knowledge. Research limitations/implications – The issues raised in this paper need to be tested empirically as a means of providing a clear evidence base in support of their relationships with the generation of organisational ineffectiveness. Practical implications – The paper aims to raise awareness and stimulate thinking by practising managers around the role that the “toxic triangle” of issues can play in creating the conditions by which organisations can incubate the potential for crisis. Originality/value – The paper seeks to bring together a disparate body of published work within the context of “organisational effectiveness” and sets out a series of dark characteristics that organisations need to consider if they are to avoid failure. The paper argues the case that effectiveness can be a fragile construct and that the mechanisms that generate failure also need to be actively considered when discussing what effectiveness means in practice.",
"title": ""
},
{
"docid": "e36bc2b20c8fb5ba6d03672f7896a92c",
"text": "We study the adaptation of convolutional neural networks to the complex temporal radio signal domain. We compare the efficacy of radio modulation classification using naively learned features against using expert features, which are currently used widely and well regarded in the field and we show significant performance improvements. We show that blind temporal learning on large and densely encoded time series using deep convolutional neural networks is viable and a strong candidate approach for this task.",
"title": ""
},
{
"docid": "04ef2056dd9490820fd4309c906840aa",
"text": "A millimeter-wave filtering monopulse antenna array based on substrate integrated waveguide (SIW) technology is proposed, manufactured, and tested in this communication. The proposed antenna array consists of a filter, a monopulse comparator, a feed network, and four antennas. A square dual-mode SIW cavity is designed to realize the monopulse comparator, in which internal coupling slots are located at its diagonal lines for the purpose of meeting the internal coupling coefficiencies in both sum and difference channels. Then, a four-output filter including the monopulse comparator is synthesized efficiently by modifying the coupling matrix of a single-ended filter. Finally, each SIW resonator coupled with those four outputs of the filter is replaced by a cavity-backed slot antenna so as to form the proposed filtering antenna array. A prototype is demonstrated at Ka band with a center frequency of 29.25 GHz and fractional bandwidth of 1.2%. Our measurement shows that, for the H-plane, the sidelobe levels of the sum pattern are less than -15 dB and the null depths of the difference pattern are less than -28 dB. The maximum measured gain of the sum beam at the center operating frequency is 8.1 dBi.",
"title": ""
},
{
"docid": "8aca118a1171c2c3fd7057468adc84b2",
"text": "Automatically constructing a complete documentary or educational film from scattered pieces of images and knowledge is a significant challenge. Even when this information is provided in an annotated format, the problems of ordering, structuring and animating sequences of images, and producing natural language descriptions that correspond to those images within multiple constraints, are each individually difficult tasks. This paper describes an approach for tackling these problems through a combination of rhetorical structures with narrative and film theory to produce movie-like visual animations from still images along with natural language generation techniques needed to produce text descriptions of what is being seen in the animations. The use of rhetorical structures from NLG is used to integrate separate components for video creation and script generation. We further describe an implementation, named GLAMOUR, that produces actual, short video documentaries, focusing on a cultural heritage domain, and that have been evaluated by professional filmmakers. 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "0048b244bd55a724f9bcf4dbf5e551a8",
"text": "In the research reported here, we investigated the debiasing effect of mindfulness meditation on the sunk-cost bias. We conducted four studies (one correlational and three experimental); the results suggest that increased mindfulness reduces the tendency to allow unrecoverable prior costs to influence current decisions. Study 1 served as an initial correlational demonstration of the positive relationship between trait mindfulness and resistance to the sunk-cost bias. Studies 2a and 2b were laboratory experiments examining the effect of a mindfulness-meditation induction on increased resistance to the sunk-cost bias. In Study 3, we examined the mediating mechanisms of temporal focus and negative affect, and we found that the sunk-cost bias was attenuated by drawing one's temporal focus away from the future and past and by reducing state negative affect, both of which were accomplished through mindfulness meditation.",
"title": ""
},
{
"docid": "d22e8f2029e114b0c648a2cdfba4978a",
"text": "This paper considers innovative marketing within the context of a micro firm, exploring how such firm’s marketing practices can take advantage of digital media. Factors that influence a micro firm’s innovative activities are examined and the development and implementation of digital media in the firm’s marketing practice is explored. Despite the significance of marketing and innovation to SMEs, a lack of literature and theory on innovation in marketing theory exists. Research suggests that small firms’ marketing practitioners and entrepreneurs have identified their marketing focus on the 4Is. This paper builds on knowledge in innovation and marketing and examines the process in a micro firm. A qualitative approach is applied using action research and case study approach. The relevant literature is reviewed as the starting point to diagnose problems and issues anticipated by business practitioners. A longitudinal study is used to illustrate the process of actions taken with evaluations and reflections presented. The exploration illustrates that in practice much of the marketing activities within micro firms are driven by incremental innovation. This research emphasises that integrating Information Communication Technologies (ICTs) successfully in marketing requires marketers to take an active managerial role far beyond their traditional areas of competence and authority.",
"title": ""
},
{
"docid": "8a16fe77b90f86adcdaf87f873b59d44",
"text": "As computational learning agents move into domains that incur real costs (e.g., autonomous driving or financial investment), it will be necessary to learn good policies without numerous high-cost learning trials. One promising approach to reducing sample complexity of learning a task is knowledge transfer from humans to agents. Ideally, methods of transfer should be accessible to anyone with task knowledge, regardless of that person's expertise in programming and AI. This paper focuses on allowing a human trainer to interactively shape an agent's policy via reinforcement signals. Specifically, the paper introduces \"Training an Agent Manually via Evaluative Reinforcement,\" or TAMER, a framework that enables such shaping. Differing from previous approaches to interactive shaping, a TAMER agent models the human's reinforcement and exploits its model by choosing actions expected to be most highly reinforced. Results from two domains demonstrate that lay users can train TAMER agents without defining an environmental reward function (as in an MDP) and indicate that human training within the TAMER framework can reduce sample complexity over autonomous learning algorithms.",
"title": ""
},
{
"docid": "195f4ab1fe7950d011a9fd01a567128b",
"text": "To bridge the gap between humans and machines in image understanding and describing, we need further insight into how people describe a perceived scene. In this paper, we study the agreement between bottom-up saliency-based visual attention and object referrals in scene description constructs. We investigate the properties of human-written descriptions and machine-generated ones. We then propose a saliency-boosted image captioning model in order to investigate benefits from low-level cues in language models. We learn that (1) humans mention more salient objects earlier than less salient ones in their descriptions, (2) the better a captioning model performs, the better attention agreement it has with human descriptions, (3) the proposed saliencyboosted model, compared to its baseline form, does not improve significantly on the MS COCO database, indicating explicit bottom-up boosting does not help when the task is well learnt and tuned on a data, (4) a better generalization ability is, however, observed for the saliency-boosted model on unseen data.",
"title": ""
},
{
"docid": "95063d2a5b2df6c13c89ecfdceeb6c06",
"text": "This paper proposes a novel reference signal generation method for the unified power quality conditioner (UPQC) adopted to compensate current and voltage-quality problems of sensitive loads. The UPQC consists of a shunt and series converter having a common dc link. The shunt converter eliminates current harmonics originating from the nonlinear load side and the series converter mitigates voltage sag/swell originating from the supply side. The developed controllers for shunt and series converters are based on an enhanced phase-locked loop and nonlinear adaptive filter. The dc link control strategy is based on the fuzzy-logic controller. A fast sag/swell detection method is also presented. The efficacy of the proposed system is tested through simulation studies using the Power System Computer Aided Design/Electromagnetic Transients dc analysis program. The proposed UPQC achieves superior capability of mitigating the effects of voltage sag/swell and suppressing the load current harmonics under distorted supply conditions.",
"title": ""
}
] | scidocsrr |
2ca43e0cfb47fbd2b5f480a29feeab7a | Diet eyeglasses: Recognising food chewing using EMG and smart eyeglasses | [
{
"docid": "634ded02136fef04ec8c64a819522e7b",
"text": "Maintaining appropriate levels of food intake and developing regularity in eating habits is crucial to weight loss and the preservation of a healthy lifestyle. Moreover, maintaining awareness of one's own eating habits is an important step towards portion control and ultimately, weight loss. Though many solutions have been proposed in the area of physical activity monitoring, few works attempt to monitor an individual's food intake by means of a noninvasive, wearable platform. In this paper, we introduce a novel nutrition-intake monitoring system based around a wearable, mobile, wireless-enabled necklace featuring an embedded piezoelectric sensor. We also propose a framework capable of estimating volume of meals, identifying long-term trends in eating habits, and providing classification between solid foods and liquids with an F-Measure of 85% and 86% respectively. The data is presented to the user in the form of a mobile application.",
"title": ""
}
] | [
{
"docid": "ae59ef9772ea8f8277a2d91030bd6050",
"text": "Modelling and exploiting teammates’ policies in cooperative multi-agent systems have long been an interest and also a big challenge for the reinforcement learning (RL) community. The interest lies in the fact that if the agent knows the teammates’ policies, it can adjust its own policy accordingly to arrive at proper cooperations; while the challenge is that the agents’ policies are changing continuously due to they are learning concurrently, which imposes difficulty to model the dynamic policies of teammates accurately. In this paper, we present ATTention Multi-Agent Deep Deterministic Policy Gradient (ATT-MADDPG) to address this challenge. ATT-MADDPG extends DDPG, a single-agent actor-critic RL method, with two special designs. First, in order to model the teammates’ policies, the agent should get access to the observations and actions of teammates. ATT-MADDPG adopts a centralized critic to collect such information. Second, to model the teammates’ policies using the collected information in an effective way, ATT-MADDPG enhances the centralized critic with an attention mechanism. This attention mechanism introduces a special structure to explicitly model the dynamic joint policy of teammates, making sure that the collected information can be processed efficiently. We evaluate ATT-MADDPG on both benchmark tasks and the real-world packet routing tasks. Experimental results show that it not only outperforms the state-of-the-art RL-based methods and rule-based methods by a large margin, but also achieves better performance in terms of scalability and robustness.",
"title": ""
},
{
"docid": "bc5a3cd619be11132ea39907f732bf4c",
"text": "A burgeoning interest in the intersection of neuroscience and architecture promises to offer biologically inspired insights into the design of spaces. The goal of such interdisciplinary approaches to architecture is to motivate construction of environments that would contribute to peoples' flourishing in behavior, health, and well-being. We suggest that this nascent field of neuroarchitecture is at a pivotal point in which neuroscience and architecture are poised to extend to a neuroscience of architecture. In such a research program, architectural experiences themselves are the target of neuroscientific inquiry. Here, we draw lessons from recent developments in neuroaesthetics to suggest how neuroarchitecture might mature into an experimental science. We review the extant literature and offer an initial framework from which to contextualize such research. Finally, we outline theoretical and technical challenges that lie ahead.",
"title": ""
},
{
"docid": "983cae67894ae61b2301dc79713969c0",
"text": "Although there is no analytical framework for assessing the organizational benefits of ERP systems, several researchers have indicated that the balanced scorecard (BSC) approach may be an appropriate technique for evaluating the performance of ERP systems. This paper fills this gap in the literature by providing a balanced-scorecard based framework for valuing the strategic contributions of an ERP system. Using a successful SAP implementation by a major international aircraft engine manufacturing and service organization as a case study, this paper illustrates that an ERP system does indeed impacts the business objectives of the firm and derives a new innovative ERP framework for valuing the strategic impacts of ERP systems. The ERP valuation framework, called here an ERP scorecard, integrates the four Kaplan and Norton’s balanced scorecard dimensions with Zuboff’s automate, informate and transformate goals of information systems to provide a practical approach for measuring the contributions and impacts of ERP systems on the strategic goals of the company. # 2005 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "14dec918e2b6b4678c38f533e0f1c9c1",
"text": "A method is presented to assess stability changes in waves in early-stage ship design. The method is practical: the calculations can be completed quickly and can be applied as soon as lines are available. The intended use of the described method is for preliminary analysis. If stability changes that result in large roll motion are indicated early in the design process, this permits planning and budgeting for direct assessments using numerical simulations and/or model experiments. The main use of the proposed method is for the justification for hull form shape modification or for necessary additional analysis to better quantify potentially increased stability risk. The method is based on the evaluation of changing stability in irregular seas and can be applied to any type of ship. To demonstrate the robustness of the method, results for ten naval ship types are presented and discussed. The proposed method is shown to identify ships with known risk for large stability changes in waves.",
"title": ""
},
{
"docid": "fe16f2d946b3ea7bc1169d5667365dbe",
"text": "This study assessed embodied simulation via electromyography (EMG) as participants first encoded emotionally ambiguous faces with emotion concepts (i.e., \"angry,\"\"happy\") and later passively viewed the faces without the concepts. Memory for the faces was also measured. At initial encoding, participants displayed more smiling-related EMG activity in response to faces paired with \"happy\" than in response to faces paired with \"angry.\" Later, in the absence of concepts, participants remembered happiness-encoded faces as happier than anger-encoded faces. Further, during passive reexposure to the ambiguous faces, participants' EMG indicated spontaneous emotion-specific mimicry, which in turn predicted memory bias. No specific EMG activity was observed when participants encoded or viewed faces with non-emotion-related valenced concepts, or when participants encoded or viewed Chinese ideographs. From an embodiment perspective, emotion simulation is a measure of what is currently perceived. Thus, these findings provide evidence of genuine concept-driven changes in emotion perception. More generally, the findings highlight embodiment's role in the representation and processing of emotional information.",
"title": ""
},
{
"docid": "8f930fc4f06f8b17e2826f0975af1fa1",
"text": "Smart parking is a typical IoT application that can benefit from advances in sensor, actuator and RFID technologies to provide many services to its users and parking owners of a smart city. This paper considers a smart parking infrastructure where sensors are laid down on the parking spots to detect car presence and RFID readers are embedded into parking gates to identify cars and help in the billing of the smart parking. Both types of devices are endowed with wired and wireless communication capabilities for reporting to a gateway where the situation recognition is performed. The sensor devices are tasked to play one of the three roles: (1) slave sensor nodes located on the parking spot to detect car presence/absence; (2) master nodes located at one of the edges of a parking lot to detect presence and collect the sensor readings from the slave nodes; and (3) repeater sensor nodes, also called \"anchor\" nodes, located strategically at specific locations in the parking lot to increase the coverage and connectivity of the wireless sensor network. While slave and master nodes are placed based on geographic constraints, the optimal placement of the relay/anchor sensor nodes in smart parking is an important parameter upon which the cost and efficiency of the parking system depends. We formulate the optimal placement of sensors in smart parking as an integer linear programming multi-objective problem optimizing the sensor network engineering efficiency in terms of coverage and lifetime maximization, as well as its economic gain in terms of the number of sensors deployed for a specific coverage and lifetime. We propose an exact solution to the node placement problem using single-step and two-step solutions implemented in the Mosel language based on the Xpress-MPsuite of libraries. Experimental results reveal the relative efficiency of the single-step compared to the two-step model on different performance parameters. 
These results are consolidated by simulation results, which reveal that our solution outperforms a random placement in terms of both energy consumption, delay and throughput achieved by a smart parking network.",
"title": ""
},
{
"docid": "413d0b457cc1b96bf65d8a3e1c98ed41",
"text": "Peer-to-peer (P2P) lending is a fast growing financial technology (FinTech) trend that is displacing traditional retail banking. Studies on P2P lending have focused on predicting individual interest rates or default probabilities. However, the relationship between aggregated P2P interest rates and the general economy will be of interest to investors and borrowers as the P2P credit market matures. We show that the variation in P2P interest rates across grade types are determined by three macroeconomic latent factors formed by Canonical Correlation Analysis (CCA) — macro default, investor uncertainty, and the fundamental value of the market. However, the variation in P2P interest rates across term types cannot be explained by the general economy.",
"title": ""
},
{
"docid": "85c360e0354e5eab69dc26b7a2dd715e",
"text": "1,2,3,4 Department of Information Technology, Matoshri Collage of Engineering & Reasearch Centre Eklahare, Nashik, India ---------------------------------------------------------------------***--------------------------------------------------------------------Abstract Waste management is one of the primary problem that the world faces irrespective of the case of developed or developing country. The key issue in the waste management is that the garbage bin at public places gets overflowed well in advance before the commencement of the next cleaning process. It in turn leads to various hazards such as bad odor & ugliness to that place which may be the root cause for spread of various diseases. To avoid all such hazardous scenario and maintain public cleanliness and health this work is mounted on a smart garbage system. The main theme of the work is to develop a smart intelligent garbage alert system for a proper garbage management .This paper proposes a smart alert system for garbage clearance by giving an alert signal to the municipal web server for instant cleaning of dustbin with proper verification based on level of garbage filling. This process is aided by the ultrasonic sensor which is interfaced with Arduino UNO to check the level of garbage filled in the dustbin and sends the alert to the municipal web server once if garbage is filled . After cleaning the dustbin, the driver confirms the task of emptying the garbage with the aid of RFID Tag. RFID is a computing technology that is used for verification process and in addition, it also enhances the smart garbage alert system by providing automatic identification of garbage filled in the dustbin and sends the status of clean-up to the server affirming that the work is done. The whole process is upheld by an embedded module integrated with RF ID and IOT Facilitation. 
The real time status of how waste collection is being done could be monitored and followed up by the municipality authority with the aid of this system. In addition to this the necessary remedial / alternate measures could be adapted. An Android application is developed and linked to a web server to intimate the alerts from the microcontroller to the urban office and to perform the remote monitoring of the cleaning process, done by the workers, thereby reducing the manual process of monitoring and verification. The notifications are sent to the Android application using Wi-Fi module.",
"title": ""
},
{
"docid": "469e5c159900b9d6662a9bfe9e01fde7",
"text": "In the research of rule extraction from neural networks,fidelity describes how well the rules mimic the behavior of a neural network whileaccuracy describes how well the rules can be generalized. This paper identifies thefidelity-accuracy dilemma. It argues to distinguishrule extraction using neural networks andrule extraction for neural networks according to their different goals, where fidelity and accuracy should be excluded from the rule quality evaluation framework, respectively.",
"title": ""
},
{
"docid": "dceef3bbc02b4c83918d87d56cad863e",
"text": "In this paper we present an automated way of using spare CPU resources within a shared memory multi-processor or multi-core machine. Our approach is (i) to profile the execution of a program, (ii) from this to identify pieces of work which are promising sources of parallelism, (iii) recompile the program with this work being performed speculatively via a work-stealing system and then (iv) to detect at run-time any attempt to perform operations that would reveal the presence of speculation.\n We assess the practicality of the approach through an implementation based on GHC 6.6 along with a limit study based on the execution profiles we gathered. We support the full Concurrent Haskell language compiled with traditional optimizations and including I/O operations and synchronization as well as pure computation. We use 20 of the larger programs from the 'nofib' benchmark suite. The limit study shows that programs vary a lot in the parallelism we can identify: some have none, 16 have a potential 2x speed-up, 4 have 32x. In practice, on a 4-core processor, we get 10-80% speed-ups on 7 programs. This is mainly achieved at the addition of a second core rather than beyond this.\n This approach is therefore not a replacement for manual parallelization, but rather a way of squeezing extra performance out of the threads of an already-parallel program or out of a program that has not yet been parallelized.",
"title": ""
},
{
"docid": "e8478d17694b39bd252175139a5ca14d",
"text": "Building a computationally creative system is a challenging undertaking. While such systems are beginning to proliferate, and a good number of them have been reasonably well-documented, it may seem, especially to newcomers to the field, that each system is a bespoke design that bears little chance of revealing any general knowledge about CC system building. This paper seeks to dispel this concern by presenting an abstract CC system description, or, in other words a practical, general approach for constructing CC systems.",
"title": ""
},
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "b63e88701018a80a7815ee43b62e90fd",
"text": "Educational data mining and learning analytics promise better understanding of student behavior and knowledge, as well as new information on the tacit factors that contribute to student actions. This knowledge can be used to inform decisions related to course and tool design and pedagogy, and to further engage students and guide those at risk of failure. This working group report provides an overview of the body of knowledge regarding the use of educational data mining and learning analytics focused on the teaching and learning of programming. In a literature survey on mining students' programming processes for 2005-2015, we observe a significant increase in work related to the field. However, the majority of the studies focus on simplistic metric analysis and are conducted within a single institution and a single course. This indicates the existence of further avenues of research and a critical need for validation and replication to better understand the various contributing factors and the reasons why certain results occur. We introduce a novel taxonomy to analyse replicating studies and discuss the importance of replicating and reproducing previous work. We describe what is the state of the art in collecting and sharing programming data. To better understand the challenges involved in replicating or reproducing existing studies, we report our experiences from three case studies using programming data. Finally, we present a discussion of future directions for the education and research community.",
"title": ""
},
{
"docid": "f3e63f3fb0ce0e74697e0a74867d9671",
"text": "Convolutional Neural Networks (CNN) have been successfully applied to autonomous driving tasks, many in an end-to-end manner. Previous end-to-end steering control methods take an image or an image sequence as the input and directly predict the steering angle with CNN. Although single task learning on steering angles has reported good performances, the steering angle alone is not sufficient for vehicle control. In this work, we propose a multi-task learning framework to predict the steering angle and speed control simultaneously in an end-to-end manner. Since it is nontrivial to predict accurate speed values with only visual inputs, we first propose a network to predict discrete speed commands and steering angles with image sequences. Moreover, we propose a multi-modal multi-task network to predict speed values and steering angles by taking previous feedback speeds and visual recordings as inputs. Experiments are conducted on the public Udacity dataset and a newly collected SAIC dataset. Results show that the proposed model predicts steering angles and speed values accurately. Furthermore, we improve the failure data synthesis methods to solve the problem of error accumulation in real road tests.",
"title": ""
},
{
"docid": "4765f21109d36fb2631325fd0442aeac",
"text": "The functions of rewards are based primarily on their effects on behavior and are less directly governed by the physics and chemistry of input events as in sensory systems. Therefore, the investigation of neural mechanisms underlying reward functions requires behavioral theories that can conceptualize the different effects of rewards on behavior. The scientific investigation of behavioral processes by animal learning theory and economic utility theory has produced a theoretical framework that can help to elucidate the neural correlates for reward functions in learning, goal-directed approach behavior, and decision making under uncertainty. Individual neurons can be studied in the reward systems of the brain, including dopamine neurons, orbitofrontal cortex, and striatum. The neural activity can be related to basic theoretical terms of reward and uncertainty, such as contiguity, contingency, prediction error, magnitude, probability, expected value, and variance.",
"title": ""
},
{
"docid": "faa6f6dff0ed9b8b6eba8991c93a25fc",
"text": "We present a system for Answer Selection that integrates fine-grained Question Classification with a Deep Learning model designed for Answer Selection. We detail the necessary changes to the Question Classification taxonomy and system, the creation of a new Entity Identification system and methods of highlighting entities to achieve this objective. Our experiments show that Question Classes are a strong signal to Deep Learning models for Answer Selection, and enable us to outperform the current state of the art in all variations of our experiments except one. In the best configuration, our MRR and MAP scores outperform the current state of the art by between 3 and 5 points on both versions of the TREC Answer Selection test set, a standard dataset for this task.",
"title": ""
},
{
"docid": "49cf26b6c6dde96df9009a68758ee506",
"text": "Dynamic imaging is a recently proposed action description paradigm for simultaneously capturing motion and temporal evolution information, particularly in the context of deep convolutional neural networks (CNNs). Compared with optical flow for motion characterization, dynamic imaging exhibits superior efficiency and compactness. Inspired by the success of dynamic imaging in RGB video, this study extends it to the depth domain. To better exploit three-dimensional (3D) characteristics, multi-view dynamic images are proposed. In particular, the raw depth video is densely projected with ∗Corresponding author. Tel.: +86 27 87558918 Email addresses: Yang [email protected] (Yang Xiao), [email protected] (Jun Chen), yancheng [email protected] (Yancheng Wang), [email protected] (Zhiguo Cao), [email protected] (Joey Tianyi Zhou), [email protected] (Xiang Bai) Preprint submitted to Information Sciences December 31, 2018 ar X iv :1 80 6. 11 26 9v 3 [ cs .C V ] 2 7 D ec 2 01 8 respect to different virtual imaging viewpoints by rotating the virtual camera within the 3D space. Subsequently, dynamic images are extracted from the obtained multi-view depth videos and multi-view dynamic images are thus constructed from these images. Accordingly, more view-tolerant visual cues can be involved. A novel CNN model is then proposed to perform feature learning on multi-view dynamic images. Particularly, the dynamic images from different views share the same convolutional layers but correspond to different fully connected layers. This is aimed at enhancing the tuning effectiveness on shallow convolutional layers by alleviating the gradient vanishing problem. Moreover, as the spatial occurrence variation of the actions may impair the CNN, an action proposal approach is also put forth. In experiments, the proposed approach can achieve state-of-the-art performance on three challenging datasets.",
"title": ""
},
{
"docid": "ba8467f6b5a28a2b076f75ac353334a0",
"text": "Progress in science has advanced the development of human society across history, with dramatic revolutions shaped by information theory, genetic cloning, and artificial intelligence, among the many scientific achievements produced in the 20th century. However, the way that science advances itself is much less well-understood. In this work, we study the evolution of scientific development over the past century by presenting an anatomy of 89 million digitalized papers published between 1900 and 2015. We find that science has benefited from the shift from individual work to collaborative effort, with over 90% of the world-leading innovations generated by collaborations in this century, nearly four times higher than they were in the 1900s. We discover that rather than the frequent myopic- and self-referencing that was common in the early 20th century, modern scientists instead tend to look for literature further back and farther around. Finally, we also observe the globalization of scientific development from 1900 to 2015, including 25-fold and 7-fold increases in international collaborations and citations, respectively, as well as a dramatic decline in the dominant accumulation of citations by the US, the UK, and Germany, from ~95% to ~50% over the same period. Our discoveries are meant to serve as a starter for exploring the visionary ways in which science has developed throughout the past century, generating insight into and an impact upon the current scientific innovations and funding policies.",
"title": ""
},
{
"docid": "4ede3f2caa829e60e4f87a9b516e28bd",
"text": "This report describes the difficulties of training neural networks and in particular deep neural networks. It then provides a literature review of training methods for deep neural networks, with a focus on pre-training. It focuses on Deep Belief Networks composed of Restricted Boltzmann Machines and Stacked Autoencoders and provides an outreach on further and alternative approaches. It also includes related practical recommendations from the literature on training them. In the second part, initial experiments using some of the covered methods are performed on two databases. In particular, experiments are performed on the MNIST hand-written digit dataset and on facial emotion data from a Kaggle competition. The results are discussed in the context of results reported in other research papers. An error rate lower than the best contribution to the Kaggle competition is achieved using an optimized Stacked Autoencoder.",
"title": ""
},
{
"docid": "5898f4adaf86393972bcbf4c4ab91540",
"text": "This paper presents a non-intrusive approach for monitoring driver drowsiness using the fusion of several optimized indicators based on driver physical and driving performance measures, obtained from ADAS (Advanced Driver Assistant Systems) in simulated conditions. The paper is focused on real-time drowsiness detection technology rather than on long-term sleep/awake regulation prediction technology. We have developed our own vision system in order to obtain robust and optimized driver indicators able to be used in simulators and future real environments. These indicators are principally based on driver physical and driving performance skills. The fusion of several indicators, proposed in the literature, is evaluated using a neural network and a stochastic optimization method to obtain the best combination. We propose a new method for ground-truth generation based on a supervised Karolinska Sleepiness Scale (KSS). An extensive evaluation of indicators, derived from trials over a third generation simulator with several test subjects during different driving sessions, was performed. The main conclusions about the performance of single indicators and the best combinations of them are included, as well as the future works derived from this study.",
"title": ""
}
] | scidocsrr |
d7ce4517a8cd27f74a65cfabfe120039 | LightBox: SGX-assisted Secure Network Functions at Near-native Speed | [
{
"docid": "2f2801e502492a648a0758b6e33fe19d",
"text": "Intel is developing the Intel® Software Guard Extensions (Intel® SGX) technology, an extension to Intel® Architecture for generating protected software containers. The container is referred to as an enclave. Inside the enclave, software’s code, data, and stack are protected by hardware enforced access control policies that prevent attacks against the enclave’s content. In an era where software and services are deployed over the Internet, it is critical to be able to securely provision enclaves remotely, over the wire or air, to know with confidence that the secrets are protected and to be able to save secrets in non-volatile memory for future use. This paper describes the technology components that allow provisioning of secrets to an enclave. These components include a method to generate a hardware based attestation of the software running inside an enclave and a means for enclave software to seal secrets and export them outside of the enclave (for example store them in non-volatile memory) such that only the same enclave software would be able un-seal them back to their original form.",
"title": ""
},
{
"docid": "25a28d9319013ef1a38823d273098ebb",
"text": "Many systems run rich analytics on sensitive data in the cloud, but are prone to data breaches. Hardware enclaves promise data confidentiality and secure execution of arbitrary computation, yet still suffer from access pattern leakage. We propose Opaque, a distributed data analytics platform supporting a wide range of queries while providing strong security guarantees. Opaque introduces new distributed oblivious relational operators that hide access patterns, and new query planning techniques to optimize these new operators. Opaque is implemented on Spark SQL with few changes to the underlying system. Opaque provides data encryption, authentication and computation verification with a performance ranging from 52% faster to 3.3x slower as compared to vanilla Spark SQL; obliviousness comes with a 1.6–46x overhead. Opaque provides an improvement of three orders of magnitude over state-of-the-art oblivious protocols, and our query optimization techniques improve performance by 2–5x.",
"title": ""
}
] | [
{
"docid": "502d31f5f473f3e93ee86bdfd79e0d75",
"text": "The call-by-need lambda calculus provides an equational framework for reasoning syntactically about lazy evaluation. This paper examines its operational characteristics.\n By a series of reasoning steps, we systematically unpack the standard-order reduction relation of the calculus and discover a novel abstract machine definition which, like the calculus, goes \"under lambdas.\" We prove that machine evaluation is equivalent to standard-order evaluation.\n Unlike traditional abstract machines, delimited control plays a significant role in the machine's behavior. In particular, the machine replaces the manipulation of a heap using store-based effects with disciplined management of the evaluation stack using control-based effects. In short, state is replaced with control.\n To further articulate this observation, we present a simulation of call-by-need in a call-by-value language using delimited control operations.",
"title": ""
},
{
"docid": "0cd96187b257ee09060768650432fe6d",
"text": "Sustainable urban mobility is an important dimension in a Smart City, and one of the key issues for city sustainability. However, innovative and often costly mobility policies and solutions introduced by cities are liable to fail, if not combined with initiatives aimed at increasing the awareness of citizens, and promoting their behavioural change. This paper explores the potential of gamification mechanisms to incentivize voluntary behavioural changes towards sustainable mobility solutions. We present a service-based gamification framework, developed within the STREETLIFE EU Project, which can be used to develop games on top of existing services and systems within a Smart City, and discuss the empirical findings of an experiment conducted in the city of Rovereto on the effectiveness of gamification to promote sustainable urban mobility.",
"title": ""
},
{
"docid": "69b5c883c7145d2184f77c92e61b2542",
"text": "WiFi network traffics will be expected to increase sharply in the coming years, since WiFi network is commonly used for local area connectivity. Unfortunately, there are difficulties in WiFi network research beforehand, since there is no common dataset between researchers on this area. Recently, AWID dataset was published as a comprehensive WiFi network dataset, which derived from real WiFi traces. The previous work on this AWID dataset was unable to classify Impersonation Attack sufficiently. Hence, we focus on optimizing the Impersonation Attack detection. Feature selection can overcome this problem by selecting the most important features for detecting an arbitrary class. We leverage Artificial Neural Network (ANN) for the feature selection and apply Stacked Auto Encoder (SAE), a deep learning algorithm as a classifier for AWID Dataset. Our experiments show that the reduced input features have significantly improved to detect the Impersonation Attack.",
"title": ""
},
{
"docid": "43f200b97e2b6cb9c62bbbe71bed72e3",
"text": "We compare nonreturn-to-zero (NRZ) with return-to-zero (RZ) modulation format for wavelength-division-multiplexed systems operating at data rates up to 40 Gb/s. We find that in 10-40-Gb/s dispersion-managed systems (single-mode fiber alternating with dispersion compensating fiber), NRZ is more adversely affected by nonlinearities, whereas RZ is more affected by dispersion. In this dispersion map, 10- and 20-Gb/s systems operate better using RZ modulation format because nonlinearity dominates. However, 40-Gb/s systems favor the usage of NRZ because dispersion becomes the key limiting factor at 40 Gb/s.",
"title": ""
},
{
"docid": "040c577ee6146a72edfd664b9d6aa1ae",
"text": "We focus on the role that community plays in the continuum of disaster preparedness, response and recovery, and we explore where community fits in conceptual frameworks concerning disaster decision-making. We offer an overview of models developed in the literature as well as insights drawn from research related to Hurricane Katrina. Each model illustrates some aspect of the spectrum of disaster preparedness and recovery, beginning with risk perception and vulnerability assessments, and proceeding to notions of resiliency and capacity building. Concepts like social resilience are related to theories of ‘‘social capital,’’ which stress the importance of social networks, reciprocity, and interpersonal trust. These allow individuals and groups to accomplish greater things than they could by their isolated efforts. We trace two contrasting notions of community to Tocqueville. On the one hand, community is simply an aggregation of individual persons, that is, a population. As individuals, they have only limited capacity to act effectively or make decisions for themselves, and they are strongly subject to administrative decisions that authorities impose on them. On the other hand, community is an autonomous actor, with its own interests, preferences, resources, and capabilities. This definition of community has also been embraced by community-based participatory researchers and has been thought to offer an approach that is more active and advocacy oriented. We conclude with a discussion of the strengths and weaknesses of community in disaster response and in disaster research.",
"title": ""
},
{
"docid": "0552c786fe0030df69b2095d78c20485",
"text": "In recent years, real-time processing and analytics systems for big data--in the context of Business Intelligence (BI)--have received a growing attention. The traditional BI platforms that perform regular updates on daily, weekly or monthly basis are no longer adequate to satisfy the fast-changing business environments. However, due to the nature of big data, it has become a challenge to achieve the real-time capability using the traditional technologies. The recent distributed computing technology, MapReduce, provides off-the-shelf high scalability that can significantly shorten the processing time for big data; Its open-source implementation such as Hadoop has become the de-facto standard for processing big data, however, Hadoop has the limitation of supporting real-time updates. The improvements in Hadoop for the real-time capability, and the other alternative real-time frameworks have been emerging in recent years. This paper presents a survey of the open source technologies that support big data processing in a real-time/near real-time fashion, including their system architectures and platforms.",
"title": ""
},
{
"docid": "dcf4278becbc530d9648b5df4a64ec53",
"text": "Variable speed operation is essential for large wind turbines in order to optimize the energy capture under variable wind speed conditions. Variable speed wind turbines require a power electronic interface converter to permit connection with the grid. The power electronics can be either partially-rated or fully-rated [1]. A popular interface method for large wind turbines that is based on a partiallyrated interface is the doubly-fed induction generator (DFIG) system [2]. In the DFIG system, the power electronic interface controls the rotor currents in order to control the electrical torque and thus the rotational speed. Because the power electronics only process the rotor power, which is typically less than 25% of the overall output power, the DFIG offers the advantages of speed control for a reduction in cost and power losses. This report presents a DFIG wind turbine system that is modeled in PLECS and Simulink. A full electrical model that includes the switching converter implementation for the rotor-side power electronics and a dq model of the induction machine is given. The aerodynamics of the wind turbine and the mechanical dynamics of the induction machine are included to extend the use of the model to simulating system operation under variable wind speed conditions. For longer simulations that include these slower mechanical and wind dynamics, an averaged PWM converter model is presented. The averaged electrical model offers improved simulation speed at the expense of neglecting converter switching detail.",
"title": ""
},
{
"docid": "28f1b7635b777cf278cc8d53a5afafb9",
"text": "Visual Question Answering (VQA) is the task of taking as input an image and a free-form natural language question about the image, and producing an accurate answer. In this work we view VQA as a “feature extraction” module to extract image and caption representations. We employ these representations for the task of image-caption ranking. Each feature dimension captures (imagines) whether a fact (question-answer pair) could plausibly be true for the image and caption. This allows the model to interpret images and captions from a wide variety of perspectives. We propose score-level and representation-level fusion models to incorporate VQA knowledge in an existing state-of-the-art VQA-agnostic image-caption ranking model. We find that incorporating and reasoning about consistency between images and captions significantly improves performance. Concretely, our model improves state-of-the-art on caption retrieval by 7.1% and on image retrieval by 4.4% on the MSCOCO dataset.",
"title": ""
},
{
"docid": "9514201894e516d888c593dbade709bc",
"text": "Code obfuscation is a technique to transform a program into an equivalent one that is harder to be reverse engineered and understood. On Android, well-known obfuscation techniques are shrinking, optimization, renaming, string encryption, control flow transformation, etc. On the other hand, adversaries may also maliciously use obfuscation techniques to hide pirated or stolen software. If pirated software were obfuscated, it would be difficult to detect software theft. To detect illegal software transformed by code obfuscation, one possible approach is to measure software similarity between original and obfuscated programs and determine whether the obfuscated version is an illegal copy of the original version. In this paper, we analyze empirically the effects of code obfuscation on Android app similarity analysis. The empirical measurements were done on five different Android apps with DashO obfuscator. Experimental results show that similarity measures at bytecode level are more effective than those at source code level to analyze software similarity.",
"title": ""
},
{
"docid": "674d347526e5ea2677eec2f2b816935b",
"text": "PATIENT\nMale, 70 • Male, 84.\n\n\nFINAL DIAGNOSIS\nAppendiceal mucocele and pseudomyxoma peritonei.\n\n\nSYMPTOMS\n-.\n\n\nMEDICATION\n-.\n\n\nCLINICAL PROCEDURE\n-.\n\n\nSPECIALTY\nSurgery.\n\n\nOBJECTIVE\nRare disease.\n\n\nBACKGROUND\nMucocele of the appendix is an uncommon cystic lesion characterized by distension of the appendiceal lumen with mucus. Most commonly, it is the result of epithelial proliferation, but it can also be caused by inflammation or obstruction of the appendix. When an underlying mucinous cystadenocarcinoma exists, spontaneous or iatrogenic rupture of the mucocele can lead to mucinous intraperitoneal ascites, a syndrome known as pseudomyxoma peritonei.\n\n\nCASE REPORT\nWe report 2 cases that represent the clinical extremities of this heterogeneous disease; an asymptomatic mucocele of the appendix in a 70-year-old female and a case of pseudomyxoma peritonei in an 84-year-old male. Subsequently, we review the current literature focusing to the optimal management of both conditions.\n\n\nCONCLUSIONS\nMucocele of the appendix is a rare disease, usually diagnosed on histopathologic examination of appendectomized specimens. Due to the existing potential for malignant transformation and pseudomyxoma peritonei caused by rupture of the mucocele, extensive preoperative evaluation and thorough intraoperative gastrointestinal and peritoneal examination is required.",
"title": ""
},
{
"docid": "f38530be19fc66121fbce56552ade0ea",
"text": "A fully integrated low-dropout-regulated step-down multiphase-switched-capacitor DC-DC converter (a.k.a. charge pump, CP) with a fast-response adaptive-phase (Fast-RAP) digital controller is designed using a 65-nm CMOS process. Different from conventional designs, a low-dropout regulator (LDO) with an NMOS power stage is used without the need for an additional stepup CP for driving. A clock tripler and a pulse divider are proposed to enable the Fast-RAP control. As the Fast-RAP digital controller is designed to be able to respond faster than the cascaded linear regulator, transient response will not be affected by the adaptive scheme. Thus, light-load efficiency is improved without sacrificing the response time. When the CP operates at 90 MHz with 80.3% CP efficiency, only small ripples would appear on the CP output with the 18-phase interleaving scheme, and be further attenuated at VOUT by the 50-mV dropout regulator with only 4.1% efficiency overhead and 6.5% area overhead. The output ripple is less than 2 mV for a load current of 20 mA.",
"title": ""
},
{
"docid": "f515695b3d404d29a12a5e8e58a91fc0",
"text": "One area of positive psychology analyzes subjective well-being (SWB), people's cognitive and affective evaluations of their lives. Progress has been made in understanding the components of SWB, the importance of adaptation and goals to feelings of well-being, the temperament underpinnings of SWB, and the cultural influences on well-being. Representative selection of respondents, naturalistic experience sampling measures, and other methodological refinements are now used to study SWB and could be used to produce national indicators of happiness.",
"title": ""
},
{
"docid": "1b5655b91ccd844b5925d329456e3de8",
"text": "In this paper we address the problem of grounding distributional representations of lexical meaning. We introduce a new model which uses stacked autoencoders to learn higher-level embeddings from textual and visual input. The two modalities are encoded as vectors of attributes and are obtained automatically from text and images, respectively. We evaluate our model on its ability to simulate similarity judgments and concept categorization. On both tasks, our approach outperforms baselines and related models.",
"title": ""
},
{
"docid": "f14f6d95f13ca6f92fe14c59e3ad0c81",
"text": "The ever-increasing representativeness of software maintenance in the daily effort of software team requires initiatives for enhancing the activities accomplished to provide a good service for users who request a software improvement. This article presents a quantitative approach for evaluating software maintenance services based on cluster analysis techniques. The proposed approach provides a compact characterization of the services delivered by a maintenance organization, including characteristics such as service, waiting, and queue time. The ultimate goal is to help organizations to better understand, manage, and improve their current software maintenance process. We also report in this paper the usage of the proposed approach in a medium-sized organization throughout 2010. This case study shows that 72 software maintenance requests can be grouped in seven distinct clusters containing requests with similar characteristics. The in-depth analysis of the clusters found with our approach can foster the understanding of the nature of the requests and, consequently, it may improve the process followed by the software maintenance team.",
"title": ""
},
{
"docid": "ac8a620e752144e3f4e20c16efb56ebc",
"text": "or as ventricular fibrillation, the circulation must be restored promptly; otherwise anoxia will result in irreversible damage. There are two techniques that may be used to meet the emergency: one is to open the chest and massage the heart directly and the other is to accomplish the same end by a new method of closed-chest cardiac massage. The latter method is described in this communication. The closed-chest alternating current defibrillator ' that",
"title": ""
},
{
"docid": "a387781a96a39448ca22b49154aaf80c",
"text": "LEGO is a globally popular toy composed of colorful interlocking plastic bricks that can be assembled in many ways; however, this special feature makes designing a LEGO sculpture particularly challenging. Building a stable sculpture is not easy for a beginner; even an experienced user requires a good deal of time to build one. This paper provides a novel approach to creating a balanced LEGO sculpture for a 3D model in any pose, using centroid adjustment and inner engraving. First, the input 3D model is transformed into a voxel data structure. Next, the model’s centroid is adjusted to an appropriate position using inner engraving to ensure that the model stands stably. A model can stand stably without any struts when the center of mass is moved to the ideal position. Third, voxels are merged into layer-by-layer brick layout assembly instructions. Finally, users will be able to build a LEGO sculpture by following these instructions. The proposed method is demonstrated with a number of LEGO sculptures and the results of the physical experiments are presented.",
"title": ""
},
{
"docid": "37af5d5ee2e4f6b94aa5c93d12f98017",
"text": "This paper reviews prior research in management accounting innovations covering the period 1926-2008. Management accounting innovations refer to the adoption of “newer” or modern forms of management accounting systems such as activity-based costing, activity-based management, time-driven activity-based costing, target costing, and balanced scorecards. Although some prior reviews, covering the period until 2000, place emphasis on modern management accounting techniques, however, we believe that the time gap between 2000 and 2008 could entail many new or innovative accounting issues. We find that research in management accounting innovations has intensified during the period 2000-2008, with the main focus has been on explaining various factors associated with the implementation and the outcome of an innovation. In addition, research in management accounting innovations indicates the dominant use of sociological-based theories and increasing use of field studies. We suggest some directions for future research pertaining to management accounting innovations.",
"title": ""
},
{
"docid": "0e514c165e362de91764f3ddd2a09e15",
"text": "The authors examined how networks of teams integrate their efforts to succeed collectively. They proposed that integration processes used to align efforts among multiple teams are important predictors of multiteam performance. The authors used a multiteam system (MTS) simulation to assess how both cross-team and within-team processes relate to MTS performance over multiple performance episodes that differed in terms of required interdependence levels. They found that cross-team processes predicted MTS performance beyond that accounted for by within-team processes. Further, cross-team processes were more important for MTS effectiveness when there were high cross-team interdependence demands as compared with situations in which teams could work more independently. Results are discussed in terms of extending theory and applications from teams to multiteam systems.",
"title": ""
},
{
"docid": "62cc85ab7517797f50ce5026fbc5617a",
"text": "OBJECTIVE\nTo assess for the first time the morphology of the lymphatic system in patients with lipedema and lipo-lymphedema of the lower extremities by MR lymphangiography.\n\n\nMATERIALS AND METHODS\n26 lower extremities in 13 consecutive patients (5 lipedema, 8 lipo-lymphedema) were examined by MR lymphangiography. 18 mL of gadoteridol and 1 mL of mepivacainhydrochloride 1% were subdivided into 10 portions and injected intracutaneously in the forefoot. MR imaging was performed with a 1.5-T system equipped with high-performance gradients. For MR lymphangiography, a 3D-spoiled gradient-echo sequence was used. For evaluation of the lymphedema a heavily T2-weighted 3D-TSE sequence was performed.\n\n\nRESULTS\nIn all 16 lower extremities (100%) with lipo-lymphedema, high signal intensity areas in the epifascial region could be detected on the 3D-TSE sequence. In the 16 examined lower extremities with lipo-lymphedema, 8 lower legs and 3 upper legs demonstrated enlarged lymphatic vessels up to a diameter of 3 mm. In two lower legs with lipo-lymphedema, an area of dermal back-flow was seen, indicating lymphatic outflow obstruction. In the 10 examined lower extremities with clinically pure lipedema, 4 lower legs and 2 upper legs demonstrated enlarged lymphatic vessels up to a diameter of 2 mm, indicating a subclinical status of lymphedema. In all examined extremities, the inguinal lymph nodes demonstrated a contrast material enhancement in the first image acquisition 15 min after injection.\n\n\nCONCLUSION\nMR lymphangiography is a safe and accurate minimal-invasive imaging modality for the evaluation of the lymphatic circulation in patients with lipedema and lipo-lymphedema of the lower extremities. 
If the extent of lymphatic involvement is unclear at the initial clinical examination or requires a better definition for optimal therapeutic planning, MR lymphangiography is able to identify the anatomic and physiological derangements and to establish an objective baseline.",
"title": ""
},
{
"docid": "2b1649b47d2615f3e33c9506dabdc6c6",
"text": "In 1994, amongst a tide of popular books on virtual reality, Grigore Burdea and Philippe Coiffet published a well researched review of the field. Their book, “Virtual Reality Technology,” was notable because it was the first to contain detailed information on force and tactile feedback, areas in which both the authors have conducted extensive research. The book became a classic, and although not intended as such was adopted as the textbook of choice for many university classes in virtual reality. This was due in part to its broad review of the virtual reality technologies based on a strong engineering and scientific focus. Almost ten years later and Burdea and Coiffet have returned with a second edition that builds on the success of the first. While the content of the second edition is largely the same as the first, with almost identical chapter headings, there is a change in focus towards making this more of an educational tool. From their introduction on, it is clear that the authors intend for this to be used as a textbook. Each chapter is filled with definitions, graphs and equations, and ends with a set of review questions. More significantly the book has an accompanying CD which contains a number of excellent video clips and a complete laboratory manual with instruction on how to build desktop VR interfaces using VRML and Java 3D libraries. The manual is a 120 page book with 18 programming assignments and further homework questions. This book provides the instructor with almost all the material they might need for a course in virtual reality. The content itself is well written and researched. The authors have taken the material of the first book and updated much of it to reflect a decade of growth in the VR field. A strong theme running through the book is the rising dominance of PC-based virtual reality platforms, particularly in the chapter on computing architectures. 
Readers will be exposed to discussion on graphics rendering pipelines, PC graphics architecture, and clusters. In the fast-changing world of PC hardware some of the hardware mentioned has already become dated, but the content still gives an essential grounding in the technological principles. Discussion of hardware architectures is also complemented by chapters on input and display devices, modeling, and programming toolkits. These were also in the original edition, but have been updated to reflect the invention of devices such as the Phantom force-feedback arm, or new software toolkits such as Java 3D. Interestingly, rather than having a whole chapter on force feedback, this now becomes part of a more general chapter on output devices. Burdea’s own work on the Rutgers Master glove with force feedback is barely mentioned at all. As with any book on a field as rich as virtual reality it is impossible to cover all possible topics in significant depth. The authors handle this by providing hundreds of references to the relevant technical literature, enabling readers to study topics in as much depth as they are interested in. In the first book a separate bibliography and list of VR companies and laboratories was provided at the end of the book. In the second edition, references are provided at the end of each chapter. This makes each chapter more self-contained and suitable for studying in almost any order, once the introduction has been read. In this way the book provides an ideal introduction to a student or researcher who will want to know where to find out more. Despite its considerable strengths there are a number of weaknesses the authors might want to address when they produce a third edition. Some of these are minor. For example, the first edition had a collection of color photographs showing a variety of VR technologies and environments. 
Unfortunately these are missing from the second edition, and although the many black and white pictures are excellent, there are aspects of the technology that can be best understood by seeing them in color. As a teaching tool, it would have been good for the authors to provide more code samples on the enclosed CD. (Presence, Vol. 12, No. 6, December 2003, 663–664)",
"title": ""
}
] | scidocsrr |
b2686fb00b3264a78e511ea71d26b947 | Prenatal developmental origins of behavior and mental health: The influence of maternal stress in pregnancy | [
{
"docid": "8980bdf92581e8a0816364362fec409b",
"text": "OBJECTIVE\nPrenatal exposure to inappropriate levels of glucocorticoids (GCs) and maternal stress are putative mechanisms for the fetal programming of later health outcomes. The current investigation examined the influence of prenatal maternal cortisol and maternal psychosocial stress on infant physiological and behavioral responses to stress.\n\n\nMETHODS\nThe study sample comprised 116 women and their full term infants. Maternal plasma cortisol and report of stress, anxiety and depression were assessed at 15, 19, 25, 31 and 36 + weeks' gestational age. Infant cortisol and behavioral responses to the painful stress of a heel-stick blood draw were evaluated at 24 hours after birth. The association between prenatal maternal measures and infant cortisol and behavioral stress responses was examined using hierarchical linear growth curve modeling.\n\n\nRESULTS\nA larger infant cortisol response to the heel-stick procedure was associated with exposure to elevated concentrations of maternal cortisol during the late second and third trimesters. Additionally, a slower rate of behavioral recovery from the painful stress of a heel-stick blood draw was predicted by elevated levels of maternal cortisol early in pregnancy as well as prenatal maternal psychosocial stress throughout gestation. These associations could not be explained by mode of delivery, prenatal medical history, socioeconomic status or child race, sex or birth order.\n\n\nCONCLUSIONS\nThese data suggest that exposure to maternal cortisol and psychosocial stress exerts programming influences on the developing fetus with consequences for infant stress regulation.",
"title": ""
}
] | [
{
"docid": "b3ea5290cad741aa7c3da97ab1c24ccd",
"text": "Methods of alloplastic forehead augmentation using soft expanded polytetrafluoroethylene (ePTFE) and silicone implants are described. Soft ePTFE forehead implantation has the advantage of being technically simpler, with better fixation. The disadvantages are a limited degree of forehead augmentation and higher chance of infection. Properly fabricated soft silicone implants provide potential for larger degree of forehead silhouette augmentation with less risk of infection. The corrugated edge and central perforations of the implant minimize mobility and capsule contraction.",
"title": ""
},
{
"docid": "b120095067684a67fe3327d18860e760",
"text": "We present a flexible method for fusing information from optical and range sensors based on an accelerated high-dimensional filtering approach. Our system takes as input a sequence of monocular camera images as well as a stream of sparse range measurements as obtained from a laser or other sensor system. In contrast with existing approaches, we do not assume that the depth and color data streams have the same data rates or that the observed scene is fully static. Our method produces a dense, high-resolution depth map of the scene, automatically generating confidence values for every interpolated depth point. We describe how to integrate priors on object motion and appearance and how to achieve an efficient implementation using parallel processing hardware such as GPUs.",
"title": ""
},
{
"docid": "dae877409dca88fc6fed5cf6536e65ad",
"text": "My 1971 Turing Award Lecture was entitled \"Generality in Artificial Intelligence.\" The topic turned out to have been overambitious in that I discovered I was unable to put my thoughts on the subject in a satisfactory written form at that time. It would have been better to have reviewed my previous work rather than attempt something new, but such was not my custom at that time.\nI am grateful to ACM for the opportunity to try again. Unfortunately for our science, although perhaps fortunately for this project, the problem of generality in artificial intelligence (AI) is almost as unsolved as ever, although we now have many ideas not available in 1971. This paper relies heavily on such ideas, but it is far from a full 1987 survey of approaches for achieving generality. Ideas are therefore discussed at a length proportional to my familiarity with them rather than according to some objective criterion.\nIt was obvious in 1971 and even in 1958 that AI programs suffered from a lack of generality. It is still obvious; there are many more details. The first gross symptom is that a small addition to the idea of a program often involves a complete rewrite beginning with the data structures. Some progress has been made in modularizing data structures, but small modifications of the search strategies are even less likely to be accomplished without rewriting.\nAnother symptom is no one knows how to make a general database of commonsense knowledge that could be used by any program that needed the knowledge. Along with other information, such a database would contain what a robot would need to know about the effects of moving objects around, what a person can be expected to know about his family, and the facts about buying and selling. This does not depend on whether the knowledge is to be expressed in a logical language or in some other formalism. 
When we take the logic approach to AI, lack of generality shows up in that the axioms we devise to express commonsense knowledge are too restricted in their applicability for a general commonsense database. In my opinion, getting a language for expressing general commonsense knowledge for inclusion in a general database is the key problem of generality in AI.\nHere are some ideas for achieving generality proposed both before and after 1971. I repeat my disclaimer of comprehensiveness.",
"title": ""
},
{
"docid": "7abdd1fc5f2a8c5b7b19a6a30eadad0a",
"text": "This paper investigates action recognition using Extreme Gradient Boosting (XGBoost). XGBoost is a supervised classification technique that uses an ensemble of decision trees. In this study, we also compare the performance of XGBoost against two other machine learning techniques, Support Vector Machine (SVM) and Naive Bayes (NB). The experimental study on the human action dataset shows that XGBoost outperforms SVM and NB in classification accuracy. Although it takes more computational time, XGBoost performs good classification on action recognition.",
"title": ""
},
{
"docid": "3a8be402f75af666076f441c124ac911",
"text": "This paper presents a large and systematic body of data on the relative effectiveness of mutation, crossover, and combinations of mutation and crossover in genetic programming (GP). The literature of traditional genetic algorithms contains related studies, but mutation and crossover in GP differ from their traditional counterparts in significant ways. In this paper we present the results from a very large experimental data set, the equivalent of approximately 12,000 typical runs of a GP system, systematically exploring a range of parameter settings. The resulting data may be useful not only for practitioners seeking to optimize parameters for GP runs, but also for theorists exploring issues such as the role of “building blocks” in GP.",
"title": ""
},
{
"docid": "f23ff5a1275911d47459fa9304b4cf7f",
"text": "The input to a neural sequence-to-sequence model is often determined by an up-stream system, e.g. a word segmenter, part of speech tagger, or speech recognizer. These up-stream models are potentially error-prone. Representing inputs through word lattices allows making this uncertainty explicit by capturing alternative sequences and their posterior probabilities in a compact form. In this work, we extend the TreeLSTM (Tai et al., 2015) into a LatticeLSTM that is able to consume word lattices, and can be used as encoder in an attentional encoder-decoder model. We integrate lattice posterior scores into this architecture by extending the TreeLSTM’s child-sum and forget gates and introducing a bias term into the attention mechanism. We experiment with speech translation lattices and report consistent improvements over baselines that translate either the 1-best hypothesis or the lattice without posterior scores.",
"title": ""
},
{
"docid": "9737e400108f6327be17d23db07b2e75",
"text": "While recent deep monocular depth estimation approaches based on supervised regression have achieved remarkable performance, costly ground truth annotations are required during training. To cope with this issue, in this paper we present a novel unsupervised deep learning approach for predicting depth maps and show that the depth estimation task can be effectively tackled within an adversarial learning framework. Specifically, we propose a deep generative network that learns to predict the correspondence field (i.e. the disparity map) between two image views in a calibrated stereo camera setting. The proposed architecture consists of two generative sub-networks jointly trained with adversarial learning for reconstructing the disparity map and organized in a cycle such as to provide mutual constraints and supervision to each other. Extensive experiments on the publicly available datasets KITTI and Cityscapes demonstrate the effectiveness of the proposed model and competitive results with state of the art methods. The code is available at https://github.com/andrea-pilzer/unsup-stereo-depthGAN",
"title": ""
},
{
"docid": "5519eea017d8f69804060f5e40748b1a",
"text": "The nonlinear Fourier transform is a transmission and signal processing technique that makes positive use of the Kerr nonlinearity in optical fibre channels. I will overview recent advances and some of challenges in this field.",
"title": ""
},
{
"docid": "69624d1ab7b438d5ff4b5192f492a11a",
"text": "1. SLICED PROGRAMMABLE NETWORKS OpenFlow [4] has been demonstrated as a way for researchers to run networking experiments in their production network. Last year, we demonstrated how an OpenFlow controller running on NOX [3] could move VMs seamlessly around an OpenFlow network [1]. While OpenFlow has potential [2] to open control of the network, only one researcher can innovate on the network at a time. What is required is a way to divide, or slice, network resources so that researchers and network administrators can use them in parallel. Network slicing implies that actions in one slice do not negatively affect other slices, even if they share the same underlying physical hardware. A common network slicing technique is VLANs. With VLANs, the administrator partitions the network by switch port and all traffic is mapped to a VLAN by input port or explicit tag. This coarse-grained type of network slicing complicates more interesting experiments such as IP mobility or wireless handover. Here, we demonstrate FlowVisor, a special purpose OpenFlow controller that allows multiple researchers to run experiments safely and independently on the same production OpenFlow network. To motivate FlowVisor’s flexibility, we demonstrate four network slices running in parallel: one slice for the production network and three slices running experimental code (Figure 1). Our demonstration runs on real network hardware deployed on our production network at Stanford and a wide-area test-bed with a mix of wired and wireless technologies.",
"title": ""
},
{
"docid": "25226432d192bf7192cf6d8dbee3cab7",
"text": "According to the distributional inclusion hypothesis, entailment between words can be measured via the feature inclusions of their distributional vectors. In recent work, we showed how this hypothesis can be extended from words to phrases and sentences in the setting of compositional distributional semantics. This paper focuses on inclusion properties of tensors; its main contribution is a theoretical and experimental analysis of how feature inclusion works in different concrete models of verb tensors. We present results for relational, Frobenius, projective, and holistic methods and compare them to the simple vector addition, multiplication, min, and max models. The degrees of entailment thus obtained are evaluated via a variety of existing wordbased measures, such as Weed’s and Clarke’s, KL-divergence, APinc, balAPinc, and two of our previously proposed metrics at the phrase/sentence level. We perform experiments on three entailment datasets, investigating which version of tensor-based composition achieves the highest performance when combined with the sentence-level measures.",
"title": ""
},
{
"docid": "af45d1bbdcbd94bbe5ae2cc0936f3650",
"text": "Rationale: The imidazopyridine hypnotic zolpidem may produce less memory and cognitive impairment than classic benzodiazepines, due to its relatively low binding affinity for the benzodiazepine receptor subtypes found in areas of the brain which are involved in learning and memory. Objectives: The study was designed to compare the acute effects of single oral doses of zolpidem (5, 10, 20 mg/70 kg) and the benzodiazepine hypnotic triazolam (0.125, 0.25, and 0.5 mg/70 kg) on specific memory and attentional processes. Methods: Drug effects on memory for target (i.e., focal) information and contextual information (i.e., peripheral details surrounding a target stimulus presentation) were evaluated using a source monitoring paradigm, and drug effects on selective attention mechanisms were evaluated using a negative priming paradigm, in 18 healthy volunteers in a double-blind, placebo-controlled, crossover design. Results: Triazolam and zolpidem produced strikingly similar dose-related effects on memory for target information. Both triazolam and zolpidem impaired subjects’ ability to remember whether a word stimulus had been presented to them on the computer screen or whether they had been asked to generate the stimulus based on an antonym cue (memory for the origin of a stimulus, which is one type of contextual information). The results suggested that triazolam, but not zolpidem, impaired memory for the screen location of picture stimuli (spatial contextual information). Although both triazolam and zolpidem increased overall reaction time in the negative priming task, only triazolam increased the magnitude of negative priming relative to placebo. Conclusions: The observed differences between triazolam and zolpidem have implications for the cognitive and pharmacological mechanisms underlying drug-induced deficits in specific memory and attentional processes, as well for the cognitive and brain mechanisms underlying these processes.",
"title": ""
},
{
"docid": "2c2dee4689e48f1a7c0061ac7d60a16b",
"text": "Transfer learning algorithms are used when one has sufficient training data for one supervised learning task (the source task) but only very limited training data for a second task (the target task) that is similar but not identical to the first. These algorithms use varying assumptions about the similarity between the tasks to carry information from the source to the target task. Common assumptions are that only certain specific marginal or conditional distributions have changed while all else remains the same. Moreover, not much work on transfer learning has considered the case when a few labels in the test domain are available. Alternatively, if one has only the target task, but also has the ability to choose a limited amount of additional training data to collect, then active learning algorithms are used to make choices which will most improve performance on the target task. These algorithms may be combined into active transfer learning, but previous efforts have had to apply the two methods in sequence or use restrictive transfer assumptions. This thesis focuses on active transfer learning under the model shift assumption. We start by proposing two transfer learning algorithms that allow changes in all marginal and conditional distributions but assume the changes are smooth in order to achieve transfer between the tasks. We then propose an active learning algorithm for the second method that yields a combined active transfer learning algorithm. By analyzing the risk bounds for the proposed transfer learning algorithms, we show that when the conditional distribution changes, we are able to obtain a generalization error bound of O(1/(λ∗√nl)) with respect to the labeled target sample size nl, modified by the smoothness of the change (λ∗) across domains. Our analysis also sheds light on conditions when transfer learning works better than no-transfer learning (learning by labeled target data only). 
Furthermore, we consider a general case where both the support and the model change across domains. We transform both X (features) and Y (labels) by a parameterized-location-scale shift to achieve transfer between tasks. On the other hand, multi-task learning attempts to simultaneously leverage data from multiple domains in order to estimate related functions on each domain. Similar to transfer learning, multi-task problems are also solved by imposing some kind of “smooth” relationship among/between tasks. We study how different smoothness assumptions on task relations affect the upper bounds of algorithms proposed for these problems under different settings. Finally, we propose methods to predict the entire distribution P (Y ) and P (Y |X) by transfer, while allowing both marginal and conditional distributions to change. Moreover, we extend this framework to multi-source distribution transfer. We demonstrate the effectiveness of our methods on both synthetic examples and real-world applications, including yield estimation on the grape image dataset, predicting air-quality from Weibo posts for cities, predicting whether a robot successfully climbs over an obstacle, examination score prediction for schools, and location prediction for taxis. Acknowledgments First and foremost, I would like to express my sincere gratitude to my advisor Jeff Schneider, who has been the biggest help during my whole PhD life. His brilliant insights have helped me formulate the problems of this thesis, brainstorm on new ideas and exciting algorithms. I have learnt many things about research from him, including how to organize ideas in a paper, how to design experiments, and how to give a good academic talk. This thesis would not have been possible without his guidance, advice, patience and encouragement. I would like to thank my thesis committee members Christos Faloutsos, Geoff Gordon and Jerry Zhu for providing great insights and feedbacks on my thesis. 
Christos has been very nice and he always finds time to talk to me even if he is very busy. Geoff has provided great insights on extending my work to classification and helped me clarify many notations/descriptions in my thesis. Jerry has been very helpful in extending my work on the text data and providing me the air quality dataset. I feel very fortunate to have them as my committee members. I would also like to thank Professor Barnabás Póczos, Professor Roman Garnett and Professor Artur Dubrawski, for providing very helpful suggestions and collaborations during my PhD. I am very grateful to many of the faculty members at Carnegie Mellon. Eric Xing’s Machine Learning course has been my introduction course for Machine Learning at Carnegie Mellon and it has taught me a lot about the foundations of machine learning, including all the inspiring machine learning algorithms and the theories behind them. Larry Wasserman’s Intermediate Statistics and Statistical Machine Learning are both wonderful courses and have been keys to my understanding of the statistical perspective of many machine learning algorithms. Geoff Gordon and Ryan Tibshirani’s Convex Optimization course has been a great tutorial for me to develop all the efficient optimizing techniques for the algorithms I have proposed. Further, I want to thank all my colleagues and friends at Carnegie Mellon, especially people from the Auton Lab and the Computer Science Department at CMU. I would like to thank Dougal Sutherland, Yifei Ma, Junier Oliva, Tzu-Kuo Huang for insightful discussions and advice for my research. I would also like to thank all my friends who have provided great support and help during my stay at Carnegie Mellon, and to name a few, Nan Li, Junchen Jiang, Guangyu Xia, Zi Yang, Yixin Luo, Lei Li, Lin Xiao, Liu Liu, Yi Zhang, Liang Xiong, Ligia Nistor, Kirthevasan Kandasamy, Madalina Fiterau, Donghan Wang, Yuandong Tian, Brian Coltin. I would also like to thank Prof. 
Alon Halevy, who has been a great mentor during my summer internship at Google Research and also has been a great help in my job-searching process. Finally, I would like to thank my family, my parents Sisi and Tiangui, for their unconditional love, endless support, and unwavering faith in me. I truly thank them for shaping who I am, for teaching me to be a person who would never lose hope and give up.",
"title": ""
},
{
"docid": "7c3457a5ca761b501054e76965b41327",
"text": "Background learning is a pre-processing step for motion detection, which is a basic step of video analysis. For static backgrounds, many previous works have already achieved good performance. However, the results on learning dynamic backgrounds still leave much to be improved. To address this challenge, in this paper, a novel and practical method is proposed based on deep auto-encoder networks. Firstly, dynamic background images are extracted through a deep auto-encoder network (called Background Extraction Network) from video frames containing moving objects. Then, a dynamic background model is learned by another deep auto-encoder network (called Background Learning Network) using the extracted background images as the input. To be more flexible, our background model can be updated on-line to absorb more training samples. Our main contributions are 1) a cascade of two deep auto-encoder networks which can deal with the separation of dynamic background and foregrounds very efficiently; 2) a method of online learning is adopted to accelerate the training of Background Extraction Network. Compared with previous algorithms, our approach obtains the best performance over six benchmark data sets. In particular, the experiments show that our algorithm can handle large background variation very well.",
"title": ""
},
{
"docid": "463c1df3306820f92be1566c03a2b0f9",
"text": "Precision and planning are key to reconstructive surgery. Augmented reality (AR) can bring the information within preoperative computed tomography angiography (CTA) imaging to life, allowing the surgeon to 'see through' the patient's skin and appreciate the underlying anatomy without making a single incision. This work has demonstrated that AR can assist the accurate identification, dissection and execution of vascular pedunculated flaps during reconstructive surgery. Separate volumes of osseous, vascular, skin, soft tissue structures and relevant vascular perforators were delineated from preoperative CTA scans to generate three-dimensional images using two complementary segmentation software packages. These were converted to polygonal models and rendered by means of a custom application within the HoloLens™ stereo head-mounted display. Intraoperatively, the models were registered manually to their respective subjects by the operating surgeon using a combination of tracked hand gestures and voice commands; AR was used to aid navigation and accurate dissection. Identification of the subsurface location of vascular perforators through AR overlay was compared to the positions obtained by audible Doppler ultrasound. Through a preliminary HoloLens-assisted case series, the operating surgeon was able to demonstrate precise and efficient localisation of perforating vessels.",
"title": ""
},
{
"docid": "ff67f2bbf20f5ad2bef6641e8e7e3deb",
"text": "An observation one can make when reviewing the literature on physical activity is that health-enhancing exercise habits tend to wear off as soon as individuals enter adolescence. Therefore, exercise habits should be promoted and preserved early in life. This article focuses on the formation of physical exercise habits. First, the literature on motivational determinants of habitual exercise and related behaviours is discussed, and the concept of habit is further explored. Based on this literature, a theoretical model of exercise habit formation is proposed. More specifically, expanding on the idea that habits are the result of automated cognitive processes, it is argued that physical exercise habits are capable of being automatically activated by the situational features that normally precede these behaviours. These habits may enhance health as a result of consistent performance over a long period of time. Subsequently, obstacles to the formation of exercise habits are discussed and interventions that may anticipate these obstacles are presented. Finally, implications for theory and practice are briefly discussed.",
"title": ""
},
{
"docid": "62773348cf1d2cda966ec63f62f93efb",
"text": "In 2003, psychology professor and sex researcher J. Michael Bailey published a book entitled The Man Who Would Be Queen: The Science of Gender-Bending and Transsexualism. The book's portrayal of male-to-female (MTF) transsexualism, based on a theory developed by sexologist Ray Blanchard, outraged some transgender activists. They believed the book to be typical of much of the biomedical literature on transsexuality-oppressive in both tone and claims, insulting to their senses of self, and damaging to their public identities. Some saw the book as especially dangerous because it claimed to be based on rigorous science, was published by an imprint of the National Academy of Sciences, and argued that MTF sex changes are motivated primarily by erotic interests and not by the problem of having the gender identity common to one sex in the body of the other. Dissatisfied with the option of merely criticizing the book, a small number of transwomen (particularly Lynn Conway, Andrea James, and Deirdre McCloskey) worked to try to ruin Bailey. Using published and unpublished sources as well as original interviews, this essay traces the history of the backlash against Bailey and his book. It also provides a thorough exegesis of the book's treatment of transsexuality and includes a comprehensive investigation of the merit of the charges made against Bailey that he had behaved unethically, immorally, and illegally in the production of his book. The essay closes with an epilogue that explores what has happened since 2003 to the central ideas and major players in the controversy.",
"title": ""
},
{
"docid": "4e2c4b8fccda7f8c9ca7ffb6ced1ae5a",
"text": "Fog/edge computing, function as a service, and programmable infrastructures, like software-defined networking or network function virtualisation, are becoming ubiquitously used in modern Information Technology infrastructures. These technologies change the characteristics and capabilities of the underlying computational substrate where services run (e.g. higher volatility, scarcer computational power, or programmability). As a consequence, the nature of the services that can be run on them changes too (smaller codebases, more fragmented state, etc.). These changes bring new requirements for service orchestrators, which need to evolve so as to support new scenarios where a close interaction between service and infrastructure becomes essential to deliver a seamless user experience. Here, we present the challenges brought forward by this new breed of technologies and where current orchestration techniques stand with regards to the new challenges. We also present a set of promising technologies that can help tame this brave new world.",
"title": ""
},
{
"docid": "981cbb9140570a6a6f3d4f4f49cd3654",
"text": "OBJECTIVES\nThe study sought to evaluate clinical outcomes in clinical practice with rhythm control versus rate control strategy for management of atrial fibrillation (AF).\n\n\nBACKGROUND\nRandomized trials have not demonstrated significant differences in stroke, heart failure, or mortality between rhythm and rate control strategies. The comparative outcomes in contemporary clinical practice are not well described.\n\n\nMETHODS\nPatients managed with a rhythm control strategy targeting maintenance of sinus rhythm were retrospectively compared with a strategy of rate control alone in an AF registry across various U.S. practice settings. Unadjusted and adjusted (inverse-propensity weighted) outcomes were estimated.\n\n\nRESULTS\nThe overall study population (N = 6,988) had a median age of 74 (65 to 81) years, 56% were males, 77% had first detected or paroxysmal AF, and 68% had CHADS2 score ≥2. In unadjusted analyses, rhythm control was associated with lower all-cause death, cardiovascular death, first stroke/non-central nervous system systemic embolization/transient ischemic attack, or first major bleeding event (all p < 0.05); no difference in new onset heart failure (p = 0.28); and more frequent cardiovascular hospitalizations (p = 0.0006). There was no difference in the incidence of pacemaker, defibrillator, or cardiac resynchronization device implantations (p = 0.99). In adjusted analyses, there were no statistical differences in clinical outcomes between rhythm control and rate control treated patients (all p > 0.05); however, rhythm control was associated with more cardiovascular hospitalizations (hazard ratio: 1.24; 95% confidence interval: 1.10 to 1.39; p = 0.0003).\n\n\nCONCLUSIONS\nAmong patients with AF, rhythm control was not superior to rate control strategy for outcomes of stroke, heart failure, or mortality, but was associated with more cardiovascular hospitalizations.",
"title": ""
},
{
"docid": "bb404a57964fcd5500006e039ba2b0dd",
"text": "The needs of the child are paramount. The clinician’s first task is to diagnose the cause of symptoms and signs whether accidental, inflicted or the result of an underlying medical condition. Where abuse is diagnosed the task is to safeguard the child and treat the physical and psychological effects of maltreatment. A child is one who has not yet reached his or her 18th birthday. Child abuse is any action by another person that causes significant harm to a child or fails to meet a basic need. It involves acts of both commission and omission with effects on the child’s physical, developmental, and psychosocial well-being. The vast majority of carers from whatever walk of life, love, nurture and protect their children. A very few, in a momentary loss of control in an otherwise caring parent, cause much regretted injury. An even smaller number repeatedly maltreat their children in what becomes a pattern of abuse. One parent may harm, the other may fail to protect by omitting to seek help. Child abuse whether physical or psychological is unlawful.",
"title": ""
}
] | scidocsrr |
c74a659d2827f50f182900e73c02ad44 | Mindfulness-based stress reduction for stress management in healthy people: a review and meta-analysis. | [
{
"docid": "b5360df245a0056de81c89945f581f14",
"text": "The inability to cope successfully with the enormous stress of medical education may lead to a cascade of consequences at both a personal and professional level. The present study examined the short-term effects of an 8-week meditation-based stress reduction intervention on premedical and medical students using a well-controlled statistical design. Findings indicate that participation in the intervention can effectively (1) reduce self-reported state and trait anxiety, (2) reduce reports of overall psychological distress including depression, (3) increase scores on overall empathy levels, and (4) increase scores on a measure of spiritual experiences assessed at termination of intervention. These results (5) replicated in the wait-list control group, (6) held across different experiments, and (7) were observed during the exam period. Future research should address potential long-term effects of mindfulness training for medical and premedical students.",
"title": ""
},
{
"docid": "6f0ffda347abfd11dc78c0b76ceb11f8",
"text": "A previous study of 22 medical patients with DSM-III-R-defined anxiety disorders showed clinically and statistically significant improvements in subjective and objective symptoms of anxiety and panic following an 8-week outpatient physician-referred group stress reduction intervention based on mindfulness meditation. Twenty subjects demonstrated significant reductions in Hamilton and Beck Anxiety and Depression scores postintervention and at 3-month follow-up. In this study, 3-year follow-up data were obtained and analyzed on 18 of the original 22 subjects to probe long-term effects. Repeated measures analysis showed maintenance of the gains obtained in the original study on the Hamilton [F(2,32) = 13.22; p < 0.001] and Beck [F(2,32) = 9.83; p < 0.001] anxiety scales as well as on their respective depression scales, on the Hamilton panic score, the number and severity of panic attacks, and on the Mobility Index-Accompanied and the Fear Survey. A 3-year follow-up comparison of this cohort with a larger group of subjects from the intervention who had met criteria for screening for the original study suggests generalizability of the results obtained with the smaller, more intensively studied cohort. Ongoing compliance with the meditation practice was also demonstrated in the majority of subjects at 3 years. We conclude that an intensive but time-limited group stress reduction intervention based on mindfulness meditation can have long-term beneficial effects in the treatment of people diagnosed with anxiety disorders.",
"title": ""
},
{
"docid": "58359b7b3198504fa2475cc0f20ccc2d",
"text": "OBJECTIVES\nTo review and synthesize the state of research on a variety of meditation practices, including: the specific meditation practices examined; the research designs employed and the conditions and outcomes examined; the efficacy and effectiveness of different meditation practices for the three most studied conditions; the role of effect modifiers on outcomes; and the effects of meditation on physiological and neuropsychological outcomes.\n\n\nDATA SOURCES\nComprehensive searches were conducted in 17 electronic databases of medical and psychological literature up to September 2005. Other sources of potentially relevant studies included hand searches, reference tracking, contact with experts, and gray literature searches.\n\n\nREVIEW METHODS\nA Delphi method was used to develop a set of parameters to describe meditation practices. Included studies were comparative, on any meditation practice, had more than 10 adult participants, provided quantitative data on health-related outcomes, and published in English. Two independent reviewers assessed study relevance, extracted the data and assessed the methodological quality of the studies.\n\n\nRESULTS\nFive broad categories of meditation practices were identified (Mantra meditation, Mindfulness meditation, Yoga, Tai Chi, and Qi Gong). Characterization of the universal or supplemental components of meditation practices was precluded by the theoretical and terminological heterogeneity among practices. Evidence on the state of research in meditation practices was provided in 813 predominantly poor-quality studies. The three most studied conditions were hypertension, other cardiovascular diseases, and substance abuse. Sixty-five intervention studies examined the therapeutic effect of meditation practices for these conditions. Meta-analyses based on low-quality studies and small numbers of hypertensive participants showed that TM(R), Qi Gong and Zen Buddhist meditation significantly reduced blood pressure. Yoga helped reduce stress. Yoga was no better than Mindfulness-based Stress Reduction at reducing anxiety in patients with cardiovascular diseases. No results from substance abuse studies could be combined. The role of effect modifiers in meditation practices has been neglected in the scientific literature. The physiological and neuropsychological effects of meditation practices have been evaluated in 312 poor-quality studies. Meta-analyses of results from 55 studies indicated that some meditation practices produced significant changes in healthy participants.\n\n\nCONCLUSIONS\nMany uncertainties surround the practice of meditation. Scientific research on meditation practices does not appear to have a common theoretical perspective and is characterized by poor methodological quality. Firm conclusions on the effects of meditation practices in healthcare cannot be drawn based on the available evidence. Future research on meditation practices must be more rigorous in the design and execution of studies and in the analysis and reporting of results.",
"title": ""
}
] | [
{
"docid": "ca6e39436be1b44ab0e20e0024cd0bbe",
"text": "This paper introduces a new approach, named micro-crowdfunding, for motivating people to participate in achieving a sustainable society. Increasing people's awareness of how they participate in maintaining the sustainability of common resources, such as public sinks, toilets, shelves, and office areas, is central to achieving a sustainable society. Micro-crowdfunding, as proposed in the paper, is a new type of community-based crowdsourcing architecture that is based on the crowdfunding concept and uses the local currency idea as a tool for encouraging people who live in urban environments to increase their awareness of how important it is to sustain small, common resources through their minimum efforts. Because our approach is lightweight and uses a mobile phone, people can participate in micro-crowdfunding activities with little effort anytime and anywhere.\n We present the basic concept of micro-crowdfunding and a prototype system. We also describe our experimental results, which show how economic and social factors are effective in facilitating micro-crowdfunding. Our results show that micro-crowdfunding increases the awareness about social sustainability, and we believe that micro-crowdfunding makes it possible to motivate people for achieving a sustainable society.",
"title": ""
},
{
"docid": "d0ec144c5239b532987157a64d499f61",
"text": "(1) Disregard pseudo-queries that do not retrieve their pseudo-relevant document in the top nrank. (2) Select the top nneg retrieved documents as negative training examples. General Approach: Generate mock interaction embeddings and filter training examples down to those that most nearly match a set of template query-document pairs (given a distance function). Since interaction embeddings are specific to what a model “sees,” interaction filters are model-specific.",
"title": ""
},
{
"docid": "75d9b0e67b57a8be7675854b19b50915",
"text": "In the paper, we describe an analysis of a Vivaldi antenna array aimed at microwave imaging applications and SAR applications operating at Ka band. The antenna array is fed by a SIW feed network for its low insertion loss and broadband performance in the millimeter wave range. In our proposal we have replaced the large feed network by a simple, relatively broadband network of compact size to reduce the losses in the substrate integrated waveguide (SIW) and save space on the PCB. The feed network is an 8-way power divider fed by a wideband SIW-GCPW transition and directly connected to the antenna elements. The final antenna array will be designed and fabricated, and the obtained measured results will be compared with numerical ones.",
"title": ""
},
{
"docid": "108e4cc0358076fac20d7f9395c9f1e3",
"text": "This paper presents a novel algorithm aiming at analysis and identification of faces viewed from different poses and illumination conditions. Face analysis from a single image is performed by recovering the shape and texture parameters of a 3D Morphable Model in an analysis-by-synthesis fashion. The shape parameters are computed from a shape error estimated by optical flow and the texture parameters are obtained from a texture error. The algorithm uses linear equations to recover the shape and texture parameters irrespective of pose and lighting conditions of the face image. Identification experiments are reported on more than 5000 images from the publicly available CMU-PIE database which includes faces viewed from 13 different poses and under 22 different illuminations. Extensive identification results are available on our web page for future comparison with novel algorithms.",
"title": ""
},
{
"docid": "cb4518f95b82e553b698ae136362bd59",
"text": "Optimal control theory is a mature mathematical discipline with numerous applications in both science and engineering. It is emerging as the computational framework of choice for studying the neural control of movement, in much the same way that probabilistic inference is emerging as the computational framework of choice for studying sensory information processing. Despite the growing popularity of optimal control models, however, the elaborate mathematical machinery behind them is rarely exposed and the big picture is hard to grasp without reading a few technical books on the subject. While this chapter cannot replace such books, it aims to provide a self-contained mathematical introduction to optimal control theory that is sufficiently broad and yet sufficiently detailed when it comes to key concepts. The text is not tailored to the field of motor control (apart from the last section, and the overall emphasis on systems with continuous state) so it will hopefully be of interest to a wider audience. Of special interest in the context of this book is the material on the duality of optimal control and probabilistic inference; such duality suggests that neural information processing in sensory and motor areas may be more similar than currently thought. The chapter is organized in the following sections:",
"title": ""
},
{
"docid": "919d1554ac7d18d5cb765c0ee808d3a6",
"text": "Pythium species were isolated from seedlings of strawberry with root and crown rot. The isolates were identified as P. helicoides on the basis of morphological characteristics and sequences of the ribosomal DNA internal transcribed spacer regions. In pathogenicity tests, the isolates caused root and crown rot similar to the original disease symptoms. Multiplex PCR was used to survey pathogen occurrence in strawberry production areas of Japan. Pythium helicoides was detected in 11 of 82 fields. The pathogen is distributed over six prefectures.",
"title": ""
},
{
"docid": "71b9722200c92901d8ec3c7e6195c931",
"text": "Intrusive multi-step attacks, such as Advanced Persistent Threat (APT) attacks, have plagued enterprises with significant financial losses and are the top reason for enterprises to increase their security budgets. Since these attacks are sophisticated and stealthy, they can remain undetected for years if individual steps are buried in background \"noise.\" Thus, enterprises are seeking solutions to \"connect the suspicious dots\" across multiple activities. This requires ubiquitous system auditing for long periods of time, which in turn causes overwhelmingly large amount of system audit events. Given a limited system budget, how to efficiently handle ever-increasing system audit logs is a great challenge. This paper proposes a new approach that exploits the dependency among system events to reduce the number of log entries while still supporting high-quality forensic analysis. In particular, we first propose an aggregation algorithm that preserves the dependency of events during data reduction to ensure the high quality of forensic analysis. Then we propose an aggressive reduction algorithm and exploit domain knowledge for further data reduction. To validate the efficacy of our proposed approach, we conduct a comprehensive evaluation on real-world auditing systems using log traces of more than one month. Our evaluation results demonstrate that our approach can significantly reduce the size of system logs and improve the efficiency of forensic analysis without losing accuracy.",
"title": ""
},
{
"docid": "d2b5f28a7f32de167ec4c907472af90b",
"text": "Brain-computer interfacing (BCI) is a steadily growing area of research. While initially BCI research was focused on applications for paralyzed patients, increasingly more alternative applications in healthy human subjects are proposed and investigated. In particular, monitoring of mental states and decoding of covert user states have seen a strong rise of interest. Here, we present some examples of such novel applications which provide evidence for the promising potential of BCI technology for non-medical uses. Furthermore, we discuss distinct methodological improvements required to bring non-medical applications of BCI technology to a diversity of layperson target groups, e.g., ease of use, minimal training, general usability, short control latencies.",
"title": ""
},
{
"docid": "fdc18ccdccefc1fd9c3f79daf549f015",
"text": "An overview of the current design practices in the field of Renewable Energy (RE) is presented; the paper also delineates the background to the development of unique and novel techniques for power generation using the kinetic energy of tidal streams and other marine currents. This study focuses only on vertical-axis tidal turbines. Tidal stream devices have been developed as an alternative method of extracting the energy from the tides. This form of tidal power technology poses less threat to the environment and does not face the same limiting factors associated with tidal barrage schemes, therefore making it a more feasible method of electricity production. Large companies are taking interest in this new source of power. There is a rush to research and work with this new energy source. Marine scientists are looking into how much these devices will affect the environment, while engineers are developing turbines that are harmless for the environment. In addition, the progression of technological advancements tracing several decades of R & D efforts on vertical axis turbines is highlighted.",
"title": ""
},
{
"docid": "44a5ea6fee136e66e1d89fb681f84805",
"text": "The content of images users post to their social media is driven in part by personality. In this study, we analyze how Twitter profile images vary with the personality of the users posting them. In our main analysis, we use profile images from over 66,000 users whose personality we estimate based on their tweets. To facilitate interpretability, we focus our analysis on aesthetic and facial features and control for demographic variation in image features and personality. Our results show significant differences in profile picture choice between personality traits, and that these can be harnessed to predict personality traits with robust accuracy. For example, agreeable and conscientious users display more positive emotions in their profile pictures, while users high in openness prefer more aesthetic photos.",
"title": ""
},
{
"docid": "9a1665cff530d93c84598e7df947099f",
"text": "The algorithmic Markov condition states that the most likely causal direction between two random variables X and Y can be identified as the direction with the lowest Kolmogorov complexity. This notion is very powerful as it can detect any causal dependency that can be explained by a physical process. However, due to the halting problem, it is also not computable. In this paper we propose an computable instantiation that provably maintains the key aspects of the ideal. We propose to approximate Kolmogorov complexity via the Minimum Description Length (MDL) principle, using a score that is mini-max optimal with regard to the model class under consideration. This means that even in an adversarial setting, the score degrades gracefully, and we are still maximally able to detect dependencies between the marginal and the conditional distribution. As a proof of concept, we propose CISC, a linear-time algorithm for causal inference by stochastic complexity, for pairs of univariate discrete variables. Experiments show that CISC is highly accurate on synthetic, benchmark, as well as real-world data, outperforming the state of the art by a margin, and scales extremely well with regard to sample and domain sizes.",
"title": ""
},
{
"docid": "2eefc7adc055f4fc1013199c38b0b91c",
"text": "Parametric methods are commonly used despite evidence that model assumptions are often violated. Various statistical procedures have been suggested for analyzing data from multiple-group repeated measures (i.e., split-plot) designs when parametric model assumptions are violated (e.g., Akritas and Arnold (J. Amer. Statist. Assoc. 89 (1994) 336); Brunner and Langer (Biometrical J. 42 (2000) 663)), including the use of Friedman ranks. The effects of Friedman ranking on data and the resultant test statistics for single sample repeated measures designs have been examined (e.g., Harwell and Serlin (Comput. Statist. Data Anal. 17 (1994) 35; Comm. Statist. Simulation Comput. 26 (1997) 605); Zimmerman and Zumbo (J. Experiment. Educ. 62 (1993) 75)). However, there have been fewer investigations concerning Friedman ranks applied to multiple groups of repeated measures data (e.g., Beasley (J. Educ. Behav. Statist. 25 (2000) 20); Rasmussen (British J. Math. Statist. Psych. 42 (1989) 91)). We investigate the use of Friedman ranks for testing the interaction in a split-plot design as a robust alternative to parametric procedures. We demonstrated that the presence of a repeated measures main effect may reduce the power of interaction tests performed on Friedman ranks. Aligning the data before applying Friedman ranks was shown to produce more statistical power than simply analyzing Friedman ranks. Results from a simulation study showed that aligning the data (i.e., removing main effects) before applying Friedman ranks and then performing either a univariate or multivariate test can provide more statistical power than parametric tests if the error distributions are skewed. © 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "3dfe5099c72f3ef3341c2d053ee0d2c2",
"text": "In this paper, the authors introduce a type of transverse flux reluctance machines. These machines work without permanent magnets or electric rotor excitation and hold several advantages, including a high power density, high torque, and compact design. Disadvantages are a high fundamental frequency and a high torque ripple that complicates the control of the motor. The device uses soft magnetic composites (SMCs) for the magnetic circuit, which allows complex stator geometries with 3-D magnetic flux paths. The winding is made from hollow copper tubes, which also form the main heat sink of the machine by using water as a direct copper coolant. Models concerning the design and computation of the magnetic circuit, torque, and the power output are described. A crucial point in this paper is the determination of hysteresis and eddy-current losses in the SMC and the calculation of power losses and current displacement in the copper winding. These are calculated with models utilizing a combination of analytic approaches and finite-element method simulations. Finally, a thermal model based on lumped parameters is introduced, and calculated temperature rises are presented.",
"title": ""
},
{
"docid": "b0fcde53d86560ce4d97145d2de2632d",
"text": "Silicon carbide (SiC) power devices have been investigated extensively in the past two decades, and there are many devices commercially available now. Owing to the intrinsic material advantages of SiC over silicon (Si), SiC power devices can operate at higher voltage, higher switching frequency, and higher temperature. This paper reviews the technology progress of SiC power devices and their emerging applications. The design challenges and future trends are summarized at the end of the paper.",
"title": ""
},
{
"docid": "279268e31da13abeed25b78062a71907",
"text": "Ridesharing platforms match drivers and riders to trips, using dynamic prices to balance supply and demand. A challenge is to set prices that are appropriately smooth in space and time, so that drivers will choose to accept their dispatched trips, rather than drive to another area or wait for higher prices or a better trip. We work in a complete information, discrete time, multiperiod, multi-location model, and introduce the Spatio-Temporal Pricing (STP) mechanism. The mechanism is incentive-aligned, in that it is a subgame-perfect equilibrium for drivers to accept their dispatches. The mechanism is also welfare-optimal, envy-free, individually rational, budget balanced and core-selecting from any history onward. The proof of incentive alignment makes use of the M ♮ concavity of min-cost flow objectives. We also give an impossibility result, that there can be no dominant-strategy mechanism with the same economic properties. An empirical analysis conducted in simulation suggests that the STP mechanism can achieve significantly higher social welfare than a myopic pricing mechanism.",
"title": ""
},
{
"docid": "1c7ca008292880e6f698d281a1f3d747",
"text": "Experimental evidence has pointed toward a negative effect of violent video games on social behavior. Given that the availability and presence of video games is pervasive, negative effects from playing them have potentially large implications for public policy. It is, therefore, important that violent video game effects are thoroughly and experimentally explored, with the current experiment focusing on prosocial behavior. 120 undergraduate volunteers (Mage = 19.01, 87.5% male) played an ultra-violent, violent, or non-violent video game and were then assessed on two distinct measures of prosocial behavior: how much they donated to a charity and how difficult they set a task for an ostensible participant. It was hypothesized that participants playing the ultra-violent games would show the least prosocial behavior and those playing the non-violent game would show the most. These hypotheses were not supported, with participants responding in similar ways, regardless of the type of game played. While null effects are difficult to interpret, samples of this nature (undergraduate volunteers, high male skew) may be problematic, and participants were possibly sensitive to the hypothesis at some level, this experiment adds to the growing body of evidence suggesting that violent video game effects are less clear than initially",
"title": ""
},
{
"docid": "109a84ad1c1a541e2a0b4972b21caca2",
"text": "Our brain is a network. It consists of spatially distributed, but functionally linked regions that continuously share information with each other. Interestingly, recent advances in the acquisition and analysis of functional neuroimaging data have catalyzed the exploration of functional connectivity in the human brain. Functional connectivity is defined as the temporal dependency of neuronal activation patterns of anatomically separated brain regions and in the past years an increasing body of neuroimaging studies has started to explore functional connectivity by measuring the level of co-activation of resting-state fMRI time-series between brain regions. These studies have revealed interesting new findings about the functional connections of specific brain regions and local networks, as well as important new insights in the overall organization of functional communication in the brain network. Here we present an overview of these new methods and discuss how they have led to new insights in core aspects of the human brain, providing an overview of these novel imaging techniques and their implication to neuroscience. We discuss the use of spontaneous resting-state fMRI in determining functional connectivity, discuss suggested origins of these signals, how functional connections tend to be related to structural connections in the brain network and how functional brain communication may form a key role in cognitive performance. Furthermore, we will discuss the upcoming field of examining functional connectivity patterns using graph theory, focusing on the overall organization of the functional brain network. Specifically, we will discuss the value of these new functional connectivity tools in examining believed connectivity diseases, like Alzheimer's disease, dementia, schizophrenia and multiple sclerosis.",
"title": ""
},
{
"docid": "06909d0ffbc52e14e0f6f1c9ffe29147",
"text": "DistributedLog is a high performance, strictly ordered, durably replicated log. It is multi-tenant, designed with a layered architecture that allows reads and writes to be scaled independently and supports OLTP, stream processing and batch workloads. It also supports a globally synchronous consistent replicated log spanning multiple geographically separated regions. This paper describes how DistributedLog is structured, its components and the rationale underlying various design decisions. We have been using DistributedLog in production for several years, supporting applications ranging from transactional database journaling, real-time data ingestion, and analytics to general publish-subscribe messaging.",
"title": ""
},
{
"docid": "9f6ab40fb1f1c331e72b275e3cf614e3",
"text": "The Internet of things (IoT) is still in its infancy and has attracted much interest in many industrial sectors including medical fields, logistics tracking, smart cities and automobiles. However as a paradigm, it is susceptible to a range of significant intrusion threats. This paper presents a threat analysis of the IoT and uses an Artificial Neural Network (ANN) to combat these threats. A multi-level perceptron, a type of supervised ANN, is trained using internet packet traces, then is assessed on its ability to thwart Distributed Denial of Service (DDoS/DoS) attacks. This paper focuses on the classification of normal and threat patterns on an IoT Network. The ANN procedure is validated against a simulated IoT network. The experimental results demonstrate 99.4% accuracy and can successfully detect various DDoS/DoS attacks.",
"title": ""
}
] | scidocsrr |
cfce53af4a6921ef254a17c119cbedf0 | Extending the road beyond CMOS - IEEE Circuits and Devices Magazine | [
{
"docid": "5706ae68d5e2b56679e0c89361fcc8b8",
"text": "Quantum computers promise to exceed the computational efficiency of ordinary classical machines because quantum algorithms allow the execution of certain tasks in fewer steps. But practical implementation of these machines poses a formidable challenge. Here I present a scheme for implementing a quantum-mechanical computer. Information is encoded onto the nuclear spins of donor atoms in doped silicon electronic devices. Logical operations on individual spins are performed using externally applied electric fields, and spin measurements are made using currents of spin-polarized electrons. The realization of such a computer is dependent on future refinements of conventional silicon electronics.",
"title": ""
}
] | [
{
"docid": "991e2e65cb6b47d8355e14d674272f2d",
"text": "In this paper, we develop a cooperative mechanism, RELICS, to combat selfishness in DTNs. In DTNs, nodes belong to self-interested individuals. A node may be selfish in expending resources, such as energy, on forwarding messages from others, unless offered incentives. We devise a rewarding scheme that provides incentives to nodes in a physically realizable way in that the rewards are reflected into network operation. We call it in-network realization of incentives. We introduce explicit ranking of nodes depending on their transit behavior, and translate those ranks into message priority. Selfishness drives each node to set its energy depletion rate as low as possible while maintaining its own delivery ratio above some threshold. We show that our cooperative mechanism compels nodes to cooperate and also achieves higher energy-economy compared to other previous results.",
"title": ""
},
{
"docid": "1c6677209ac3c37e4ac84b153321ab7c",
"text": "BACKGROUND\nAsthma guidelines indicate that the goal of treatment should be optimum asthma control. In a busy clinic practice with limited time and resources, there is need for a simple method for assessing asthma control with or without lung function testing.\n\n\nOBJECTIVES\nThe objective of this article was to describe the development of the Asthma Control Test (ACT), a patient-based tool for identifying patients with poorly controlled asthma.\n\n\nMETHODS\nA 22-item survey was administered to 471 patients with asthma in the offices of asthma specialists. The specialist's rating of asthma control after spirometry was also collected. Stepwise regression methods were used to select a subset of items that showed the greatest discriminant validity in relation to the specialist's rating of asthma control. Internal consistency reliability was computed, and discriminant validity tests were conducted for ACT scale scores. The performance of ACT was investigated by using logistic regression methods and receiver operating characteristic analyses.\n\n\nRESULTS\nFive items were selected from regression analyses. The internal consistency reliability of the 5-item ACT scale was 0.84. ACT scale scores discriminated between groups of patients differing in the specialist's rating of asthma control (F = 34.5, P <.00001), the need for change in patient's therapy (F = 40.3, P <.00001), and percent predicted FEV(1) (F = 4.3, P =.0052). As a screening tool, the overall agreement between ACT and the specialist's rating ranged from 71% to 78% depending on the cut points used, and the area under the receiver operating characteristic curve was 0.77.\n\n\nCONCLUSION\nResults reinforce the usefulness of a brief, easy to administer, patient-based index of asthma control.",
"title": ""
},
{
"docid": "486bd67781bb1067aa4ff6009cdeecb7",
"text": "BACKGROUND\nThere was less than satisfactory progress, especially in sub-Saharan Africa, towards child and maternal mortality targets of Millennium Development Goals (MDGs) 4 and 5. The main aim of this study was to describe the prevalence and determinants of essential new newborn care practices in the Lawra District of Ghana.\n\n\nMETHODS\nA cross-sectional study was carried out in June 2014 on a sample of 422 lactating mothers and their children aged between 1 and 12 months. A systematic random sampling technique was used to select the study participants who attended post-natal clinic in the Lawra district hospital.\n\n\nRESULTS\nOf the 418 newborns, only 36.8% (154) was judged to have had safe cord care, 34.9% (146) optimal thermal care, and 73.7% (308) were considered to have had adequate neonatal feeding. The overall prevalence of adequate new born care comprising good cord care, optimal thermal care and good neonatal feeding practices was only 15.8%. Mothers who attained at least Senior High Secondary School were 20.5 times more likely to provide optimal thermal care [AOR 22.54; 95% CI (2.60-162.12)], compared to women had no formal education at all. Women who received adequate ANC services were 4.0 times (AOR = 4.04 [CI: 1.53, 10.66]) and 1.9 times (AOR = 1.90 [CI: 1.01, 3.61]) more likely to provide safe cord care and good neonatal feeding as compared to their counterparts who did not get adequate ANC. However, adequate ANC services was unrelated to optimal thermal care. Compared to women who delivered at home, women who delivered their index baby in a health facility were 5.6 times more likely of having safe cord care for their babies (AOR = 5.60, Cl: 1.19-23.30), p = 0.03.\n\n\nCONCLUSIONS\nThe coverage of essential newborn care practices was generally low. 
Essential newborn care practices were positively associated with high maternal educational attainment, adequate utilization of antenatal care services and high maternal knowledge of newborn danger signs. Therefore, greater improvement in essential newborn care practices could be attained through proven low-cost interventions such as effective ANC services, health and nutrition education that should span from community to health facility levels.",
"title": ""
},
{
"docid": "a53225746b2b6dba6078a998031c2af6",
"text": "Decision Tree induction is commonly used classification algorithm. One of the important problems is how to use records with unknown values from training as well as testing data. Many approaches have been proposed to address the impact of unknown values at training on accuracy of prediction. However, very few techniques are there to address the problem in testing data. In our earlier work, we discussed and summarized these strategies in details. In Lazy Decision Tree, the problem of unknown attribute values in test instance is completely eliminated by delaying the construction of tree till the classification time and using only known attributes for classification. In this paper we present novel algorithm ‘Eager Decision Tree’ which constructs a single prediction model at the time of training which considers all possibilities of unknown attribute values from testing data. It naturally removes the problem of handing unknown values in testing data in Decision Tree induction like Lazy Decision Tree.",
"title": ""
},
{
"docid": "c9171bf5a2638b35ff7dc9c8e6104d30",
"text": "Dimensionality reduction is an important aspect in the pattern classification literature, and linear discriminant analysis (LDA) is one of the most widely studied dimensionality reduction technique. The application of variants of LDA technique for solving small sample size (SSS) problem can be found in many research areas e.g. face recognition, bioinformatics, text recognition, etc. The improvement of the performance of variants of LDA technique has great potential in various fields of research. In this paper, we present an overview of these methods. We covered the type, characteristics and taxonomy of these methods which can overcome SSS problem. We have also highlighted some important datasets and software/ packages.",
"title": ""
},
{
"docid": "ef3b9dd6b463940bc57cdf7605c24b1e",
"text": "With the rapid development of cloud storage, data security in storage receives great attention and becomes the top concern to block the spread development of cloud service. In this paper, we systematically study the security researches in the storage systems. We first present the design criteria that are used to evaluate a secure storage system and summarize the widely adopted key technologies. Then, we further investigate the security research in cloud storage and conclude the new challenges in the cloud environment. Finally, we give a detailed comparison among the selected secure storage systems and draw the relationship between the key technologies and the design criteria.",
"title": ""
},
{
"docid": "06326f180f768b01e13d764c1171bdf3",
"text": "Recent advances in far-field fluorescence microscopy have led to substantial improvements in image resolution, achieving a near-molecular resolution of 20 to 30 nanometers in the two lateral dimensions. Three-dimensional (3D) nanoscale-resolution imaging, however, remains a challenge. We demonstrated 3D stochastic optical reconstruction microscopy (STORM) by using optical astigmatism to determine both axial and lateral positions of individual fluorophores with nanometer accuracy. Iterative, stochastic activation of photoswitchable probes enables high-precision 3D localization of each probe, and thus the construction of a 3D image, without scanning the sample. Using this approach, we achieved an image resolution of 20 to 30 nanometers in the lateral dimensions and 50 to 60 nanometers in the axial dimension. This development allowed us to resolve the 3D morphology of nanoscopic cellular structures.",
"title": ""
},
{
"docid": "bce0f6f9ca0697cb85bd07a118598aea",
"text": "The theory of embodied cognition can provide HCI practitioners and theorists with new ideas about interaction and new principles for better designs. I support this claim with four ideas about cognition: (1) interacting with tools changes the way we think and perceive -- tools, when manipulated, are soon absorbed into the body schema, and this absorption leads to fundamental changes in the way we perceive and conceive of our environments; (2) we think with our bodies not just with our brains; (3) we know more by doing than by seeing -- there are times when physically performing an activity is better than watching someone else perform the activity, even though our motor resonance system fires strongly during other person observation; (4) there are times when we literally think with things. These four ideas have major implications for interaction design, especially the design of tangible, physical, context aware, and telepresence systems.",
"title": ""
},
{
"docid": "fb5a3c43655886c0387e63cd02fccd50",
"text": "Android is the most widely used smartphone OS with 82.8% market share in 2015 (IDC, 2015). It is therefore the most widely targeted system by malware authors. Researchers rely on dynamic analysis to extract malware behaviors and often use emulators to do so. However, using emulators lead to new issues. Malware may detect emulation and as a result it does not execute the payload to prevent the analysis. Dealing with virtual device evasion is a never-ending war and comes with a non-negligible computation cost (Lindorfer et al., 2014). To overcome this state of affairs, we propose a system that does not use virtual devices for analysing malware behavior. Glassbox is a functional prototype for the dynamic analysis of malware applications. It executes applications on real devices in a monitored and controlled environment. It is a fully automated system that installs, tests and extracts features from the application for further analysis. We present the architecture of the platform and we compare it with existing Android dynamic analysis platforms. Lastly, we evaluate the capacity of Glassbox to trigger application behaviors by measuring the average coverage of basic blocks on the AndroCoverage dataset (AndroCoverage, 2016). We show that it executes on average 13.52% more basic blocks than the Monkey program.",
"title": ""
},
{
"docid": "2f7d487059a77b582c3e0a33fd5d38af",
"text": "Disturbance regimes are changing rapidly, and the consequences of such changes for ecosystems and linked social-ecological systems will be profound. This paper synthesizes current understanding of disturbance with an emphasis on fundamental contributions to contemporary landscape and ecosystem ecology, then identifies future research priorities. Studies of disturbance led to insights about heterogeneity, scale, and thresholds in space and time and catalyzed new paradigms in ecology. Because they create vegetation patterns, disturbances also establish spatial patterns of many ecosystem processes on the landscape. Drivers of global change will produce new spatial patterns, altered disturbance regimes, novel trajectories of change, and surprises. Future disturbances will continue to provide valuable opportunities for studying pattern-process interactions. Changing disturbance regimes will produce acute changes in ecosystems and ecosystem services over the short (years to decades) and long-term (centuries and beyond). Future research should address questions related to (1) disturbances as catalysts of rapid ecological change, (2) interactions among disturbances, (3) relationships between disturbance and society, especially the intersection of land use and disturbance, and (4) feedbacks from disturbance to other global drivers. Ecologists should make a renewed and concerted effort to understand and anticipate the causes and consequences of changing disturbance regimes.",
"title": ""
},
{
"docid": "9ca12c5f314d077093753dc0f3ff9cd5",
"text": "We introduce a general-purpose conditioning method for neural networks called FiLM: Feature-wise Linear Modulation. FiLM layers influence neural network computation via a simple, feature-wise affine transformation based on conditioning information. We show that FiLM layers are highly effective for visual reasoning — answering image-related questions which require a multi-step, high-level process — a task which has proven difficult for standard deep learning methods that do not explicitly model reasoning. Specifically, we show on visual reasoning tasks that FiLM layers 1) halve state-of-theart error for the CLEVR benchmark, 2) modulate features in a coherent manner, 3) are robust to ablations and architectural modifications, and 4) generalize well to challenging, new data from few examples or even zero-shot.",
"title": ""
},
{
"docid": "ccd7e49646f1ef1d31f033f84c63c6e6",
"text": "Language modeling is a prototypical unsupervised task of natural language processing (NLP). It has triggered the developments of essential bricks of models used in speech recognition, translation or summarization. More recently, language modeling has been shown to give a sensible loss function for learning high-quality unsupervised representations in tasks like text classification (Howard & Ruder, 2018), sentiment detection (Radford et al., 2017) or word vector learning (Peters et al., 2018) and there is thus a revived interest in developing better language models. More generally, improvement in sequential prediction models are believed to be beneficial for a wide range of applications like model-based planning or reinforcement learning whose models have to encode some form of memory.",
"title": ""
},
{
"docid": "14276adf4f5b3538f95cfd10902825ef",
"text": "Subband adaptive filtering (SAF) techniques play a prominent role in designing active noise control (ANC) systems. They reduce the computational complexity of ANC algorithms, particularly, when the acoustic noise is a broadband signal and the system models have long impulse responses. In the commonly used uniform-discrete Fourier transform (DFT)-modulated (UDFTM) filter banks, increasing the number of subbands decreases the computational burden but can introduce excessive distortion, degrading performance of the ANC system. In this paper, we propose a new UDFTM-based adaptive subband filtering method that alleviates the degrading effects of the delay and side-lobe distortion introduced by the prototype filter on the system performance. The delay in filter bank is reduced by prototype filter design and the side-lobe distortion is compensated for by oversampling and appropriate stacking of subband weights. Experimental results show the improvement of performance and computational complexity of the proposed method in comparison to two commonly used subband and block adaptive filtering algorithms.",
"title": ""
},
{
"docid": "ceaa0ceb14034ecc2840425a627a3c71",
"text": "In this article, we present a novel class of robots that are able to move by growing and building their own structure. In particular, taking inspiration by the growing abilities of plant roots, we designed and developed a plant root-like robot that creates its body through an additive manufacturing process. Each robotic root includes a tubular body, a growing head, and a sensorized tip that commands the robot behaviors. The growing head is a customized three-dimensional (3D) printer-like system that builds the tubular body of the root in the format of circular layers by fusing and depositing a thermoplastic material (i.e., polylactic acid [PLA] filament) at the tip level, thus obtaining movement by growing. A differential deposition of the material can create an asymmetry that results in curvature of the built structure, providing the possibility of root bending to follow or escape from a stimulus or to reach a desired point in space. Taking advantage of these characteristics, the robotic roots are able to move inside a medium by growing their body. In this article, we describe the design of the growing robot together with the modeling of the deposition process and the description of the implemented growing movement strategy. Experiments were performed in air and in an artificial medium to verify the functionalities and to evaluate the robot performance. The results showed that the robotic root, with a diameter of 50 mm, grows with a speed of up to 4 mm/min, overcoming medium pressure of up to 37 kPa (i.e., it is able to lift up to 6 kg) and bending with a minimum radius of 100 mm.",
"title": ""
},
{
"docid": "26dc59c30371f1d0b2ff2e62a96f9b0f",
"text": "Hindi is very complex language with large number of phonemes and being used with various ascents in different regions in India. In this manuscript, speaker dependent and independent isolated Hindi word recognizers using the Hidden Markov Model (HMM) is implemented, under noisy environment. For this study, a set of 10 Hindi names has been chosen as a test set for which the training and testing is performed. The scheme instigated here implements the Mel Frequency Cepstral Coefficients (MFCC) in order to compute the acoustic features of the speech signal. Then, K-means algorithm is used for the codebook generation by performing clustering over the obtained feature space. Baum Welch algorithm is used for re-estimating the parameters, and finally for deciding the recognized Hindi word whose model likelihood is highest, Viterbi algorithm has been implemented; for the given HMM. This work resulted in successful recognition with 98. 6% recognition rate for speaker dependent recognition, for total of 10 speakers (6 male, 4 female) and 97. 5% for speaker independent isolated word recognizer for 10 speakers (male).",
"title": ""
},
{
"docid": "e252e35a2869cdd5c06d8ba31a525f6a",
"text": "The conventional border patrol systems suffer from intensive human involvement. Recently, unmanned border patrol systems employ high-tech devices, such as unmanned aerial vehicles, unattended ground sensors, and surveillance towers equipped with camera sensors. However, any single technique encounters inextricable problems, such as high false alarm rate and line-of-sight-constraints. There lacks a coherent system that coordinates various technologies to improve the system accuracy. In this paper, the concept of BorderSense, a hybrid wireless sensor network architecture for border patrol systems, is introduced. BorderSense utilizes the most advanced sensor network technologies, including the wireless multimedia sensor networks and the wireless underground sensor networks. The framework to deploy and operate BorderSense is developed. Based on the framework, research challenges and open research issues are discussed. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "176d1eeb8dd1e366431d8ad4bb7734a1",
"text": "Online, reverse auctions are increasingly being utilized in industrial sourcing activities. This phenomenon represents a novel, emerging area of inquiry with significant implications for sourcing strategies. However, there is little systematic thinking or empirical evidence on the topic. In this paper, the use of these auctions in sourcing activities is reviewed and four key aspects are highlighted: (i) the differences from physical auctions or those of the theoretical literature, (ii) the conditions for using online, reverse auctions, (iii) methods for structuring the auctions, and (iv) evaluations of auction performance. Some empirical evidence on these issues is also provided. ONLINE, REVERSE AUCTIONS: ISSUES, THEMES, AND PROSPECTS FOR THE FUTURE INTRODUCTION For nearly the past decade, managers, analysts, researchers, and the business press have been remarking that, “The Internet will change everything.” And since the advent of the Internet, we have seen it challenge nearly every aspect of marketing practice. This raises the obligation to consider the consequences of the Internet to management practices, the theme of this special issue. Yet, it may take decades to fully understand the impact of the Internet on marketing practice, in general. This paper is one step in that direction. Specifically, I consider the impact of the Internet in a business-to-business context, the sourcing of direct and indirect materials from a supply base. It has been predicted that the Internet will bring about $1 trillion in efficiencies to the annual $7 trillion that is spent on the procurement of goods and services worldwide (USA Today, 2/7/00, B1). How and when this will happen remains an open question. However, one trend that is showing increasing promise is the use of online, reverse auctions. Virtually every major industry has begun to use and adopt these auctions on a regular basis (Smith 2002). 
During the late 1990s, slow-growth, manufacturing firms such as Boeing, SPX/Eaton, United Technologies, and branches of the United States military, utilized these auctions. Since then, consumer product companies such as Emerson Electronics, Nestle, and Quaker have followed suit. Even high-tech firms such as Dell, Hewlett-Packard, Intel, and Sun Microsystems have increased their usage of auctions in sourcing activities. And the intention and potential for the use of these auctions to continue to grow in the future is clear. In their annual survey of purchasing managers, Purchasing magazine found that 25% of its respondents expected to use reverse auctions in their sourcing efforts. Currently, the annual throughput in these auctions is estimated to be $40 billion; however, the addressable spend of the Global 500 firms is potentially $6.3 trillion.",
"title": ""
},
{
"docid": "6d066cec0c45a5504559ed40fc084d0e",
"text": "The combination of visual and inertial sensors has proved to be very popular in robot navigation and, in particular, Micro Aerial Vehicle (MAV) navigation due the flexibility in weight, power consumption and low cost it offers. At the same time, coping with the big latency between inertial and visual measurements and processing images in real-time impose great research challenges. Most modern MAV navigation systems avoid to explicitly tackle this by employing a ground station for off-board processing. In this paper, we propose a navigation algorithm for MAVs equipped with a single camera and an Inertial Measurement Unit (IMU) which is able to run onboard and in real-time. The main focus here is on the proposed speed-estimation module which converts the camera into a metric body-speed sensor using IMU data within an EKF framework. We show how this module can be used for full self-calibration of the sensor suite in real-time. The module is then used both during initialization and as a fall-back solution at tracking failures of a keyframe-based VSLAM module. The latter is based on an existing high-performance algorithm, extended such that it achieves scalable 6DoF pose estimation at constant complexity. Fast onboard speed control is ensured by sole reliance on the optical flow of at least two features in two consecutive camera frames and the corresponding IMU readings. Our nonlinear observability analysis and our real experiments demonstrate that this approach can be used to control a MAV in speed, while we also show results of operation at 40Hz on an onboard Atom computer 1.6 GHz.",
"title": ""
},
{
"docid": "ea278850f00c703bdd73957c3f7a71ce",
"text": "In this paper, we consider the directional multigigabit (DMG) transmission problem in IEEE 802.11ad wireless local area networks (WLANs) and design a random-access-based medium access control (MAC) layer protocol incorporated with a directional antenna and cooperative communication techniques. A directional cooperative MAC protocol, namely, D-CoopMAC, is proposed to coordinate the uplink channel access among DMG stations (STAs) that operate in an IEEE 802.11ad WLAN. Using a 3-D Markov chain model with consideration of the directional hidden terminal problem, we develop a framework to analyze the performance of the D-CoopMAC protocol and derive a closed-form expression of saturated system throughput. Performance evaluations validate the accuracy of the theoretical analysis and show that the performance of D-CoopMAC varies with the number of DMG STAs or beam sectors. In addition, the D-CoopMAC protocol can significantly improve system performance, as compared with the traditional IEEE 802.11ad MAC protocol.",
"title": ""
},
{
"docid": "954660a163fc8453368a6863d1c3fd85",
"text": "The application potential of very high resolution (VHR) remote sensing imagery has been boosted by recent developments in the data acquisition and processing ability of aerial photogrammetry. However, shadows in images contribute to problems such as incomplete spectral information, lower intensity brightness, and fuzzy boundaries, which seriously affect the efficiency of the image interpretation. In this paper, to address these issues, a simple and automatic method of shadow detection is presented. The proposed method combines the advantages of the property-based and geometric-based methods to automatically detect the shadowed areas in VHR imagery. A geometric model of the scene and the solar position are used to delineate the shadowed and non-shadowed areas in the VHR image. A matting method is then applied to the image to refine the shadow mask. Different types of shadowed aerial orthoimages were used to verify the effectiveness of the proposed shadow detection method, and the results were compared with the results obtained by two state-of-the-art methods. The overall accuracy of the proposed method on the three tests was around 90%, confirming the effectiveness and robustness of the new method for detecting fine shadows, without any human input. The proposed method also performs better in detecting shadows in areas with water than the other two methods.",
"title": ""
}
] | scidocsrr |
0d3a52c823dbc59c12b769b69a22700b | Top-down control of visual attention | [
{
"docid": "49717f07b8b4a3da892c1bb899f7a464",
"text": "Single cells were recorded in the visual cortex of monkeys trained to attend to stimuli at one location in the visual field and ignore stimuli at another. When both locations were within the receptive field of a cell in prestriate area V4 or the inferior temporal cortex, the response to the unattended stimulus was dramatically reduced. Cells in the striate cortex were unaffected by attention. The filtering of irrelevant information from the receptive fields of extrastriate neurons may underlie the ability to identify and remember the properties of a particular object out of the many that may be represented on the retina.",
"title": ""
}
] | [
{
"docid": "3efaaabf9a93460bace2e70abc71801d",
"text": "BACKGROUND\nNumerous studies report an association between social support and protection from depression, but no systematic review or meta-analysis exists on this topic.\n\n\nAIMS\nTo review systematically the characteristics of social support (types and source) associated with protection from depression across life periods (childhood and adolescence; adulthood; older age) and by study design (cross-sectional v cohort studies).\n\n\nMETHOD\nA systematic literature search conducted in February 2015 yielded 100 eligible studies. Study quality was assessed using a critical appraisal checklist, followed by meta-analyses.\n\n\nRESULTS\nSources of support varied across life periods, with parental support being most important among children and adolescents, whereas adults and older adults relied more on spouses, followed by family and then friends. Significant heterogeneity in social support measurement was noted. Effects were weaker in both magnitude and significance in cohort studies.\n\n\nCONCLUSIONS\nKnowledge gaps remain due to social support measurement heterogeneity and to evidence of reverse causality bias.",
"title": ""
},
{
"docid": "f9de4041343fb6c570e5cbce4cb1ff66",
"text": "Do the languages that we speak affect how we experience the world? This question was taken up in a linguistic survey and two non-linguistic psychophysical experiments conducted in native speakers of English, Indonesian, Greek, and Spanish. All four of these languages use spatial metaphors to talk about time, but the particular metaphoric mappings between time and space vary across languages. A linguistic corpus study revealed that English and Indonesian tend to map duration onto linear distance (e.g., a long time), whereas Greek and Spanish preferentially map duration onto quantity (e.g., much time). Two psychophysical time estimation experiments were conducted to determine whether this cross-linguistic difference has implications for speakers’ temporal thinking. Performance on the psychophysical tasks reflected the relative frequencies of the ‘time as distance’ and ‘time as quantity’ metaphors in English, Indonesian, Greek, and Spanish. This was true despite the fact that the tasks used entirely nonlinguistic stimuli and responses. Results suggest that: (1.) The spatial metaphors in our native language may profoundly influence the way we mentally represent time. (2.) Language can shape even primitive, low-level mental processes such as estimating brief durations – an ability we share with babies and non-human animals.",
"title": ""
},
{
"docid": "ca117e9bfd90df7ac652628b342a4b62",
"text": "In this article, we introduce an explicit count-based strategy to build word space models with syntactic contexts (dependencies). A filtering method is defined to reduce explicit word-context vectors. This traditional strategy is compared with a neural embedding (predictive) model also based on syntactic dependencies. The comparison was performed using the same parsed corpus for both models. Besides, the dependency-based methods are also compared with bag-of-words strategies, both count-based and predictive ones. The results show that our traditional countbased model with syntactic dependencies outperforms other strategies, including dependency-based embeddings, but just for the tasks focused on discovering similarity between words with the same function (i.e. near-synonyms).",
"title": ""
},
{
"docid": "a7e0ff324e4bf4884f0a6e35adf588a3",
"text": "Named Entity Recognition (NER) is a subtask of information extraction and aims to identify atomic entities in text that fall into predefined categories such as person, location, organization, etc. Recent efforts in NER try to extract entities and link them to linked data entities. Linked data is a term used for data resources that are created using semantic web standards such as DBpedia. There are a number of online tools that try to identify named entities in text and link them to linked data resources. Although one can use these tools via their APIs and web interfaces, they use different data resources and different techniques to identify named entities and not all of them reveal this information. One of the major tasks in NER is disambiguation that is identifying the right entity among a number of entities with the same names; for example \"apple\" standing for both \"Apple, Inc.\" the company and the fruit. We developed a similar tool called NERSO, short for Named Entity Recognition Using Semantic Open Data, to automatically extract named entities, disambiguating and linking them to DBpedia entities. Our disambiguation method is based on constructing a graph of linked data entities and scoring them using a graph-based centrality algorithm. We evaluate our system by comparing its performance with two publicly available NER tools. The results show that NERSO performs better.",
"title": ""
},
{
"docid": "e7ecd827a48414f1f533fb30de203a6a",
"text": "Followership has been an understudied topic in the academic literature and an underappreciated topic among practitioners. Although it has always been important, the study of followership has become even more crucial with the advent of the information age and dramatic changes in the workplace. This paper provides a fresh look at followership by providing a synthesis of the literature and presents a new model for matching followership styles to leadership styles. The model’s practical value lies in its usefulness for describing how leaders can best work with followers, and how followers can best work with leaders.",
"title": ""
},
{
"docid": "8de0a71dd4d0e8b6874e80ffd5e45dd4",
"text": "Predictive state representations (PSRs) have recently been proposed as an alternative to partially observable Markov decision processes (POMDPs) for representing the state of a dynamical system (Littman et al., 2001). We present a learning algorithm that learns a PSR from observational data. Our algorithm produces a variant of PSRs called transformed predictive state representations (TPSRs). We provide an efficient principal-components-based algorithm for learning a TPSR, and show that TPSRs can perform well in comparison to Hidden Markov Models learned with Baum-Welch in a real world robot tracking task for low dimensional representations and long prediction horizons.",
"title": ""
},
{
"docid": "61768befa972c8e9f46524a59c44fabb",
"text": "This paper presents a newly defined set-based concurrent engineering process, which the authors believe addresses some of the key challenges faced by engineering enterprises in the 21st century. The main principles of Set-Based Concurrent Engineering (SBCE) have been identified via an extensive literature review. Based on these principles the SBCE baseline model was developed. The baseline model defines the stages and activities which represent the product development process to be employed in the LeanPPD (lean product and process development) project. The LeanPPD project is addressing the needs of European manufacturing companies for a new model that extends beyond lean manufacturing, and incorporates lean thinking in the product design development process.",
"title": ""
},
{
"docid": "a0e0d3224cd73539e01f260d564109a7",
"text": "We are living in a world where there is an increasing need for evidence in organizations. Good digital evidence is becoming a business enabler. Very few organizations have the structures (management and infrastructure) in place to enable them to conduct cost-effective, low-impact and efficient digital investigations [1]. Digital Forensics (DF) is a vehicle that organizations use to provide good and trustworthy evidence and processes. The current DF models concentrate on reactive investigations, with limited reference to DF readiness and live investigations. However, organizations use DF for other purposes, for example compliance testing. The paper proposes that DF consists of three components: Pro-active (ProDF), Active (ActDF) and Re-active (ReDF). ProDF concentrates on DF readiness and the proactive responsible use of DF to demonstrate good governance and enhance governance structures. ActDF considers the gathering of live evidence during an ongoing attack with a limited live investigation element whilst ReDF deals with the traditional DF investigation. The paper discusses each component and the relationship between the components.",
"title": ""
},
{
"docid": "900448785a5aa402165406daff206c93",
"text": "Electrospun membranes are gaining interest for use in membrane distillation (MD) due to their high porosity and interconnected pore structure; however, they are still susceptible to wetting during MD operation because of their relatively low liquid entry pressure (LEP). In this study, a post-treatment was applied to improve the LEP, as well as the permeation and salt rejection efficiency. The post-treatment included two continuous procedures: heat-pressing and annealing. Here, annealing was applied to membranes that had been heat-pressed. It was found that annealing improved the MD performance as the average flux reached 35 L/m2·h or LMH (>10% improvement over the ones without annealing) while still maintaining 99.99% salt rejection. Further tests on LEP, contact angle, and pore size distribution explain the improvement due to annealing well. Fourier transform infrared spectroscopy and X-ray diffraction analyses of the membranes showed that there was an increase in the crystallinity of the polyvinylidene fluoride-co-hexafluoropropylene (PVDF-HFP) membrane; also, peaks indicating the α phase of polyvinylidene fluoride (PVDF) became noticeable after annealing, indicating some β and amorphous states of polymer were converted into the α phase. The changes were favorable for membrane distillation as the non-polar α phase of PVDF reduces the dipolar attraction force between the membrane and water molecules, and the increase in crystallinity would result in higher thermal stability. The present results indicate the positive effect of the heat-press followed by an annealing post-treatment on the membrane characteristics and MD performance.",
"title": ""
},
{
"docid": "d66799a5d65a6f23527a33b124812ea6",
"text": "Time series is an important class of temporal data objects and it can be easily obtained from scientific and financial applications, and anomaly detection for time series has recently become a hot research topic. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection. In this paper, we have discussed the definition of anomaly and grouped existing techniques into different categories based on the underlying approach adopted by each technique. For each category, we identify the advantages and disadvantages of the techniques in that category. Then, we provide a brief discussion of representative recent methods. Furthermore, we also point out some key issues about multivariate time series anomalies. Finally, some suggestions about anomaly detection are discussed and future research trends are also summarized, which is hopefully beneficial to the researchers of time series and other related domains.",
"title": ""
},
{
"docid": "dd9b6b67f19622bfffbad427b93a1829",
"text": "Low-resolution face recognition (LRFR) has received increasing attention over the past few years. Its applications lie widely in the real-world environment when high-resolution or high-quality images are hard to capture. One of the biggest demands for LRFR technologies is video surveillance. As the number of surveillance cameras in the city increases, the videos they capture will need to be processed automatically. However, those videos or images are usually captured with large standoffs, arbitrary illumination conditions, and diverse angles of view. Faces in these images are generally small in size. Several studies that addressed this problem employed techniques like super resolution, deblurring, or learning a relationship between different resolution domains. In this paper, we provide a comprehensive review of approaches to low-resolution face recognition in the past five years. First, a general problem definition is given. Later, a systematic analysis of the works on this topic is presented by category. In addition to describing the methods, we also focus on datasets and experiment settings. We further address the related works on unconstrained low-resolution face recognition and compare them with results that use synthetic low-resolution data. Finally, we summarize the general limitations and speculate on priorities for future effort.",
"title": ""
},
{
"docid": "b9652cf6647d9c7c1f91a345021731db",
"text": "Context: The processes of estimating, planning and managing are crucial for software development projects, since the results must be related to several business strategies. The broad expansion of the Internet and the global and interconnected economy mean that Web development projects are often characterized by expressions like delivering as soon as possible, reducing time to market and adapting to undefined requirements. In this kind of environment, traditional methodologies based on predictive techniques sometimes do not offer very satisfactory results. The rise of Agile methodologies and practices has provided some useful tools that, combined with Web Engineering techniques, can help to establish a framework to estimate, manage and plan Web development projects. Objective: This paper presents a proposal for estimating, planning and managing Web projects, by combining some existing Agile techniques with Web Engineering principles, presenting them as a unified framework which uses the business value to guide the delivery of features. Method: The proposal is analyzed by means of a case study, including a real-life project, in order to obtain relevant conclusions. Results: The results achieved after using the framework in a development project are presented, including interesting results on project planning and estimation, as well as on team productivity throughout the project. Conclusion: It is concluded that the framework can be useful in order to better manage Web-based projects, through a continuous value-based estimation and management process.",
"title": ""
},
{
"docid": "56934c400280e56dffbb27e6d06c21b9",
"text": "Training large-scale question answering systems is complicated because training sources usually cover a small portion of the range of possible questions. This paper studies the impact of multitask and transfer learning for simple question answering: a setting for which the reasoning required to answer is quite easy, as long as one can retrieve the correct evidence given a question, which can be difficult in large-scale conditions. To this end, we introduce a new dataset of 100k questions that we use in conjunction with existing benchmarks. We conduct our study within the framework of Memory Networks (Weston et al., 2015) because this perspective allows us to eventually scale up to more complex reasoning, and show that Memory Networks can be successfully trained to achieve excellent performance.",
"title": ""
},
{
"docid": "08d5c83c7effa92659ea705ad51317e2",
"text": "This article examines cognitive, affective, and behavioral measures of motivation and reviews their use throughout the discipline of experimental social psychology. We distinguish between two dimensions of motivation (outcome-focused motivation and process-focused motivation). We discuss circumstances under which measures may help distinguish between different dimensions of motivation, as well as circumstances under which measures may capture different dimensions of motivation in similar ways. Furthermore, we examine situations in which various measures may capture fluctuations in nonmotivational factors, such as learning or physiological depletion. This analysis seeks to advance research in experimental social psychology by highlighting the need for caution when selecting measures of motivation and when interpreting fluctuations captured by these measures. Motivation – the psychological force that enables action – has long been the object of scientific inquiry (Carver & Scheier, 1998; Festinger, 1957; Fishbein & Ajzen, 1974; Hull, 1932; Kruglanski, 1996; Lewin, 1935; Miller, Galanter, & Pribram, 1960; Mischel, Shoda, & Rodriguez, 1989; Zeigarnik, 1927). Because motivation is a psychological construct that cannot be observed or recorded directly, studying it raises an important question: how to measure motivation? Researchers measure motivation in terms of observable cognitive (e.g., recall, perception), affective (e.g., subjective experience), behavioral (e.g., performance), and physiological (e.g., brain activation) responses and using self-reports. Furthermore, motivation is measured in relative terms: compared to previous or subsequent levels of motivation or to motivation in a different goal state (e.g., salient versus non-salient goal). 
For example, following exposure to a health-goal prime (e.g., gym membership card), an individual might be more motivated to exercise now than she was 20 minutes ago (before exposure to the prime), or than another person who was not exposed to the same prime. An important aspect of determining how to measure motivation is understanding what type of motivation one is attempting to capture. Thus, in exploring the measures of motivation, the present article takes into account different dimensions of motivation. In particular, we highlight the distinction between the outcome-focused motivation to complete a goal (Brehm & Self, 1989; Locke & Latham, 1990; Powers, 1973) and the process-focused motivation to attend to elements related to the process of goal pursuit – with less emphasis on the outcome. Process-related elements may include using “proper” means during goal pursuit (means-focused motivation; Higgins, Idson, Freitas, Spiegel, & Molden, 2003; Touré-Tillery & Fishbach, 2012) and enjoying the experience of goal pursuit (intrinsic motivation; Deci & Ryan, 1985; Fishbach & Choi, 2012; Sansone & Harackiewicz, 1996; Shah & Kruglanski, 2000). In some cases, particular measures of motivation may help distinguish between these different dimensions of motivation, whereas other measures may not. For example, the measured speed at which a person works on a task can have several interpretations. © 2014 John Wiley & Sons Ltd How to Measure Motivation 329 Working slowly could mean (a) that the individual’s motivation to complete the task is low (outcome-focused motivation); or (b) that her motivation to engage in the task is high such that she is “savoring” the task (intrinsic motivation); or (c) that her motivation to “do it right” and use proper means is high such that she is applying herself (means-focused motivation); or even (d) that she is tired (diminished physiological resources). 
In this case, additional measures (e.g., accuracy in performance) and manipulations (e.g., task difficulty) may help tease apart these various potential interpretations. Thus, experimental researchers must exercise caution when selecting measures of motivation and when interpreting the fluctuations captured by these measures. This review provides a guide for how to measure fluctuations in motivation in experimental settings. One approach is to ask people to rate their motivation (i.e., “how motivated are you?”). However, such an approach is limited to people’s conscious understanding of their own psychological states and can further be biased by social desirability concerns; hence, research in experimental social psychology developed a variety of cognitive and behavioral paradigms to assess motivation without relying on self-reports. We focus on these objective measures of situational fluctuations in motivation. We note that other fields of psychological research commonly use physiological measures (e.g., brain activation, skin conductance), self-report measures (i.e., motivation scales), or measure motivation as a stable trait. These physiological, self-report, and trait measures of motivation are beyond the scope of our review. In the sections that follow, we start with a discussion of measures researchers commonly use to capture motivation. We review cognitive measures such as memory accessibility, evaluations, and perceptions of goal-relevant objects, as well as affective measures such as subjective experience. Next, we examine the use of behavioral measures such as speed, performance, and choice to capture fluctuations in motivational strength. In the third section, we discuss the outcome- and process-focused dimensions of motivation and examine specific measures of process-focused motivation, including measures of intrinsic motivation and means-focused motivation. 
We then discuss how different measures may help distinguish between the outcome- and process-focused dimensions. In the final section, we explore circumstances under which measures may capture fluctuations in learning and physiological resources, rather than changes in motivation. We conclude with some implications of this analysis for the measurement and study of motivation. Cognitive and Affective Measures of Motivation Experimental social psychologists conceptualize a goal as the cognitive representation of a desired end state (Fishbach & Ferguson, 2007; Kruglanski, 1996). According to this view, goals are organized in associative memory networks connecting each goal to corresponding constructs. Goal-relevant constructs could be activities or objects that contribute to goal attainment (i.e., means; Kruglanski et al., 2002), as well as activities or objects that hinder goal attainment (i.e., temptations; Fishbach, Friedman, & Kruglanski, 2003). For example, the goal to eat healthily may be associated with constructs such as apple, doctor (facilitating means), or French fries (hindering temptation). Cognitive and affective measures of motivation include the activation, evaluation, and perception of these goal-related constructs and the subjective experience they evoke. Goal activation: Memory, accessibility, and inhibition of goal-related constructs Constructs related to a goal can activate or prime the pursuit of that goal. For example, the presence of one’s study partner or the word “exam” in a game of scrabble can activate a student’s academic goal and hence increase her motivation to study. Once a goal is active, the motivational system prepares the individual for action by activating goal-relevant information (Bargh & Barndollar, 1996; Gollwitzer, 1996; Kruglanski, 1996). 
Thus, motivation manifests itself in terms of how easily goal-related constructs are brought to mind (i.e., accessibility; Aarts, Dijksterhuis, & De Vries, 2001; Higgins & King, 1981; Wyer & Srull, 1986). The activation and subsequent pursuit of a goal can be conscious, such that one is aware of the cues that led to goal-related judgments and behaviors. This activation can also be non-conscious, such that one is unaware of the goal prime or that one is even exhibiting goal-related judgments and behaviors. Whether goals are conscious or non-conscious, a fundamental characteristic of goal-driven processes is the persistence of the accessibility of goal-related constructs for as long as the goal is active or until an individual disengages from the goal (Bargh, Gollwitzer, Lee-Chai, Barndollar, & Trotschel, 2001; Goschke & Kuhl, 1993). Upon goal completion, motivation diminishes and accessibility is inhibited (Liberman & Förster, 2000; Marsh, Hicks, & Bink, 1998). This active reduction in accessibility allows individuals to direct their cognitive resources to other tasks at hand without being distracted by thoughts of a completed goal. Thus, motivation can be measured by the degree to which goal-related concepts are accessible in memory. Specifically, the greater the motivation to pursue/achieve a goal, the more likely individuals are to remember, notice, or recognize concepts, objects, or persons related to that goal. For example, in a classic study, Zeigarnik (1927) instructed participants to perform 20 short tasks, ten of which they did not get a chance to finish because the experimenter interrupted them. At the end of the study, Zeigarnik inferred the strength of motivation by asking participants to recall as many of the tasks as possible. 
Consistent with the notion that unfulfilled goals are associated with heightened motivational states, whereas fulfilled goals inhibit motivation, the results show that participants recalled more uncompleted tasks (i.e., unfulfilled goals) than completed tasks (i.e., fulfilled goals; the Zeigarnik effect). More recently, Förster, Liberman, and Higgins (2005) replicated these findings, inferring motivation from performance on a lexical decision task. Their study assessed the speed of recognizing – i.e., identifying as words versus non-words – words related to a focal goal prior to (versus after) completing that goal. A related measure of motivation is the inhibition of conflicting constructs. In ",
"title": ""
},
{
"docid": "2d2d4d439021ee8665ddc3d97d879214",
"text": "We present the use of an oblique angle physical vapor deposition (OAPVD) technique with substrate rotation to obtain conformal thin films with enhanced step coverage on patterned surfaces. We report the results of ruthenium (Ru) films sputter deposited on trench structures with aspect ratio ~2 and show that with OAPVD at an incidence angle of less than 30° with respect to the substrate surface normal one can create a more conformal coating without overhangs and voids compared to that obtained by normal incidence deposition. A simple geometrical shadowing effect is presented to explain the results. The technique has the potential of extending the present PVD technique to future chip interconnect fabrication. © 2005 American Institute of Physics. [DOI: 10.1063/1.1937476]",
"title": ""
},
{
"docid": "30da5996ad883e41df979fe3640e35ed",
"text": "As an initial assessment, over 480,000 labeled virtual images of normal highway driving were readily generated in Grand Theft Auto V's virtual environment. Using these images, a CNN was trained to detect following distance to cars/objects ahead, lane markings, and driving angle (angular heading relative to lane centerline): all variables necessary for basic autonomous driving. Encouraging results were obtained when tested on over 50,000 labeled virtual images from substantially different GTA-V driving environments. This initial assessment begins to define both the range and scope of the labeled images needed for training as well as the range and scope of labeled images needed for testing the definition of boundaries and limitations of trained networks. It is the efficacy and flexibility of a \"GTA-V\"-like virtual environment that is expected to provide an efficient well-defined foundation for the training and testing of Convolutional Neural Networks for safe driving. Additionally, described is the Princeton Virtual Environment (PVE) for the training, testing and enhancement of safe driving AI, which is being developed using the video-game engine Unity. PVE is being developed to recreate rare but critical corner cases that can be used in re-training and enhancing machine learning models and understanding the limitations of current self driving models. The Florida Tesla crash is being used as an initial reference.",
"title": ""
},
{
"docid": "5d879bdbf7667fa8ad19c3bb86219880",
"text": "The cellular concept applied in mobile communication systems enables significant increase of overall system capacity, but requires careful radio network planning and dimensioning. Wireless and mobile network operators typically rely on various commercial radio network planning and dimensioning tools, which incorporate different radio signal propagation models. In this paper we present the use of open-source Geographical Resources Analysis Support System (GRASS) for the calculation of radio signal coverage. We developed GRASS modules for radio coverage prediction for a number of different radio channel models, with antenna radiation patterns given in the standard MSI format. The results are stored in a data base (e.g. MySQL, PostgreSQL) for further processing and in a simplified form as a bit-map file for displaying in GRASS. The accuracy of prediction was confirmed by comparison with results obtained by a dedicated professional prediction tool as well as with measurement results. Key-Words: network planning tool, open-source, GRASS GIS, path loss, raster, clutter, radio signal coverage",
"title": ""
},
{
"docid": "d40a55317d8cdebfcd567ea11ad0960f",
"text": "This study examined the effects of self-presentation goals on the amount and type of verbal deception used by participants in same-gender and mixed-gender dyads. Participants were asked to engage in a conversation that was secretly videotaped. Self-presentational goal was manipulated, where one member of the dyad (the self-presenter) was told to either appear (a) likable, (b) competent, or (c) was told to simply get to know his or her partner (control condition). After the conversation, self-presenters were asked to review a video recording of the interaction and identify the instances in which they had deceived the other person. Overall, participants told more lies when they had a goal to appear likable or competent compared to participants in the control condition, and the content of the lies varied according to self-presentation goal. In addition, lies told by men and women differed in content, although not in quantity.",
"title": ""
},
{
"docid": "57a48dee2cc149b70a172ac5785afc6c",
"text": "We present the first pipeline for real-time volumetric surface reconstruction and dense 6DoF camera tracking running purely on standard, off-the-shelf mobile phones. Using only the embedded RGB camera, our system allows users to scan objects of varying shape, size, and appearance in seconds, with real-time feedback during the capture process. Unlike existing state of the art methods, which produce only point-based 3D models on the phone, or require cloud-based processing, our hybrid GPU/CPU pipeline is unique in that it creates a connected 3D surface model directly on the device at 25Hz. In each frame, we perform dense 6DoF tracking, which continuously registers the RGB input to the incrementally built 3D model, minimizing a noise aware photoconsistency error metric. This is followed by efficient key-frame selection, and dense per-frame stereo matching. These depth maps are fused volumetrically using a method akin to KinectFusion, producing compelling surface models. For each frame, the implicit surface is extracted for live user feedback and pose estimation. We demonstrate scans of a variety of objects, and compare to a Kinect-based baseline, showing on average ~ 1.5cm error. We qualitatively compare to a state of the art point-based mobile phone method, demonstrating an order of magnitude faster scanning times, and fully connected surface models.",
"title": ""
}
] | scidocsrr |
b3657ac03c5a8b7ff7c08e358d39c2c4 | High-order Graph-based Neural Dependency Parsing | [
{
"docid": "497088def9f5f03dcb32e33d1b6fcb64",
"text": "In recent years, variants of a neural network architecture for statistical language modeling have been proposed and successfully applied, e.g. in the language modeling component of speech recognizers. The main advantage of these architectures is that they learn an embedding for words (or other symbols) in a continuous space that helps to smooth the language model and provide good generalization even when the number of training examples is insufficient. However, these models are extremely slow in comparison to the more commonly used n-gram models, both for training and recognition. As an alternative to an importance sampling method proposed to speed-up training, we introduce a hierarchical decomposition of the conditional probabilities that yields a speed-up of about 200 both during training and recognition. The hierarchical decomposition is a binary hierarchical clustering constrained by the prior knowledge extracted from the WordNet semantic hierarchy.",
"title": ""
},
{
"docid": "c7fa2e7615a2767ca39d951f1ecf835e",
"text": "We explore the application of neural language models to machine translation. We develop a new model that combines the neural probabilistic language model of Bengio et al., rectified linear units, and noise-contrastive estimation, and we incorporate it into a machine translation system both by reranking k-best lists and by direct integration into the decoder. Our large-scale, large-vocabulary experiments across four language pairs show that our neural language model improves translation quality by up to 1.1 Bleu.",
"title": ""
},
{
"docid": "062c970a14ac0715ccf96cee464a4fec",
"text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.",
"title": ""
}
] | [
{
"docid": "5ebb65f075fd00130e6684b86b9ab235",
"text": "While machine learning systems have recently achieved impressive, (super)human-level performance in several tasks, they have often relied on unnatural amounts of supervision – e.g. large numbers of labeled images or continuous scores in video games. In contrast, human learning is largely unsupervised, driven by observation and interaction with the world. Emulating this type of learning in machines is an open challenge, and one that is critical for general artificial intelligence. Here, we explore prediction of future frames in video sequences as an unsupervised learning rule. A key insight here is that in order to be able to predict how the visual world will change over time, an agent must have at least some implicit model of object structure and the possible transformations objects can undergo. To this end, we have designed several models capable of accurate prediction in complex sequences. Our first model consists of a recurrent extension to the standard autoencoder framework. Trained end-to-end to predict the movement of synthetic stimuli, we find that the model learns a representation of the underlying latent parameters of the 3D objects themselves. Importantly, we find that this representation is naturally tolerant to object transformations, and generalizes well to new tasks, such as classification of static images. Similar models trained solely with a reconstruction loss fail to generalize as effectively. In addition, we explore the use of an adversarial loss, as in a Generative Adversarial Network, illustrating its complementary effects to traditional pixel losses for the task of next-frame prediction.",
"title": ""
},
{
"docid": "5d775c669636860d7cbf987f1e998440",
"text": "Recent changes in the Music Encoding Initiative (MEI) have transformed it into an extensible platform from which new notation encoding schemes can be produced. This paper introduces MEI as a document-encoding framework, and illustrates how it can be extended to encode new types of notation, eliminating the need for creating specialized and potentially incompatible notation encoding standards.",
"title": ""
},
{
"docid": "d652a2ffb4708b76d8fa70d7a452ae9f",
"text": "If we are to achieve natural human–robot interaction, we may need to complement current vision and speech interfaces. Touch may provide us with an extra tool in this quest. In this paper we demonstrate the role of touch in interaction between a robot and a human. We show how infrared sensors located on robots can be easily used to detect and distinguish human interaction, in this case interaction with individual children. This application of infrared sensors potentially has many uses; for example, in entertainment or service robotics. This system could also benefit therapy or rehabilitation, where the observation and recording of movement and interaction is important. In the long term, this technique might enable robots to adapt to individuals or individual types of user. © 2006 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "3e128a5632f5ada623846f18e79444af",
"text": "Given the resources needed to launch a retail store on the Internet or change an existing online storefront design, it is important to allocate product development resources to interface features that actually improve store traffic and sales. We identified features that impact store traffic and sales using regression models of 1996 store traffic and dollar sales as dependent variables and interface design features such as number of links into the store, hours of promotional ads, number of products, and store navigation features as the independent variables. Product list navigation features that reduce the time to purchase products online account for 61% of the variance in monthly sales. Other factors explaining the variance in monthly sales include: number of hyperlinks into the store (10%), hours of promotion (4%) and customer service feedback (1%). These findings demonstrate that the user interface is an essential link between the customer and the retail store in Web-based shopping environments.",
"title": ""
},
{
"docid": "b2a8b979f4bd96a28746b090bca2a567",
"text": "Gradient-based policy search is an alternative to value-function-based methods for reinforcement learning in non-Markovian domains. One apparent drawback of policy search is its requirement that all actions be \\on-policy\"; that is, that there be no explicit exploration. In this paper, we provide a method for using importance sampling to allow any well-behaved directed exploration policy during learning. We show both theoretically and experimentally that using this method can achieve dramatic performance improvements. During this work, Nicolas Meuleau was at the MIT Arti cial Intelligence laboratory, supported in part by a research grant from NTT; Leonid Peshkin by grants from NSF and NTT; and Kee-Eung Kim in part by AFOSR/RLF 30602-95-1-0020.",
"title": ""
},
{
"docid": "1c6078d68891b6600727a82841812666",
"text": "Network traffic prediction aims at predicting the subsequent network traffic by using the previous network traffic data. This can serve as a proactive approach for network management and planning tasks. The family of recurrent neural network (RNN) approaches is known for time series data modeling which aims to predict the future time series based on the past information with long time lags of unrevealed size. RNN contains different network architectures like simple RNN, long short term memory (LSTM), gated recurrent unit (GRU), identity recurrent unit (IRNN) which is capable to learn the temporal patterns and long range dependencies in large sequences of arbitrary length. To leverage the efficacy of RNN approaches towards traffic matrix estimation in large networks, we use various RNN networks. The performance of various RNN networks is evaluated on the real data from GÉANT backbone networks. To identify the optimal network parameters and network structure of RNN, various experiments are done. All experiments are run up to 200 epochs with learning rate in the range [0.01-0.5]. LSTM has performed well in comparison to the other RNN and classical methods. Moreover, the performance of various RNN methods is comparable to LSTM.",
"title": ""
},
{
"docid": "6b4efbb3572eeb09536e2ec82825f2fb",
"text": "Well-designed games are good motivators by nature, as they imbue players with clear goals and a sense of reward and fulfillment, thus encouraging them to persist and endure in their quests. Recently, this motivational power has started to be applied to non- game contexts, a practice known as Gamification. This adds gaming elements to non-game processes, motivating users to adopt new behaviors, such as improving their physical condition, working more, or learning something new. This paper describes an experiment in which game-like elements were used to improve the delivery of a Master's level College course, including scoring, levels, leaderboards, challenges and badges. To assess how gamification impacted the learning experience, we compare the gamified course to its non-gamified version from the previous year, using different performance measures. We also assessed student satisfaction as compared to other regular courses in the same academic context. Results were very encouraging, showing significant increases ranging from lecture attendance to online participation, proactive behaviors and perusing the course reference materials. Moreover, students considered the gamified instance to be more motivating, interesting and easier to learn as compared to other courses. We finalize by discussing the implications of these results on the design of future gamified learning experiences.",
"title": ""
},
{
"docid": "d5665efd0e4a91e9be4c84fecd5fd4ad",
"text": "Hardware accelerators are being increasingly deployed to boost the performance and energy efficiency of deep neural network (DNN) inference. In this paper we propose Thundervolt, a new framework that enables aggressive voltage underscaling of high-performance DNN accelerators without compromising classification accuracy even in the presence of high timing error rates. Using post-synthesis timing simulations of a DNN acceleratormodeled on theGoogle TPU,we show that Thundervolt enables between 34%-57% energy savings on stateof-the-art speech and image recognition benchmarks with less than 1% loss in classification accuracy and no performance loss. Further, we show that Thundervolt is synergistic with and can further increase the energy efficiency of commonly used run-timeDNNpruning techniques like Zero-Skip.",
"title": ""
},
{
"docid": "4fbde6cd9d511072680a4f20f6674acf",
"text": "A 50-year-old man developed numerous pustules and bullae on the trunk and limbs 15 days after anal fissure surgery. The clinicopathological diagnosis was iododerma induced by topical povidone-iodine sitz baths postoperatively. Complete resolution occurred within 3 weeks using systemic corticosteroids and forced diuresis.",
"title": ""
},
{
"docid": "fddf2c0ce952f3889207c05026c086ed",
"text": "How we design and evaluate for emotions depends crucially on what we take emotions to be. In affective computing, affect is often taken to be another kind of information discrete units or states internal to an individual that can be transmitted in a loss-free manner from people to computational systems and back. While affective computing explicitly challenges the primacy of rationality in cognitivist accounts of human activity, at a deeper level it often relies on and reproduces the same information-processing model of cognition. Drawing on cultural, social, and interactional critiques of cognition which have arisen in HCI, as well as anthropological and historical accounts of emotion, we explore an alternative perspective on emotion as interaction: dynamic, culturally mediated, and socially constructed and experienced. We demonstrate how this model leads to new goals for affective systems instead of sensing and transmitting emotion, systems should support human users in understanding, interpreting, and experiencing emotion in its full complexity and ambiguity. In developing from emotion as objective, externally measurable unit to emotion as experience, evaluation, too, alters focus from externally tracking the circulation of emotional information to co-interpreting emotions as they are made in interaction.",
"title": ""
},
{
"docid": "e97247d7b42875782164719ddf202a3c",
"text": "This work, set in the context of the apparel industry, proposes an action-oriented disclosure tool to help solve the sustainability challenges of complex fast-fashion supply chains (SCs). In a search for effective disclosure, it focusses on actions towards sustainability instead of the measurements and indicators of its impacts. We applied qualitative and quantitative content analysis to the sustainability reporting of the world’s two largest fast-fashion companies in three phases. First, we searched for the challenges that the organisations report they are currently facing. Second, we introduced the United Nations’ Sustainable Development Goals (SDGs) framework to overcome the voluntary reporting drawback of ‘choosing what to disclose’, and revealed orphan issues. This broadened the scope from internal corporate challenges to issues impacting the ecosystems in which companies operate. Third, we analysed the reported sustainability actions and decomposed them into topics, instruments, and actors. The results showed that fast-fashion reporting has a broadly developed analysis base, but lacks action orientation. This has led us to propose the ‘Fast-Fashion Sustainability Scorecard’ as a universal disclosure framework that shifts the focus from (i) reporting towards action; (ii) financial performance towards sustainable value creation; and (iii) corporate boundaries towards value creation for the broader SC ecosystem.",
"title": ""
},
{
"docid": "74bcc177a94ff57a847fb1677da5f032",
"text": "The resurgence of effort within computational semantics has led to increased interest in various types of relation extraction and semantic parsing. While various manually annotated resources exist for enabling this work, these materials have been developed with different standards and goals in mind. In an effort to develop better general understanding across these resources, we provide a summary overview of the standards underlying ACE, ERE, TAC-KBP Slot-filling, and FrameNet.",
"title": ""
},
{
"docid": "8e4bd52e3b10ea019241679541c25c9d",
"text": "Accurate project effort prediction is an important goal for the software engineering community. To date most work has focused upon building algorithmic models of effort, for example COCOMO. These can be calibrated to local environments. We describe an alternative approach to estimation based upon the use of analogies. The underlying principle is to characterize projects in terms of features (for example, the number of interfaces, the development method or the size of the functional requirements document). Completed projects are stored and then the problem becomes one of finding the most similar projects to the one for which a prediction is required. Similarity is defined as Euclidean distance in n-dimensional space where n is the number of project features. Each dimension is standardized so all dimensions have equal weight. The known effort values of the nearest neighbors to the new project are then used as the basis for the prediction. The process is automated using a PC-based tool known as ANGEL. The method is validated on nine different industrial datasets (a total of 275 projects) and in all cases analogy outperforms algorithmic models based upon stepwise regression. From this work we argue that estimation by analogy is a viable technique that, at the very least, can be used by project managers to complement current estimation techniques.",
"title": ""
},
{
"docid": "b5c2e36e805f3ca96cde418137ed0239",
"text": "PURPOSE\nTo report a novel method for measuring the degree of inferior oblique muscle overaction and to investigate the correlation with other factors.\n\n\nDESIGN\nCross-sectional diagnostic study.\n\n\nMETHODS\nOne hundred and forty-two eyes (120 patients) were enrolled in this study. Subjects underwent a full orthoptic examination and photographs were obtained in the cardinal positions of gaze. The images were processed using Photoshop and analyzed using the ImageJ program to measure the degree of inferior oblique muscle overaction. Reproducibility or interobserver variability was assessed by Bland-Altman plots and by calculation of the intraclass correlation coefficient (ICC). The correlation between the degree of inferior oblique muscle overaction and the associated factors was estimated with linear regression analysis.\n\n\nRESULTS\nThe mean angle of inferior oblique muscle overaction was 17.8 ± 10.1 degrees (range, 1.8-54.1 degrees). The 95% limit of agreement of interobserver variability for the degree of inferior oblique muscle overaction was ±1.76 degrees, and ICC was 0.98. The angle of inferior oblique muscle overaction showed significant correlation with the clinical grading scale (R = 0.549, P < .001) and with hypertropia in the adducted position (R = 0.300, P = .001). The mean angles of inferior oblique muscle overaction classified into grades 1, 2, 3, and 4 according to the clinical grading scale were 10.5 ± 9.1 degrees, 16.8 ± 7.8 degrees, 24.3 ± 8.8 degrees, and 40.0 ± 12.2 degrees, respectively (P < .001).\n\n\nCONCLUSIONS\nWe describe a new method for measuring the degree of inferior oblique muscle overaction using photographs of the cardinal positions. It has the potential to be a diagnostic tool that measures inferior oblique muscle overaction with minimal observer dependency.",
"title": ""
},
{
"docid": "52ab79410044bd29c11cdd8352d10a6e",
"text": "Fashion markets are synonymous with rapid change and, as a result, commercial success or failure in those markets is largely determined by the organisation’s flexibility and responsiveness. Responsiveness is characterised by short time-to-market, the ability to scale up (or down) quickly and the rapid incorporation of consumer preferences into the design process. In this paper it is argued that conventional organisational structures and forecast-driven supply chains are not adequate to meet the challenges of volatile and turbulent demand which typify fashion markets today. Instead, the requirement is for the creation of an agile organisation embedded within an agile supply chain INTRODUCTION Fashion markets have long attracted the interest of researchers. More often the focus of their work was the psychology and sociology of fashion and with the process by which fashions were adopted across populations (see for example Wills and Midgley, 1973). In parallel with this, a body of work has developed seeking to identify cycles in fashions (e.g. Carman, 1966). Much of this earlier work was intended to create insights and even tools to help improve the demand forecasting of fashion products. However, the reality that is now gradually being accepted both by those who work in the industry and those who study it, is that the demand for fashion products cannot be forecast. Instead, we need to recognise that fashion markets are complex open systems that frequently demonstrate high levels of ‘chaos’. In such conditions managerial effort may be better expended on devising strategies",
"title": ""
},
{
"docid": "9b71d11e2096008bc3603c62d89e452e",
"text": "Abstract In the present study biodiesel was synthesized from Waste Cook Oil (WCO) by three-step method and regressive analyzes of the process was done. The raw oil, containing 1.9wt% Free Fatty Acid (FFA) and viscosity was 47.6mm/s. WCO was collected from local restaurant of Sylhet city in Bangladesh. Transesterification method gives lower yield than three-step method. In the three-step method, the first step is saponification of the oil followed by acidification to produce FFA and finally esterification of FFA to produce biodiesel. In the saponification reaction, various reaction parameters such as oil to sodium hydroxide molar ratio and reaction time were optimized and the oil to NaOH molar ratio was 1:2, In the esterification reaction, the reaction parameters such as methanol to FFA molar ratio, catalyst concentration and reaction temperature were optimized. Silica gel was used during esterification reaction to adsorb water produced in the reaction. Hence the reaction rate was increased and finally the FFA was reduced to 0.52wt%. A factorial design was studied for esterification reaction based on yield of biodiesel. Finally various properties of biodiesel such as FFA, viscosity, specific gravity, cetane index, pour point, flash point etc. were measured and compared with biodiesel and petro-diesel standard. The reaction yield was 79%.",
"title": ""
},
{
"docid": "645a1ad9ab07eee096180e08e6f1fdff",
"text": "In the light of evidence from about 200 studies showing gender symmetry in perpetration of partner assault, research can now focus on why gender symmetry is predominant and on the implications of symmetry for primary prevention and treatment of partner violence. Progress in such research is handicapped by a number of problems: (1) Insufficient empirical research and a surplus of discussion and theory, (2) Blinders imposed by commitment to a single causal factor theory-patriarchy and male dominance-in the face of overwhelming evidence that this is only one of a multitude of causes, (3) Research purporting to investigate gender differences but which obtains data on only one gender, (4) Denial of research grants to projects that do not assume most partner violence is by male perpetrators, (5) Failure to investigate primary prevention and treatment programs for female offenders, and (6) Suppression of evidence on female perpetration by both researchers and agencies.",
"title": ""
},
{
"docid": "780095276d7ac3cae1b95b7a1ceee8b3",
"text": "This work presents a systematic study toward the design and first demonstration of high-performance n-type monolayer tungsten diselenide (WSe2) field effect transistors (FET) by selecting the contact metal based on understanding the physics of contact between metal and monolayer WSe2. Device measurements supported by ab initio density functional theory (DFT) calculations indicate that the d-orbitals of the contact metal play a key role in forming low resistance ohmic contacts with monolayer WSe2. On the basis of this understanding, indium (In) leads to small ohmic contact resistance with WSe2 and consequently, back-gated In-WSe2 FETs attained a record ON-current of 210 μA/μm, which is the highest value achieved in any monolayer transition-metal dichalcogenide- (TMD) based FET to date. An electron mobility of 142 cm(2)/V·s (with an ON/OFF current ratio exceeding 10(6)) is also achieved with In-WSe2 FETs at room temperature. This is the highest electron mobility reported for any back gated monolayer TMD material till date. The performance of n-type monolayer WSe2 FET was further improved by Al2O3 deposition on top of WSe2 to suppress the Coulomb scattering. Under the high-κ dielectric environment, electron mobility of Ag-WSe2 FET reached ~202 cm(2)/V·s with an ON/OFF ratio of over 10(6) and a high ON-current of 205 μA/μm. In tandem with a recent report of p-type monolayer WSe2 FET ( Fang , H . et al. Nano Lett. 2012 , 12 , ( 7 ), 3788 - 3792 ), this demonstration of a high-performance n-type monolayer WSe2 FET corroborates the superb potential of WSe2 for complementary digital logic applications.",
"title": ""
},
{
"docid": "7fe99b63d2b3d94918e4b2f536053b1c",
"text": "Delay Tolerant Networks (DTN) are networks of self-organizing wireless nodes, where end-to-end connectivity is intermittent. In these networks, forwarding decisions are made using locally collected knowledge about node behavior (e.g., past contacts between nodes) to predict which nodes are likely to deliver a content or bring it closer to the destination. One promising way of predicting future contact opportunities is to aggregate contacts seen in the past to a social graph and use metrics from complex network analysis (e.g., centrality and similarity) to assess the utility of a node to carry a piece of content. This aggregation presents an inherent tradeoff between the amount of time-related information lost during this mapping and the predictive capability of complex network analysis in this context. In this paper, we use two recent DTN routing algorithms that rely on such complex network analysis, to show that contact aggregation significantly affects the performance of these protocols. We then propose simple contact mapping algorithms that demonstrate improved performance up to a factor of 4 in delivery ratio, and robustness to various connectivity scenarios for both protocols.",
"title": ""
},
{
"docid": "f5ce4a13a8d081243151e0b3f0362713",
"text": "Despite the growing popularity of digital imaging devices, the problem of accurately estimating the spatial frequency response or optical transfer function (OTF) of these devices has been largely neglected. Traditional methods for estimating OTFs were designed for film cameras and other devices that form continuous images. These traditional techniques do not provide accurate OTF estimates for typical digital image acquisition devices because they do not account for the fixed sampling grids of digital devices . This paper describes a simple method for accurately estimating the OTF of a digital image acquisition device. The method extends the traditional knife-edge technique''3 to account for sampling. One of the principal motivations for digital imaging systems is the utility of digital image processing algorithms, many of which require an estimate of the OTF. Algorithms for enhancement, spatial registration, geometric transformations, and other purposes involve restoration—removing the effects of the image acquisition device. Nearly all restoration algorithms (e.g., the",
"title": ""
}
] | scidocsrr |
6201489b4c017a2e9d506a20358f5dc2 | Meta-Unsupervised-Learning: A supervised approach to unsupervised learning | [
{
"docid": "3bb905351ce1ea2150f37059ed256a90",
"text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"title": ""
},
{
"docid": "e4890b63e9a51029484354535765801c",
"text": "Many different machine learning algorithms exist; taking into account each algorithm's hyperparameters, there is a staggeringly large number of possible alternatives overall. We consider the problem of simultaneously selecting a learning algorithm and setting its hyperparameters, going beyond previous work that attacks these issues separately. We show that this problem can be addressed by a fully automated approach, leveraging recent innovations in Bayesian optimization. Specifically, we consider a wide range of feature selection techniques (combining 3 search and 8 evaluator methods) and all classification approaches implemented in WEKA's standard distribution, spanning 2 ensemble methods, 10 meta-methods, 27 base classifiers, and hyperparameter settings for each classifier. On each of 21 popular datasets from the UCI repository, the KDD Cup 09, variants of the MNIST dataset and CIFAR-10, we show classification performance often much better than using standard selection and hyperparameter optimization methods. We hope that our approach will help non-expert users to more effectively identify machine learning algorithms and hyperparameter settings appropriate to their applications, and hence to achieve improved performance.",
"title": ""
},
{
"docid": "fa984593899ca62025f54a7b4e7019c8",
"text": "Problems of clustering data from pairwise similarity information are ubiquitous in Computer Science. Theoretical treatments typically view the similarity information as ground-truth and then design algorithms to (approximately) optimize various graph-based objective functions. However, in most applications, this similarity information is merely based on some heuristic; the ground truth is really the unknown correct clustering of the data points and the real goal is to achieve low error on the data. In this work, we develop a theoretical approach to clustering from this perspective. In particular, motivated by recent work in learning theory that asks \"what natural properties of a similarity (or kernel) function are sufficient to be able to learn well?\" we ask \"what natural properties of a similarity function are sufficient to be able to cluster well?\"\n To study this question we develop a theoretical framework that can be viewed as an analog of the PAC learning model for clustering, where the object of study, rather than being a concept class, is a class of (concept, similarity function) pairs, or equivalently, a property the similarity function should satisfy with respect to the ground truth clustering. We then analyze both algorithmic and information theoretic issues in our model. While quite strong properties are needed if the goal is to produce a single approximately-correct clustering, we find that a number of reasonable properties are sufficient under two natural relaxations: (a) list clustering: analogous to the notion of list-decoding, the algorithm can produce a small list of clusterings (which a user can select from) and (b) hierarchical clustering: the algorithm's goal is to produce a hierarchy such that desired clustering is some pruning of this tree (which a user could navigate). 
We develop a notion of the clustering complexity of a given property (analogous to notions of capacity in learning theory), that characterizes its information-theoretic usefulness for clustering. We analyze this quantity for several natural game-theoretic and learning-theoretic properties, as well as design new efficient algorithms that are able to take advantage of them. Our algorithms for hierarchical clustering combine recent learning-theoretic approaches with linkage-style methods. We also show how our algorithms can be extended to the inductive case, i.e., by using just a constant-sized sample, as in property testing. The analysis here uses regularity-type results of [FK] and [AFKK].",
"title": ""
}
] | [
{
"docid": "ae8fde6c520fb4d1e18c4ff19d59a8d8",
"text": "Visual-to-auditory Sensory Substitution Devices (SSDs) are non-invasive sensory aids that provide visual information to the blind via their functioning senses, such as audition. For years SSDs have been confined to laboratory settings, but we believe the time has come to use them also for their original purpose of real-world practical visual rehabilitation. Here we demonstrate this potential by presenting for the first time new features of the EyeMusic SSD, which gives the user whole-scene shape, location & color information. These features include higher resolution and attempts to overcome previous stumbling blocks by being freely available to download and run from a smartphone platform. We demonstrate with use the EyeMusic the potential of SSDs in noisy real-world scenarios for tasks such as identifying and manipulating objects. We then discuss the neural basis of using SSDs, and conclude by discussing other steps-in-progress on the path to making their practical use more widespread.",
"title": ""
},
{
"docid": "a7d25265e939e484533bfd380a18502c",
"text": "Cloud computing is emerging as a viable platform for scientific exploration. Elastic and on-demand access to resources (and other services), the abstraction of “unlimited” resources, and attractive pricing models provide incentives for scientists to move their workflows into clouds. Generalizing these concepts beyond a single virtualized datacenter, it is possible to create federated marketplaces where different types of resources (e.g., clouds, HPC grids, supercomputers) that may be geographically distributed, are collectively exposed as a single elastic infrastructure. This presents opportunities for optimizing the execution of application workflows with heterogeneous and dynamic requirements, and tackling larger scale problems. In this paper, we introduce a framework to manage the end-to-end execution of data-intensive application workflows in dynamic software-defined resource federation. This framework enables the autonomic execution of workflows by elastically provisioning an appropriate set of resources that meet application requirements, and by adapting this set of resources at runtime as the requirements change. It also allows users to customize scheduling policies that drive the way resources federated and used. To demonstrate the benefits of our approach, we study the execution of two different data-intensive scientific workflows in a multi-cloud federation using different policies and objective functions.",
"title": ""
},
{
"docid": "799ccd75d6781e38cf5e2faee5784cae",
"text": "Recurrent neural networks (RNNs) form an important class of architectures among neural networks useful for language modeling and sequential prediction. However, optimizing RNNs is known to be harder compared to feed-forward neural networks. A number of techniques have been proposed in literature to address this problem. In this paper we propose a simple technique called fraternal dropout that takes advantage of dropout to achieve this goal. Specifically, we propose to train two identical copies of an RNN (that share parameters) with different dropout masks while minimizing the difference between their (pre-softmax) predictions. In this way our regularization encourages the representations of RNNs to be invariant to dropout mask, thus being robust. We show that our regularization term is upper bounded by the expectation-linear dropout objective which has been shown to address the gap due to the difference between the train and inference phases of dropout. We evaluate our model and achieve state-of-the-art results in sequence modeling tasks on two benchmark datasets – Penn Treebank and Wikitext-2. We also show that our approach leads to performance improvement by a significant margin in image captioning (Microsoft COCO) and semi-supervised (CIFAR-10) tasks.",
"title": ""
},
{
"docid": "2fc1afae973ddd832afa92d27222ef09",
"text": "In our 1990 paper, we showed that managers concerned with their reputations might choose to mimic the behavior of other managers and ignore their own information. We presented a model in which “smart” managers receive correlated, informative signals, whereas “dumb” managers receive independent, uninformative signals. Managers have an incentive to follow the herd to indicate to the labor market that they have received the same signal as others, and hence are likely to be smart. This model of reputational herding has subsequently found empirical support in a number of recent papers, including Judith A. Chevalier and Glenn D. Ellison’s (1999) study of mutual fund managers and Harrison G. Hong et al.’s (2000) study of equity analysts. We argued in our 1990 paper that reputational herding “requires smart managers’ prediction errors to be at least partially correlated with each other” (page 468). In their Comment, Marco Ottaviani and Peter Sørensen (hereafter, OS) take issue with this claim. They write: “correlation is not necessary for herding, other than in degenerate cases.” It turns out that the apparent disagreement hinges on how strict a definition of herding one adopts. In particular, we had defined a herding equilibrium as one in which agentB alwaysignores his own information and follows agent A. (See, e.g., our Propositions 1 and 2.) In contrast, OS say that there is herding when agent B sometimesignores his own information and follows agent A. The OS conclusion is clearly correct given their weaker definition of herding. At the same time, however, it also seems that for the stricter definition that we adopted in our original paper, correlated errors on the part of smart managers are indeed necessary for a herding outcome—even when one considers the expanded parameter space that OS do. We will try to give some intuition for why the different definitions of herding lead to different conclusions about the necessity of correlated prediction errors. 
Along the way, we hope to convince the reader that our stricter definition is more appropriate for isolating the economic effects at work in the reputational herding model. An example is helpful in illustrating what is going on. Consider a simple case where the parameter values are as follows: p = 3/4; q = 1/4; z = 1/2, and θ = 1/2. In our 1990 paper, we also imposed the constraint that z = αp + (1 − α)q, which further implies that α = 1/2. The heart of the OS Comment is the idea that this constraint should be disposed of—i.e., we should look at other values of α. Without loss of generality, we will consider values of α above 1/2, and distinguish two cases.",
"title": ""
},
{
"docid": "c27aee0b72f3e8239915a8d33c060e96",
"text": "Advances in artificial impedance surface conformal antennas are presented. A detailed conical impedance modulation is proposed for the first time. By coating an artificial impedance surface on a cone, we can control the conical surface wave radiating at the desired direction. The surface impedance is constructed by printing a dense texture of sub wavelength metal patches on a grounded dielectric slab. The effective surface impedance depends on the size of the patches, and can be varied as a function of position. The final devices are conical conformal antennas with simple layout and feeding. Simulated results are presented, and better aperture efficiency and lower side lobe level are obtained than our predecessors [2].",
"title": ""
},
{
"docid": "1be5530691f5d0638a399adfc9b6bc36",
"text": "Nontechnical losses, particularly due to electrical theft, have been a major concern in power system industries for a long time. Large-scale consumption of electricity in a fraudulent manner may imbalance the demand-supply gap to a great extent. Thus, there arises the need to develop a scheme that can detect these thefts precisely in the complex power networks. So, keeping focus on these points, this paper proposes a comprehensive top-down scheme based on decision tree (DT) and support vector machine (SVM). Unlike existing schemes, the proposed scheme is capable enough to precisely detect and locate real-time electricity theft at every level in power transmission and distribution (T&D). The proposed scheme is based on the combination of DT and SVM classifiers for rigorous analysis of gathered electricity consumption data. In other words, the proposed scheme can be viewed as a two-level data processing and analysis approach, since the data processed by DT are fed as an input to the SVM classifier. Furthermore, the obtained results indicate that the proposed scheme reduces false positives to a great extent and is practical enough to be implemented in real-time scenarios.",
"title": ""
},
{
"docid": "955376cf6d04373c407987613d1c2bd1",
"text": "Active learning (AL) is an increasingly popular strategy for mitigating the amount of labeled data required to train classifiers, thereby reducing annotator effort. We describe a real-world, deployed application of AL to the problem of biomedical citation screening for systematic reviews at the Tufts Medical Center's Evidence-based Practice Center. We propose a novel active learning strategy that exploits a priori domain knowledge provided by the expert (specifically, labeled features)and extend this model via a Linear Programming algorithm for situations where the expert can provide ranked labeled features. Our methods outperform existing AL strategies on three real-world systematic review datasets. We argue that evaluation must be specific to the scenario under consideration. To this end, we propose a new evaluation framework for finite-pool scenarios, wherein the primary aim is to label a fixed set of examples rather than to simply induce a good predictive model. We use a method from medical decision theory for eliciting the relative costs of false positives and false negatives from the domain expert, constructing a utility measure of classification performance that integrates the expert preferences. Our findings suggest that the expert can, and should, provide more information than instance labels alone. In addition to achieving strong empirical results on the citation screening problem, this work outlines many important steps for moving away from simulated active learning and toward deploying AL for real-world applications.",
"title": ""
},
{
"docid": "b6fa1ee8c2f07b34768a78591c33bbbe",
"text": "We prove that there are arbitrarily long arithmetic progressions of primes. There are three major ingredients. [. . . ] [. . . ] for all x ∈ ZN (here (m0, t0, L0) = (3, 2, 1)) and E ( ν((x− y)/2)ν((x− y + h2)/2)ν(−y)ν(−y − h1)× × ν((x− y′)/2)ν((x− y′ + h2)/2)ν(−y)ν(−y − h1)× × ν(x)ν(x + h1)ν(x + h2)ν(x + h1 + h2) ∣∣∣∣ x, h1, h2, y, y′ ∈ ZN) = 1 + o(1) (0.1) (here (m0, t0, L0) = (12, 5, 2)). [. . . ] Proposition 0.1 (Generalised von Neumann). Suppose that ν is k-pseudorandom. Let f0, . . . , fk−1 ∈ L(ZN) be functions which are pointwise bounded by ν+νconst, or in other words |fj(x)| 6 ν(x) + 1 for all x ∈ ZN , 0 6 j 6 k − 1. (0.2) Let c0, . . . , ck−1 be a permutation of {0, 1, . . . , k − 1} (in practice we will take cj := j). Then E ( k−1 ∏ j=0 fj(x + cjr) ∣∣∣∣ x, r ∈ ZN) = O( inf 06j6k−1 ‖fj‖Uk−1) + o(1).",
"title": ""
},
{
"docid": "36dde22c25339790e7c011ca5e8677e4",
"text": "Land surface temperature and emissivity (LST&E) products are generated by the Moderate Resolution Imaging Spectroradiometer (MODIS) and Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) on the National Aeronautics and Space Administration's Terra satellite. These products are generated at different spatial, spectral, and temporal resolutions, resulting in discrepancies between them that are difficult to quantify, compounded by the fact that different retrieval algorithms are used to produce them. The highest spatial resolution MODIS emissivity product currently produced is from the day/night algorithm, which has a spatial resolution of 5 km. The lack of a high-spatial-resolution emissivity product from MODIS limits the usefulness of the data for a variety of applications and limits utilization with higher resolution products such as those from ASTER. This paper aims to address this problem by using the ASTER Temperature Emissivity Separation (TES) algorithm, combined with an improved atmospheric correction method, to generate the LST&E products for MODIS at 1-km spatial resolution and for ASTER in a consistent manner. The rms differences between the ASTER and MODIS emissivities generated from TES over the southwestern U.S. were 0.013 at 8.6 μm and 0.0096 at 11 μm, with good correlations of up to 0.83. The validation with laboratory-measured sand samples from the Algodones and Kelso Dunes in CA showed a good agreement in spectral shape and magnitude, with mean emissivity differences in all bands of 0.009 and 0.010 for MODIS and ASTER, respectively. These differences are equivalent to approximately 0.6 K in the LST for a material at 300 K and at 11 μm.",
"title": ""
},
{
"docid": "be502c3ea5369f31293f691bca6df775",
"text": "Projects in the area of architectural design and urban planning typically engage several architects as well as experts from other professions. While the design and review meetings thus often involve a large number of cooperating participants, the actual design is still done by the individuals in the time between those meetings using desktop PCs and CAD applications. A real collaborative approach to architectural design and urban planning is often limited to early paper-based sketches. In order to overcome these limitations we designed and realized the Augmented Round Table, a new approach to support complex design and planning decisions for architects. While AR has been applied to this area earlier, our approach does not try to replace the use of CAD systems but rather integrates them seamlessly into the collaborative AR environment. The approach is enhanced by intuitive interaction mechanisms that can be easily configured for different application scenarios.",
"title": ""
},
{
"docid": "6ba91269b707f64d2a45729161f44807",
"text": "The article is related to the development of techniques for automatic recognition of bird species by their sounds. It has been demonstrated earlier that a simple model of one time-varying sinusoid is very useful in classification and recognition of typical bird sounds. However, a large class of bird sounds are not pure sinusoids but have a clear harmonic spectrum structure. We introduce a way to classify bird syllables into four classes by their harmonic structure.",
"title": ""
},
{
"docid": "11b05bd0c0b5b9319423d1ec0441e8a7",
"text": "Today’s huge volumes of data, heterogeneous information and communication technologies, and borderless cyberinfrastructures create new challenges for security experts and law enforcement agencies investigating cybercrimes. The future of digital forensics is explored, with an emphasis on these challenges and the advancements needed to effectively protect modern societies and pursue cybercriminals.",
"title": ""
},
{
"docid": "b51021e995fc4be50028a0a152db7e7a",
"text": "Human pose estimation using deep neural networks aims to map input images with large variations into multiple body keypoints, which must satisfy a set of geometric constraints and interdependence imposed by the human body model. This is a very challenging nonlinear manifold learning process in a very high dimensional feature space. We believe that the deep neural network, which is inherently an algebraic computation system, is not the most efficient way to capture highly sophisticated human knowledge, for example those highly coupled geometric characteristics and interdependence between keypoints in human poses. In this work, we propose to explore how external knowledge can be effectively represented and injected into the deep neural networks to guide its training process using learned projections that impose proper prior. Specifically, we use the stacked hourglass design and inception-resnet module to construct a fractal network to regress human pose images into heatmaps with no explicit graphical modeling. We encode external knowledge with visual features, which are able to characterize the constraints of human body models and evaluate the fitness of intermediate network output. We then inject these external features into the neural network using a projection matrix learned using an auxiliary cost function. The effectiveness of the proposed inception-resnet module and the benefit in guided learning with knowledge projection is evaluated on two widely used human pose estimation benchmarks. Our approach achieves state-of-the-art performance on both datasets.",
"title": ""
},
{
"docid": "41135401a2f04797ea2b4989065613bd",
"text": "With the rapid expansion of new available information presented to us online on a daily basis, text classification becomes imperative in order to classify and maintain it. Word2vec offers a unique perspective to the text mining community. By converting words and phrases into a vector representation, word2vec takes an entirely new approach on text classification. Based on the assumption that word2vec brings extra semantic features that helps in text classification, our work demonstrates the effectiveness of word2vec by showing that tf-idf and word2vec combined can outperform tf-idf because word2vec provides complementary features (e.g. semantics that tf-idf can't capture) to tf-idf. Our results show that the combination of word2vec weighted by tf-idf and tf-idf does not outperform tf-idf consistently. It is consistent enough to say the combination of the two can outperform either individually.",
"title": ""
},
{
"docid": "bfdf6e8e98793388dcf8f13b7147faf0",
"text": "Recently, Long Term Evolution (LTE) has developed a femtocell for indoor coverage extension. However, interference problem between the femtocell and the macrocell should be solved in advance. In this paper, we propose an interference management scheme in the LTE femtocell systems using Fractional Frequency Reuse (FFR). Under the macrocell allocating frequency band by the FFR, the femtocell chooses sub-bands which are not used in the macrocell sub-area to avoid interference. Simulation results show that proposed scheme enhances total/edge throughputs and reduces the outage probability in overall network, especially for the cell edge users.",
"title": ""
},
{
"docid": "4a098609770618240fbaebbbc891883d",
"text": "We present CHARAGRAM embeddings, a simple approach for learning character-based compositional models to embed textual sequences. A word or sentence is represented using a character n-gram count vector, followed by a single nonlinear transformation to yield a low-dimensional embedding. We use three tasks for evaluation: word similarity, sentence similarity, and part-of-speech tagging. We demonstrate that CHARAGRAM embeddings outperform more complex architectures based on character-level recurrent and convolutional neural networks, achieving new state-of-the-art performance on several similarity tasks. 1",
"title": ""
},
{
"docid": "0eb61ddeca941e34b40bfe3e58b70497",
"text": "This article surveys the literature on analyses of mobile traffic collected by operators within their network infrastructure. This is a recently emerged research field, and, apart from a few outliers, relevant works cover the period from 2005 to date, with a sensible densification over the last three years. We provide a thorough review of the multidisciplinary activities that rely on mobile traffic datasets, identifying major categories and sub-categories in the literature, so as to outline a hierarchical classification of research lines. When detailing the works pertaining to each class, we balance a comprehensive view of state-of-the-art results with punctual focuses on the methodological aspects. Our approach provides a complete introductory guide to the research based on mobile traffic analysis. It allows summarizing the main findings of the current state-of-the-art, as well as pinpointing important open research directions.",
"title": ""
},
{
"docid": "9e3263866208bbc6a9019b3c859d2a66",
"text": "A residual network (or ResNet) is a standard deep neural net architecture, with stateof-the-art performance across numerous applications. The main premise of ResNets is that they allow the training of each layer to focus on fitting just the residual of the previous layer’s output and the target output. Thus, we should expect that the trained network is no worse than what we can obtain if we remove the residual layers and train a shallower network instead. However, due to the non-convexity of the optimization problem, it is not at all clear that ResNets indeed achieve this behavior, rather than getting stuck at some arbitrarily poor local minimum. In this paper, we rigorously prove that arbitrarily deep, nonlinear residual units indeed exhibit this behavior, in the sense that the optimization landscape contains no local minima with value above what can be obtained with a linear predictor (namely a 1-layer network). Notably, we show this under minimal or no assumptions on the precise network architecture, data distribution, or loss function used. We also provide a quantitative analysis of approximate stationary points for this problem. Finally, we show that with a certain tweak to the architecture, training the network with standard stochastic gradient descent achieves an objective value close or better than any linear predictor.",
"title": ""
},
{
"docid": "d1237eb5ebdfafac5a80215868dee206",
"text": "Multipath is exploited to image targets that are hidden due to lack of line of sight (LOS) path in urban environments. Urban radar scenes include building walls, therefore creating reflections causing multipath returns. Conventional processing via synthetic aperture beamforming algorithms do not detect or localize the target at its true position. To remove these limitations, two multipath exploitation techniques to image a hidden target at its true location are presented under the assumptions that the locations of the reflecting walls are known and that the target multipath is resolvable and detectable. The first technique directly operates on the radar returns, whereas the second operates on the traditional beamformed image. Both these techniques mitigate the false alarms arising from the multipath while simultaneously permitting the shadowed target to be detected at its true location. While these techniques are general, they are examined for two important urban radar applications: detecting shadowed targets in an urban canyon, and detecting shadowed targets around corners.",
"title": ""
},
{
"docid": "5b7930de475b6f83f8333439fd0f9c3b",
"text": "Cloud applications are increasingly built from a mixture of runtime technologies. Hosted functions and service-oriented web hooks are among the most recent ones which are natively supported by cloud platforms. They are collectively referred to as serverless computing by application engineers due to the transparent on-demand instance activation and microbilling without the need to provision infrastructure explicitly. This half-day tutorial explains the use cases for serverless computing and the drivers and existing software solutions behind the programming and deployment model also known as Function-as-a-Service in the overall cloud computing stack. Furthermore, it presents practical open source tools for deriving functions from legacy code and for the management and execution of functions in private and public clouds.",
"title": ""
}
] | scidocsrr |
799d0d9f3135a816fa864421c1a62204 | Towards Creation of a Corpus for Argumentation Mining the Biomedical Genetics Research Literature | [
{
"docid": "5f7adc28fab008d93a968b6a1e5ad061",
"text": "This paper describes recent approaches using text-mining to automatically profile and extract arguments from legal cases. We outline some of the background context and motivations. We then turn to consider issues related to the construction and composition of a corpora of legal cases. We show how a Context-Free Grammar can be used to extract arguments, and how ontologies and Natural Language Processing can identify complex information such as case factors and participant roles. Together the results bring us closer to automatic identification of legal arguments.",
"title": ""
}
] | [
{
"docid": "85e4a8dc8f27c5b73d147a36cace80d4",
"text": "REQUIRED) In this paper, we present a social/behavioral study of individual information security practices of internet users in Latin America, specifically presenting the case of Bolivia. The research model uses social cognitive theory in order to explain the individual cognitive factors that influence information security behavior. The model includes individuals’ beliefs about their abilities to competently use computer information security tools and information security awareness in the determination of effective information security practices. The operationalization of constructs that are part of our research model, such as information security practice as the dependent variable, self-efficacy and information security awareness as independent variables , are presented both in Spanish and English. In this study, we offer the analysis of a survey of 255 Internet users from Bolivia who replied to our survey and provided responses about their information security behavior. A discussion about information security awareness and practices is presented.",
"title": ""
},
{
"docid": "fdfea6d3a5160c591863351395929a99",
"text": "Deep networks have recently enjoyed enormous success when applied to recognition and classification problems in computer vision [22, 33], but their use in graphics problems has been limited ([23, 7] are notable recent exceptions). In this work, we present a novel deep architecture that performs new view synthesis directly from pixels, trained from a large number of posed image sets. In contrast to traditional approaches, which consist of multiple complex stages of processing, each of which requires careful tuning and can fail in unexpected ways, our system is trained end-to-end. The pixels from neighboring views of a scene are presented to the network, which then directly produces the pixels of the unseen view. The benefits of our approach include generality (we only require posed image sets and can easily apply our method to different domains), and high quality results on traditionally difficult scenes. We believe this is due to the end-to-end nature of our system, which is able to plausibly generate pixels according to color, depth, and texture priors learnt automatically from the training data. We show view interpolation results on imagery from the KITTI dataset [12], from data from [1] as well as on Google Street View images. To our knowledge, our work is the first to apply deep learning to the problem of new view synthesis from sets of real-world, natural imagery.",
"title": ""
},
{
"docid": "f2707d7fcd5d8d9200d4cc8de8ff1042",
"text": "This paper describes recent work on the “Crosswatch” project, which is a computer vision-based smartphone system developed for providing guidance to blind and visually impaired travelers at traffic intersections. A key function of Crosswatch is self-localization - the estimation of the user's location relative to the crosswalks in the current traffic intersection. Such information may be vital to users with low or no vision to ensure that they know which crosswalk they are about to enter, and are properly aligned and positioned relative to the crosswalk. However, while computer vision-based methods have been used for finding crosswalks and helping blind travelers align themselves to them, these methods assume that the entire crosswalk pattern can be imaged in a single frame of video, which poses a significant challenge for a user who lacks enough vision to know where to point the camera so as to properly frame the crosswalk. In this paper we describe work in progress that tackles the problem of crosswalk detection and self-localization, building on recent work describing techniques enabling blind and visually impaired users to acquire 360° image panoramas while turning in place on a sidewalk. The image panorama is converted to an aerial (overhead) view of the nearby intersection, centered on the location that the user is standing at, so as to facilitate matching with a template of the intersection obtained from Google Maps satellite imagery. The matching process allows crosswalk features to be detected and permits the estimation of the user's precise location relative to the crosswalk of interest. We demonstrate our approach on intersection imagery acquired by blind users, thereby establishing the feasibility of the approach.",
"title": ""
},
{
"docid": "1de1631bb0da37f2c3ddd856fcdbb0f1",
"text": "J.E. Dietrich (ed.), Female Puberty: A Comprehensive Guide for Clinicians, DOI 10.1007/978-1-4939-0912-4_2, © Springer Science+Business Media New York 2014 Abstract The development of a female child into an adult woman is a complex process. Puberty, and the hormones that fuel the physical and psychological changes which are its hallmarks, is generally viewed as a rough and often unpredictable storm that must be weathered by the surrounding adults. The more we learn, however, about the intricate interplay between the endocrine regulators and the endorgan responses to this hormonal symphony, puberty seems less like chaos, and more of an incredible metamorphosis that leads to reproductive capacity and psychosocial maturation. Physically, female puberty is marked by accelerated growth and the development of secondary sexual characteristics. Secondary sexual characteristics are those that distinguish two different sexes in a species, but are not directly part of the reproductive system. Analogies from the animal kingdom include manes in male lions and the elaborate tails of male peacocks. The visible/external sequence of events is generally: breast budding (thelarche), onset of pubic hair (pubarche), maximal growth velocity, menarche, development of axillary hair, attainment of the adult breast type, adult pubic hair pattern. Underlying these external developments is the endocrine axis orchestrating the increase in gonadal steroid production (gonadarche), the increase in adrenal androgen production (adrenarche) and the associated changes in the reproductive tract that allow fertility. Meanwhile, the brain is rapidly adapting to the new hormonal milieu. The extent of variation in this scenario is enormous. On average, the process from accelerated growth and breast budding to menarche is approximately 4.5 years with a range from 1.5 to 6 years. There are differences in timing and expression of maturation based on ethnicity, geography, and genetics. 
Being familiar with the spectrum that encompasses normal development is [. . . ] (Chapter 2: Normal Pubertal Physiology in Females)",
"title": ""
},
{
"docid": "3564941b9e2bcbd43a464bd8a2385311",
"text": "Adult patients seeking orthodontic treatment are increasingly motivated by esthetic considerations. The majority of these patients reject wearing labial fixed appliances and are looking instead to more esthetic treatment options, including lingual orthodontics and Invisalign appliances. Since Align Technology introduced the Invisalign appliance in 1999 in an extensive public campaign, the appliance has gained tremendous attention from adult patients and dental professionals. The transparency of the Invisalign appliance enhances its esthetic appeal for those adult patients who are averse to wearing conventional labial fixed orthodontic appliances. Although guidelines about the types of malocclusions that this technique can treat exist, few clinical studies have assessed the effectiveness of the appliance. A few recent studies have outlined some of the limitations associated with this technique that clinicians should recognize early before choosing treatment options.",
"title": ""
},
{
"docid": "3b903b284e6a7bfb54113242b1143ddc",
"text": "Hypertension — the chronic elevation of blood pressure — is a major human health problem. In most cases, the root cause of the disease remains unknown, but there is mounting evidence that many forms of hypertension are initiated and maintained by an elevated sympathetic tone. This review examines how the sympathetic tone to cardiovascular organs is generated, and discusses how elevated sympathetic tone can contribute to hypertension.",
"title": ""
},
{
"docid": "92ae99edf23f41ffcf2f1b091132ac3c",
"text": "Restricted Boltzmann machines (RBMs) are powerful machine learning models, but learning and some kinds of inference in the model require sampling-based approximations, which, in classical digital computers, are implemented using expensive MCMC. Physical computation offers the opportunity to reduce the cost of sampling by building physical systems whose natural dynamics correspond to drawing samples from the desired RBM distribution. Such a system avoids the burn-in and mixing cost of a Markov chain. However, hardware implementations of this variety usually entail limitations such as low-precision and limited range of the parameters and restrictions on the size and topology of the RBM. We conduct software simulations to determine how harmful each of these restrictions is. Our simulations are based on the D-Wave Two computer, but the issues we investigate arise in most forms of physical computation. Our findings suggest that designers of new physical computing hardware and algorithms for physical computers should focus their efforts on overcoming the limitations imposed by the topology restrictions of currently existing physical computers.",
"title": ""
},
{
"docid": "76cef1b6d0703127c3ae33bcf71cdef8",
"text": "Risks have a significant impact on a construction project’s performance in terms of cost, time and quality. As the size and complexity of the projects have increased, an ability to manage risks throughout the construction process has become a central element preventing unwanted consequences. How risks are shared between the project actors is to a large extent governed by the procurement option and the content of the related contract documents. Therefore, selecting an appropriate project procurement option is a key issue for project actors. The overall aim of this research is to increase the understanding of risk management in the different procurement options: design-bid-build contracts, designbuild contracts and collaborative form of partnering. Deeper understanding is expected to contribute to a more effective risk management and, therefore, a better project output and better value for both clients and contractors. The study involves nine construction projects recently performed in Sweden and comprises a questionnaire survey and a series of interviews with clients, contractors and consultants involved in these construction projects. The findings of this work show a lack of an iterative approach to risk management, which is a weakness in current procurement practices. This aspect must be addressed if the risk management process is to serve projects and, thus, their clients. The absence of systematic risk management is especially noted in the programme phase, where it arguably has the greatest potential impact. The production phase is where most interest and activity are to be found. As a matter of practice, the communication of risks between the actors simply does not work to the extent that it must if projects are to be delivered with certainty, irrespective of the form of procurement. A clear connection between the procurement option and risk management in construction projects has been found. 
Traditional design-bid-build contracts do not create opportunities for open discussion of project risks and joint risk management. A number of drivers of and obstacles to effective risk management have been explored in the study. Every actor’s involvement in dialogue, effective communication and information exchange, open attitudes and trustful relationship are the factors that support open discussion of project risks and, therefore, contribute to successful risk management. Based on the findings, a number of recommendations facilitating more effective risk management have been developed for the industry practitioners. Keywords--Risk Management, Risk Allocation, Construction Project, Construction Contract, Design-Bid-Build, Design-Build, Partnering",
"title": ""
},
{
"docid": "fb09d91b8e572cc9d0179f14bdd74b53",
"text": "Being grateful has been associated with many positive outcomes, including greater happiness, positive affect, optimism, and self-esteem. There is limited research, however, on the associations between gratitude and different domains of life satisfaction across cultures. The current study examined the associations between gratitude and three domains of life satisfaction, including satisfaction in relationships, work, and health, and overall life satisfaction, in the United States and Japan. A total of 945 participants were drawn from two samples of middle aged and older adults, the Midlife Development in the United States and the Midlife Development in Japan. There were significant positive bivariate associations between gratitude and all four measures of life satisfaction. In addition, after adjusting for demographics, neuroticism, extraversion, and the other measures of satisfaction, gratitude was uniquely and positively associated with satisfaction with relationships and life overall but not with satisfaction with work or health. Furthermore, results indicated that women and individuals who were more extraverted and lived in the United States were more grateful and individuals with less than a high school degree were less grateful. The findings from this study suggest that gratitude is uniquely associated with specific domains of life satisfaction. Results are discussed with respect to future research and the design and implementation of gratitude interventions, particularly when including individuals from different cultures.",
"title": ""
},
{
"docid": "6f370d729b8e8172b218071af89af7ad",
"text": "In this article, we present an image-based modeling and rendering system, which we call pop-up light field, that models a sparse light field using a set of coherent layers. In our system, the user specifies how many coherent layers should be modeled or popped up according to the scene complexity. A coherent layer is defined as a collection of corresponding planar regions in the light field images. A coherent layer can be rendered free of aliasing all by itself, or against other background layers. To construct coherent layers, we introduce a Bayesian approach, coherence matting, to estimate alpha matting around segmented layer boundaries by incorporating a coherence prior in order to maintain coherence across images.We have developed an intuitive and easy-to-use user interface (UI) to facilitate pop-up light field construction. The key to our UI is the concept of human-in-the-loop where the user specifies where aliasing occurs in the rendered image. The user input is reflected in the input light field images where pop-up layers can be modified. The user feedback is instant through a hardware-accelerated real-time pop-up light field renderer. Experimental results demonstrate that our system is capable of rendering anti-aliased novel views from a sparse light field.",
"title": ""
},
{
"docid": "e4000835f1870399c4270492fb81694b",
"text": "In this paper, a new design of mm-Wave phased array 5G antenna for multiple-input multiple-output (MIMO) applications has been introduced. Two identical linear phased arrays with eight leaf-shaped bow-tie antenna elements have been used at different sides of the mobile-phone PCB. An Arlon AR 350 dielectric with properties of h=0.5 mm, ε=3.5, and δ=0.0026 has been used as a substrate of the proposed design. The antenna is working in the frequency range of 25 to 40 GHz (more than 45% FBW) and can be easily fit into current handheld devices. The proposed MIMO antenna has good radiation performances at 28 and 38 GHz which both are powerful candidates to be the carrier frequency of the future 5G cellular networks.",
"title": ""
},
{
"docid": "cbe3a584e8fcabbd42f732b5fe247736",
"text": "Wall‐climbing welding robots (WCWRs) can replace workers in manufacturing and maintaining large unstructured equipment, such as ships. The adhesion mechanism is the key component of WCWRs. As it is directly related to the robot’s ability in relation to adsorbing, moving flexibly and obstacle‐passing. In this paper, a novel non‐contact adjustably magnetic adhesion mechanism is proposed. The magnet suckers are mounted under the robot’s axils and the sucker and wall are in non‐contact. In order to pass obstacles, the sucker and the wheel unit can be pulled up and pushed down by a lifting mechanism. The magnetic adhesion force can be adjusted by changing the height of the gap between the sucker and the wall by the lifting mechanism. In order to increase the adhesion force, the value of the sucker’s magnetic energy density (MED) is maximized by optimizing the magnet sucker’s structure parameters with a finite element method. Experiments prove that the magnetic adhesion mechanism has enough adhesion force and that the WCWR can complete wall‐climbing work within a large unstructured environment.",
"title": ""
},
{
"docid": "c61877099eddc31a281fa82fd942072e",
"text": "The trend of bring your own device (BYOD) has been rapidly adopted by organizations. Despite the pros and cons of BYOD adoption, this trend is expected to inevitably keep increasing. Yet, BYOD has raised significant concerns about information system security as employees use their personal devices to access organizational resources. This study aims to examine employees' intention to comply with an organization’s IS security policy in the context of BYOD. We derived our research model from reactance, protection motivation and organizational justice theories. The results of this study demonstrate that an employee’s perceived response efficacy and perceived justice positively affect an employee’s intention to comply with BYOD security policy. Perceived security threat appraisal was found to marginally promote the intention to comply. Conversely, perceived freedom threat due to imposed security policy negatively affects an employee’s intention to comply with the security policy. We also found that an employee’s perceived cost associated with compliance behavior positively affects an employee’s perceptions of threat to an individual freedom. An interesting double-edged sword effect of a security awareness program was confirmed by the results. BYOD security awareness program increases an employee’s response efficacy (a positive effect) and response cost (a negative effect). The study also demonstrates the importance of having an IT support team for BYOD, as it increases an employee’s response-efficacy and perceived justice.",
"title": ""
},
{
"docid": "96c1da4e4b52014e4a9c5df098938c98",
"text": "Deep learning models have lately shown great performance in various fields such as computer vision, speech recognition, speech translation, and natural language processing. However, alongside their state-of-the-art performance, it is still generally unclear what is the source of their generalization ability. Thus, an important question is what makes deep neural networks able to generalize well from the training set to new data. In this article, we provide an overview of the existing theory and bounds for the characterization of the generalization error of deep neural networks, combining both classical and more recent theoretical and empirical results.",
"title": ""
},
{
"docid": "faca51b6762e4d7c3306208ad800abd3",
"text": "Two images of a single scene/object are related by the epipolar geometry, which can be described by a 3×3 singular matrix called the essential matrix if images' internal parameters are known, or the fundamental matrix otherwise. It captures all geometric information contained in two images, and its determination is very important in many applications such as scene modeling and vehicle navigation. This paper gives an introduction to the epipolar geometry, and provides a complete review of the current techniques for estimating the fundamental matrix and its uncertainty. A well-founded measure is proposed to compare these techniques. Projective reconstruction is also reviewed. The software which we have developed for this review is available on the Internet.",
"title": ""
},
{
"docid": "0e6fd08318cf94ea683892d737ae645a",
"text": "We present simulations and demonstrate experimentally a new concept in winding a planar induction heater. The winding results in minimal ac magnetic field below the plane of the heater, while concentrating the flux above. Ferrites and other types of magnetic shielding are typically not required. The concept of a one-sided ac field can generalized to other geometries as well.",
"title": ""
},
{
"docid": "6893ce06d616d08cf0a9053dc9ea493d",
"text": "Hope is the sum of goal thoughts as tapped by pathways and agency. Pathways reflect the perceived capability to produce goal routes; agency reflects the perception that one can initiate action along these pathways. Using trait and state hope scales, studies explored hope in college student athletes. In Study 1, male and female athletes were higher in trait hope than nonathletes; moreover, hope significantly predicted semester grade averages beyond cumulative grade point average and overall self-worth. In Study 2, with female cross-country athletes, trait hope predicted athletic outcomes; further, weekly state hope tended to predict athletic outcomes beyond dispositional hope, training, and self-esteem, confidence, and mood. In Study 3, with female track athletes, dispositional hope significantly predicted athletic outcomes beyond variance related to athletic abilities and affectivity; moreover, athletes had higher hope than nonathletes.",
"title": ""
},
{
"docid": "36d79b2b2640d1b2ac7f8ef057abc75c",
"text": "Published scientific articles are linked together into a graph, the citation graph, through their citations. This paper explores the notion of similarity based on connectivity alone, and proposes several algorithms to quantify it. Our metrics take advantage of the local neighborhoods of the nodes in the citation graph. Two variants of link-based similarity estimation between two nodes are described, one based on the separate local neighborhoods of the nodes, and another based on the joint local neighborhood expanded from both nodes at the same time. The algorithms are implemented and evaluated on a subgraph of the citation graph of computer science in a retrieval context. The results are compared with text-based similarity, and demonstrate the complementarity of link-based and text-based retrieval.",
"title": ""
},
{
"docid": "e82681b5140f3a9b283bbd02870f18d5",
"text": "Employee turnover has been identified as a key issue for organizations because of its adverse impact on work place productivity and long term growth strategies. To solve this problem, organizations use machine learning techniques to predict employee turnover. Accurate predictions enable organizations to take action for retention or succession planning of employees. However, the data for this modeling problem comes from HR Information Systems (HRIS); these are typically under-funded compared to the Information Systems of other domains in the organization which are directly related to its priorities. This leads to the prevalence of noise in the data that renders predictive models prone to over-fitting and hence inaccurate. This is the key challenge that is the focus of this paper, and one that has not been addressed historically. The novel contribution of this paper is to explore the application of Extreme Gradient Boosting (XGBoost) technique which is more robust because of its regularization formulation. Data from the HRIS of a global retailer is used to compare XGBoost against six historically used supervised classifiers and demonstrate its significantly higher accuracy for predicting employee turnover. Keywords—turnover prediction; machine learning; extreme gradient boosting; supervised classification; regularization",
"title": ""
},
{
"docid": "4d99090b874776b89092f63f21c8ea93",
"text": "Object viewpoint classification aims at predicting an approximate 3D pose of objects in a scene and is receiving increasing attention. State-of-the-art approaches to viewpoint classification use generative models to capture relations between object parts. In this work we propose to use a mixture of holistic templates (e.g. HOG) and discriminative learning for joint viewpoint classification and category detection. Inspired by the work of Felzenszwalb et al 2009, we discriminatively train multiple components simultaneously for each object category. A large number of components are learned in the mixture and they are associated with canonical viewpoints of the object through different levels of supervision, being fully supervised, semi-supervised, or unsupervised. We show that discriminative learning is capable of producing mixture components that directly provide robust viewpoint classification, significantly outperforming the state of the art: we improve the viewpoint accuracy on the Savarese et al 3D Object database from 57% to 74%, and that on the VOC 2006 car database from 73% to 86%. In addition, the mixture-of-templates approach to object viewpoint/pose has a natural extension to the continuous case by discriminatively learning a linear appearance model locally at each discrete view. We evaluate continuous viewpoint estimation on a dataset of everyday objects collected using IMUs for groundtruth annotation: our mixture model shows great promise comparing to a number of baselines including discrete nearest neighbor and linear regression.",
"title": ""
}
] | scidocsrr |
945ba57676c8d5d5f087939aa6b5a6b5 | Obstacle detection with ultrasonic sensors and signal analysis metrics | [
{
"docid": "990c123bcc1bf3bbf2a42990ba724169",
"text": "This paper demonstrates an innovative and simple solution for obstacle detection and collision avoidance of unmanned aerial vehicles (UAVs) optimized for and evaluated with quadrotors. The sensors exploited in this paper are low-cost ultrasonic and infrared range finders, which are much cheaper though noisier than more expensive sensors such as laser scanners. This needs to be taken into consideration for the design, implementation, and parametrization of the signal processing and control algorithm for such a system, which is the topic of this paper. For improved data fusion, inertial and optical flow sensors are used as a distance derivative for reference. As a result, a UAV is capable of distance controlled collision avoidance, which is more complex and powerful than comparable simple solutions. At the same time, the solution remains simple with a low computational burden. Thus, memory and time-consuming simultaneous localization and mapping is not required for collision avoidance.",
"title": ""
}
] | [
{
"docid": "963f97c27adbc7d1136e713247e9a852",
"text": "Scheduling in the context of parallel systems is often thought of in terms of assigning tasks in a program to processors, so as to minimize the makespan. This formulation assumes that the processors are dedicated to the program in question. But when the parallel system is shared by a number of users, this is not necessarily the case. In the context of multiprogrammed parallel machines, scheduling refers to the execution of threads from competing programs. This is an operating system issue, involved with resource allocation, not a program development issue. Scheduling schemes for multiprogrammed parallel systems can be classi ed as one or two leveled. Single-level scheduling combines the allocation of processing power with the decision of which thread will use it. Two level scheduling decouples the two issues: rst, processors are allocated to the job, and then the job's threads are scheduled using this pool of processors. The processors of a parallel system can be shared in two basic ways, which are relevant for both one-level and two-level scheduling. One approach is to use time slicing, e.g. when all the processors in the system (or all the processors in the pool) service a global queue of ready threads. The other approach is to use space slicing, and partition the processors statically or dynamically among the di erent jobs. As these approaches are orthogonal to each other, it is also possible to combine them in various ways; for example, this is often done in gang scheduling. Systems using the various approaches are described, and the implications of the di erent mechanisms are discussed. The goals of this survey are to describe the many di erent approaches within a uni ed framework based on the mechanisms used to achieve multiprogramming, and at the same time document commercial systems that have not been described in the open literature.",
"title": ""
},
{
"docid": "add026119d82ec730038fcc3521304c5",
"text": "Deep Learning has emerged as a new area in machine learning and is applied to a number of signal and image applications.The main purpose of the work presented in this paper, is to apply the concept of a Deep Learning algorithm namely, Convolutional neural networks (CNN) in image classification. The algorithm is tested on various standard datasets, like remote sensing data of aerial images (UC Merced Land Use Dataset) and scene images from SUN database. The performance of the algorithm is evaluated based on the quality metric known as Mean Squared Error (MSE) and classification accuracy. The graphical representation of the experimental results is given on the basis of MSE against the number of training epochs. The experimental result analysis based on the quality metrics and the graphical representation proves that the algorithm (CNN) gives fairly good classification accuracy for all the tested datasets.",
"title": ""
},
{
"docid": "6e675e8a57574daf83ab78cea25688f5",
"text": "Collecting quality data from software projects can be time-consuming and expensive. Hence, some researchers explore âunsupervisedâ approaches to quality prediction that does not require labelled data. An alternate technique is to use âsupervisedâ approaches that learn models from project data labelled with, say, âdefectiveâ or ânot-defectiveâ. Most researchers use these supervised models since, it is argued, they can exploit more knowledge of the projects. \nAt FSEâ16, Yang et al. reported startling results where unsupervised defect predictors outperformed supervised predictors for effort-aware just-in-time defect prediction. If confirmed, these results would lead to a dramatic simplification of a seemingly complex task (data mining) that is widely explored in the software engineering literature. \nThis paper repeats and refutes those results as follows. (1) There is much variability in the efficacy of the Yang et al. predictors so even with their approach, some supervised data is required to prune weaker predictors away. (2) Their findings were grouped across N projects. When we repeat their analysis on a project-by-project basis, supervised predictors are seen to work better. \nEven though this paper rejects the specific conclusions of Yang et al., we still endorse their general goal. In our our experiments, supervised predictors did not perform outstandingly better than unsupervised ones for effort-aware just-in-time defect prediction. Hence, they may indeed be some combination of unsupervised learners to achieve comparable performance to supervised ones. We therefore encourage others to work in this promising area.",
"title": ""
},
{
"docid": "bffddca72c7e9d6e5a8c760758a98de0",
"text": "In this paper we present Sentimentor, a tool for sentiment analysis of Twitter data. Sentimentor utilises the naive Bayes Classifier to classify Tweets into positive, negative or objective sets. We present experimental evaluation of our dataset and classification results, our findings are not contridictory with existing work.",
"title": ""
},
{
"docid": "848f8efe11785c00e8e8af737d173d44",
"text": "Detecting frauds in credit card transactions is perhaps one of the best testbeds for computational intelligence algorithms. In fact, this problem involves a number of relevant challenges, namely: concept drift (customers’ habits evolve and fraudsters change their strategies over time), class imbalance (genuine transactions far outnumber frauds), and verification latency (only a small set of transactions are timely checked by investigators). However, the vast majority of learning algorithms that have been proposed for fraud detection rely on assumptions that hardly hold in a real-world fraud-detection system (FDS). This lack of realism concerns two main aspects: 1) the way and timing with which supervised information is provided and 2) the measures used to assess fraud-detection performance. This paper has three major contributions. First, we propose, with the help of our industrial partner, a formalization of the fraud-detection problem that realistically describes the operating conditions of FDSs that everyday analyze massive streams of credit card transactions. We also illustrate the most appropriate performance measures to be used for fraud-detection purposes. Second, we design and assess a novel learning strategy that effectively addresses class imbalance, concept drift, and verification latency. Third, in our experiments, we demonstrate the impact of class unbalance and concept drift in a real-world data stream containing more than 75 million transactions, authorized over a time window of three years.",
"title": ""
},
{
"docid": "b3235d925a1f452ee5ed97cac709b9d4",
"text": "Xiaoming Zhai is a doctoral student in the Department of Physics, Beijing Normal University, and is a visiting scholar in the College of Education, University of Washington. His research interests include physics assessment and evaluation, as well as technology-supported physics instruction. He has been a distinguished high school physics teacher who won numerous nationwide instructional awards. Meilan Zhang is an instructor in the Department of Teacher Education at University of Texas at El Paso. Her research focuses on improving student learning using mobile technology, understanding Internet use and the digital divide using big data from Internet search trends and Web analytics. Min Li is an Associate Professor in the College of Education, University of Washington. Her expertise is science assessment and evaluation, and quantitative methods. Address for correspondence: Xiaoming Zhai, Department of Physics, Beijing Normal University, Room A321, No. 19 Xinjiekouwai Street, Haidian District, Beijing 100875, China. Email: [email protected]",
"title": ""
},
{
"docid": "2b23723ab291aeff31781cba640b987b",
"text": "As the urban population is increasing, more and more cars are circulating in the city to search for parking spaces which contributes to the global problem of traffic congestion. To alleviate the parking problems, smart parking systems must be implemented. In this paper, the background on parking problems is introduced and relevant algorithms, systems, and techniques behind the smart parking are reviewed and discussed. This paper provides a good insight into the guidance, monitoring and reservations components of the smart car parking and directions to the future development.",
"title": ""
},
{
"docid": "4bd7a933cf0d54a84c106a1591452565",
"text": "Face anti-spoofing (a.k.a. presentation attack detection) has recently emerged as an active topic with great significance for both academia and industry due to the rapidly increasing demand in user authentication on mobile phones, PCs, tablets, and so on. Recently, numerous face spoofing detection schemes have been proposed based on the assumption that training and testing samples are in the same domain in terms of the feature space and marginal probability distribution. However, due to unlimited variations of the dominant conditions (illumination, facial appearance, camera quality, and so on) in face acquisition, such single domain methods lack generalization capability, which further prevents them from being applied in practical applications. In light of this, we introduce an unsupervised domain adaptation face anti-spoofing scheme to address the real-world scenario that learns the classifier for the target domain based on training samples in a different source domain. In particular, an embedding function is first imposed based on source and target domain data, which maps the data to a new space where the distribution similarity can be measured. Subsequently, the Maximum Mean Discrepancy between the latent features in source and target domains is minimized such that a more generalized classifier can be learned. State-of-the-art representations including both hand-crafted and deep neural network learned features are further adopted into the framework to quest the capability of them in domain adaptation. Moreover, we introduce a new database for face spoofing detection, which contains more than 4000 face samples with a large variety of spoofing types, capture devices, illuminations, and so on. Extensive experiments on existing benchmark databases and the new database verify that the proposed approach can gain significantly better generalization capability in cross-domain scenarios by providing consistently better anti-spoofing performance.",
"title": ""
},
{
"docid": "b56a6fe9c9d4b45e9d15054004fac918",
"text": "Code-switching refers to the phenomena of mixing of words or phrases from foreign languages while communicating in a native language by the multilingual speakers. Codeswitching is a global phenomenon and is widely accepted in multilingual communities. However, for training the language model (LM) for such tasks, a very limited code-switched textual resources are available as yet. In this work, we present an approach to reduce the perplexity (PPL) of Hindi-English code-switched data when tested over the LM trained on purely native Hindi data. For this purpose, we propose a novel textual feature which allows the LM to predict the code-switching instances. The proposed feature is referred to as code-switching factor (CS-factor). Also, we developed a tagger that facilitates the automatic tagging of the code-switching instances. This tagger is trained on a development data and assigns an equivalent class of foreign (English) words to each of the potential native (Hindi) words. For this study, the textual resource has been created by crawling the blogs from a couple of websites educating about the usage of the Internet. In the context of recognition of the code-switching data, the proposed technique is found to yield a substantial improvement in terms of PPL.",
"title": ""
},
{
"docid": "b54abd40f41235fa8e8cd4e9f42cd777",
"text": "This paper presents a review of thermal energy storage system design methodologies and the factors to be considered at different hierarchical levels for concentrating solar power (CSP) plants. Thermal energy storage forms a key component of a power plant for improvement of its dispatchability. Though there have been many reviews of storage media, there are not many that focus on storage system design along with its integration into the power plant. This paper discusses the thermal energy storage system designs presented in the literature along with thermal and exergy efficiency analyses of various thermal energy storage systems integrated into the power plant. Economic aspects of these systems and the relevant publications in literature are also summarized in this effort. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "63da0b3d1bc7d6aedd5356b8cdf67b24",
"text": "This paper concentrated on a new application of Deep Neural Network (DNN) approach. The DNN, also widely known as Deep Learning(DL), has been the most popular topic in research community recently. Through the DNN, the original data set can be represented in a new feature space with machine learning algorithms, and intelligence models may have the chance to obtain a better performance in the “learned” feature space. Scientists have achieved encouraging results by employing DNN in some research fields, including Computer Vision, Speech Recognition, Natural Linguistic Programming and Bioinformation Processing. However, as an approach mainly functioned for learning features, DNN is reasonably believed to be a more universal approach: it may have the potential in other data domains and provide better feature spaces for other type of problems. In this paper, we present some initial investigations on applying DNN to deal with the time series problem in meteorology field. In our research, we apply DNN to process the massive weather data involving millions of atmosphere records provided by The Hong Kong Observatory (HKO)1. The obtained features are employed to predict the weather change in the next 24 hours. The results show that the DNN is able to provide a better feature space for weather data sets, and DNN is also a potential tool for the feature fusion of time series problems.",
"title": ""
},
{
"docid": "1fcd6f0c91522a91fa05b0d969f8eec1",
"text": "Nonnegative matrix factorization (NMF) is a popular method for multivariate analysis of nonnegative data, the goal of which is to decompose a data matrix into a product of two factor matrices with all entries in factor matrices restricted to be nonnegative. NMF was shown to be useful in a task of clustering (especially document clustering), but in some cases NMF produces the results inappropriate to the clustering problems. In this paper, we present an algorithm for orthogonal nonnegative matrix factorization, where an orthogonality constraint is imposed on the nonnegative decomposition of a term-document matrix. The result of orthogonal NMF can be clearly interpreted for the clustering problems, and also the performance of clustering is usually better than that of the NMF. We develop multiplicative updates directly from true gradient on Stiefel manifold, whereas existing algorithms consider additive orthogonality constraints. Experiments on several different document data sets show our orthogonal NMF algorithms perform better in a task of clustering, compared to the standard NMF and an existing orthogonal NMF.",
"title": ""
},
{
"docid": "e048d73b37168c7b7ed46915e11b1bf0",
"text": "Creating graphic designs can be challenging for novice users. This paper presents DesignScape, a system which aids the design process by making interactive layout suggestions, i.e., changes in the position, scale, and alignment of elements. The system uses two distinct but complementary types of suggestions: refinement suggestions, which improve the current layout, and brainstorming suggestions, which change the style. We investigate two interfaces for interacting with suggestions. First, we develop a suggestive interface, where suggestions are previewed and can be accepted. Second, we develop an adaptive interface where elements move automatically to improve the layout. We compare both interfaces with a baseline without suggestions, and show that for novice designers, both interfaces produce significantly better layouts, as evaluated by other novices.",
"title": ""
},
{
"docid": "01202e09e54a1fc9f5b36d67fbbf3870",
"text": "This paper is intended to investigate the copper-graphene surface plasmon resonance (SPR)-based biosensor by considering the high adsorption efficiency of graphene. Copper (Cu) is used as a plasmonic material whereas graphene is used to prevent Cu from oxidation and enhance the reflectance intensity. Numerical investigation is performed using finite-difference-time-domain (FDTD) method by comparing the sensing performance such as reflectance intensity that explains the sensor sensitivity and the full-width-at-half-maximum (FWHM) of the spectrum for detection accuracy. The measurements were observed with various Cu thin film thicknesses ranging from 20nm to 80nm with 785nm operating wavelength. The proposed sensor shows that the 40nm-thick Cu-graphene (1 layer) SPR-based sensor gave better performance with narrower plasmonic spectrum line width (reflectance intensity of 91.2%) and better FWHM of 3.08°. The measured results also indicate that the Cu-graphene SPR-based sensor is suitable for detecting urea with refractive index of 1.49 in dielectric medium.",
"title": ""
},
{
"docid": "609997fbec79d71daa7c63e6fbbc6cc4",
"text": "Memory encoding occurs rapidly, but the consolidation of memory in the neocortex has long been held to be a more gradual process. We now report, however, that systems consolidation can occur extremely quickly if an associative \"schema\" into which new information is incorporated has previously been created. In experiments using a hippocampal-dependent paired-associate task for rats, the memory of flavor-place associations became persistent over time as a putative neocortical schema gradually developed. New traces, trained for only one trial, then became assimilated and rapidly hippocampal-independent. Schemas also played a causal role in the creation of lasting associative memory representations during one-trial learning. The concept of neocortical schemas may unite psychological accounts of knowledge structures with neurobiological theories of systems memory consolidation.",
"title": ""
},
{
"docid": "3e8f290f9d19996feb6551cde8815307",
"text": "Simplification of IT services is an imperative of the times we are in. Large legacy behemoths that exist at financial institutions are a result of years of patch work development on legacy landscapes that have developed in silos at various lines of businesses (LOBs). This increases costs -- for running financial services, changing the services as well as providing services to customers. We present here a basic guide to what constitutes complexity of IT landscape at financial institutions, what simplification means, and opportunities for simplification and how it can be carried out. We also explain a 4-phase approach to planning and executing Simplification of IT services at financial institutions.",
"title": ""
},
{
"docid": "526e36dd9e3db50149687ea6358b4451",
"text": "A query over RDF data is usually expressed in terms of matching between a graph representing the target and a huge graph representing the source. Unfortunately, graph matching is typically performed in terms of subgraph isomorphism, which makes semantic data querying a hard problem. In this paper we illustrate a novel technique for querying RDF data in which the answers are built by combining paths of the underlying data graph that align with paths specified by the query. The approach is approximate and generates the combinations of the paths that best align with the query. We show that, in this way, the complexity of the overall process is significantly reduced and verify experimentally that our framework exhibits an excellent behavior with respect to other approaches in terms of both efficiency and effectiveness.",
"title": ""
},
{
"docid": "f45e43935de492d3598469cd24c48188",
"text": "Given a task of predicting Y from X , a loss function L, and a set of probability distributions Γ on (X,Y ), what is the optimal decision rule minimizing the worstcase expected loss over Γ? In this paper, we address this question by introducing a generalization of the maximum entropy principle. Applying this principle to sets of distributions with marginal on X constrained to be the empirical marginal, we provide a minimax interpretation of the maximum likelihood problem over generalized linear models, which connects the minimax problem for each loss function to a generalized linear model. While in some cases such as quadratic and logarithmic loss functions we revisit well-known linear and logistic regression models, our approach reveals novel models for other loss functions. In particular, for the 0-1 loss we derive a classification approach which we call the minimax SVM. The minimax SVM minimizes the worst-case expected 0-1 loss over the proposed Γ by solving a tractable optimization problem. Moreover, applying the minimax approach to Brier loss function we derive a new classification model called the minimax Brier. The maximum likelihood problem for this model uses the Huber penalty function. We perform several numerical experiments to show the power of the minimax SVM and the minimax Brier.",
"title": ""
},
{
"docid": "00a3504c21cf0a971a717ce676d76933",
"text": "In recent years, researchers have proposed systems for running trusted code on an untrusted operating system. Protection mechanisms deployed by such systems keep a malicious kernel from directly manipulating a trusted application's state. Under such systems, the application and kernel are, conceptually, peers, and the system call API defines an RPC interface between them.\n We introduce Iago attacks, attacks that a malicious kernel can mount in this model. We show how a carefully chosen sequence of integer return values to Linux system calls can lead a supposedly protected process to act against its interests, and even to undertake arbitrary computation at the malicious kernel's behest.\n Iago attacks are evidence that protecting applications from malicious kernels is more difficult than previously realized.",
"title": ""
},
{
"docid": "625002b73c5e386989ddd243a71a1b56",
"text": "AutoTutor is a learning environment that tutors students by holding a conversation in natural language. AutoTutor has been developed for Newtonian qualitative physics and computer literacy. Its design was inspired by explanation-based constructivist theories of learning, intelligent tutoring systems that adaptively respond to student knowledge, and empirical research on dialogue patterns in tutorial discourse. AutoTutor presents challenging problems (formulated as questions) from a curriculum script and then engages in mixed initiative dialogue that guides the student in building an answer. It provides the student with positive, neutral, or negative feedback on the student's typed responses, pumps the student for more information, prompts the student to fill in missing words, gives hints, fills in missing information with assertions, identifies and corrects erroneous ideas, answers the student's questions, and summarizes answers. AutoTutor has produced learning gains of approximately .70 sigma for deep levels of comprehension.",
"title": ""
}
] | scidocsrr |
ce6e3755a36ca41f25f5e9010fde0bbe | Perceived, not actual, similarity predicts initial attraction in a live romantic context: Evidence from the speed-dating paradigm | [
{
"docid": "241cd26632a394e5d922be12ca875fe1",
"text": "Little is known about whether personality characteristics influence initial attraction. Because adult attachment differences influence a broad range of relationship processes, the authors examined their role in 3 experimental attraction studies. The authors tested four major attraction hypotheses--self similarity, ideal-self similarity, complementarity, and attachment security--and examined both actual and perceptual factors. Replicated analyses across samples, designs, and manipulations showed that actual security and self similarity predicted attraction. With regard to perceptual factors, ideal similarity, self similarity, and security all were significant predictors. Whereas perceptual ideal and self similarity had incremental predictive power, perceptual security's effects were subsumed by perceptual ideal similarity. Perceptual self similarity fully mediated actual attachment similarity effects, whereas ideal similarity was only a partial mediator.",
"title": ""
}
] | [
{
"docid": "aa2b1a8d0cf511d5862f56b47d19bc6a",
"text": "DBMSs have long suffered from SQL’s lack of power and extensibility. We have implemented ATLaS [1], a powerful database language and system that enables users to develop complete data-intensive applications in SQL—by writing new aggregates and table functions in SQL, rather than in procedural languages as in current Object-Relational systems. As a result, ATLaS’ SQL is Turing-complete [7], and is very suitable for advanced data-intensive applications, such as data mining and stream queries. The ATLaS system is now available for download along with a suite of applications [1] including various data mining functions, that have been coded in ATLaS’ SQL, and execute with a modest (20–40%) performance overhead with respect to the same applications written in C/C++. Our proposed demo will illustrate the key features and applications of ATLaS. In particular, we will demonstrate:",
"title": ""
},
{
"docid": "b9da9cc9d7583c5b72daf8a25a3145f5",
"text": "The purpose of this article is to review literature that is relevant to the social scientific study of ethics and leadership, as well as outline areas for future study. We first discuss ethical leadership and then draw from emerging research on \"dark side\" organizational behavior to widen the boundaries of the review to include ««ethical leadership. Next, three emerging trends within the organizational behavior literature are proposed for a leadership and ethics research agenda: 1 ) emotions, 2) fit/congruence, and 3) identity/ identification. We believe each shows promise in extending current thinking. The review closes with discussion of important issues that are relevant to the advancement of research on leadership and ethics. T IMPORTANCE OF LEADERSHIP in promoting ethical conduct in organizations has long been understood. Within a work environment, leaders set the tone for organizational goals and behavior. Indeed, leaders are often in a position to control many outcomes that affect employees (e.g., strategies, goal-setting, promotions, appraisals, resources). What leaders incentivize communicates what they value and motivates employees to act in ways to achieve such rewards. It is not surprising, then, that employees rely on their leaders for guidance when faced with ethical questions or problems (Treviño, 1986). Research supports this contention, and shows that employees conform to the ethical values of their leaders (Schminke, Wells, Peyrefitte, & Sabora, 2002). Furthermore, leaders who are perceived as ethically positive influence productive employee work behavior (Mayer, Kuenzi, Greenbaum, Bardes, & Salvador, 2009) and negatively influence counterproductive work behavior (Brown & Treviño, 2006b; Mayer et al., 2009). Recently, there has been a surge of empirical research seeking to understand the influence of leaders on building ethical work practices and employee behaviors (see Brown & Treviño, 2006a for a review). 
Initial theory and research (Bass & Steidlemeier, 1999; Brown, Treviño, & Harrison, 2005; Ciulla, 2004; Treviño, Brown, & Hartman, 2003; Treviño, Hartman, & Brown, 2000) sought to define ethical leadership from both normative and social scientific (descriptive) approaches to business ethics. The normative perspective is rooted in philosophy and is concerned with prescribing how individuals \"ought\" or \"should\" behave in the workplace. For example, normative scholarship on ethical leadership (Bass & Steidlemeier, 1999; Ciulla, 2004) examines ethical decision making from particular philosophical frameworks, evaluates the ethicality of particular leaders, and considers the degree to which certain styles of leadership or influence tactics are ethical. ©2010 Business Ethics Quarterly 20:4 (October 2010); ISSN 1052-150X pp. 583-616 
We first discuss ethical leadership and then draw from emerging research on what often is called \"dark\" (destructive) organizational behavior, so as to widen the boundaries of our review to also include ««ethical leadership. Next, we discuss three emerging trends within the organizational behavior literature—1) emotions, 2) fit/congruence, and 3) identity/identification—that we believe show promise in extending current thinking on the influence of leadership (both positive and negative) on organizational ethics. We conclude with a discussion of important issues that are relevant to the advancement of research in this domain. A REVIEW OF SOCIAL SCIENTIFIC ETHICAL LEADERSHIP RESEARCH The Concept of Ethical Leadership Although the topic of ethical leadership has long been considered by scholars, descriptive research on ethical leadership is relatively new. Some of the first formal investigations focused on defining ethical leadership from a descriptive perspective and were conducted by Treviño and colleagues (Treviño et al., 2000, 2003). Their qualitative research revealed that ethical leaders were best described along two related dimensions: moral person and moral manager. The moral person dimension refers to the qualities of the ethical leader as a person. Strong moral persons are honest and trustworthy. They demonstrate a concern for other people and are also seen as approachable. Employees can come to these individuals with problems and concerns, knowing that they will be heard. Moral persons have a reputation for being fair and principled. Lastly, riioral persons are seen as consistently moral in both their personal and professional lives. The moral manager dimension refers to how the leader uses the tools of the position of leadership to promote ethical conduct at work. Strong moral managers see themselves as role models in the workplace. They make ethics salient by modeling ethical conduct to their employees. 
Moral managers set and communicate ethical standards and use rewards and punishments to ensure those standards are followed. In sum, leaders who are moral managers \"walk the talk\" and \"talk the walk,\" patterning their behavior and organizational processes to meet moral standards. ETHICAL AND UNETHICAL LEADERSHIP 585 Treviño and colleagues (Treviño et al., 2000, 2003) argued that individuals in power must be both strong moral persons and moral managers in order to be seen as ethical leaders by those around them. Strong moral managers who are weak moral persons are likely to be seen as hypocrites, failing to practice what they preach. Hypocritical leaders talk about the importance of ethics, but their actions show them to be dishonest and unprincipled. Conversely, a strong moral person who is a weak moral manager runs the risk of being seen as an ethically \"neutral\" leader. That is, the leader is perceived as being silent on ethical issues, suggesting to employees that the leader does not really care about ethics. Subsequent research by Brown, Treviño, and Harrison (2005:120) further clarified the construct and provided a formal definition of ethical leadership as \"the demonstration of normatively appropriate conduct through personal actions and interpersonal relationships, and the promotion of such conduct to followers through two-way communication, reinforcement, and decision-making.\" They noted that \"the term normatively appropriate is 'deliberately vague'\" (Brown et al., 2005: 120) because norms vary across organizations, industries, and cultures. Brown et al. (2005) ground their conceptualization of ethical leadership in social learning theory (Bandura, 1977, 1986). This theory suggests individuals can learn standards of appropriate behavior by observing how role models (like teachers, parents, and leaders) behave. Accordingly, ethical leaders \"teach\" ethical conduct to employees through their own behavior. 
Ethical leaders are relevant role models because they occupy powerful and visible positions in organizational hierarchies that allow them to capture their follower's attention. They communicate ethical expectations through formal processes (e.g., rewards, policies) and personal example (e.g., interpersonal treatment of others). Effective \"ethical\" modeling, however, requires more than power and visibility. For social learning of ethical behavior to take place, role models must be credible in terms of moral behavior. By treating others fairly, honestly, and considerately, leaders become worthy of emulation by others. Otherwise, followers might ignore a leader whose behavior is inconsistent with his/her ethical pronouncements or who fails to interact with followers in a caring, nurturing style (Yussen & Levy, 1975). Outcomes of Ethical Leadership Researchers have used both social learning theory (Bandura, 1977,1986) and social exchange theory (Blau, 1964) to explain the effects of ethical leadership on important outcomes (Brown et al., 2005; Brown & Treviño, 2006b; Mayer et al , 2009; Walumbwa & Schaubroeck, 2009). According to principles of reciprocity in social exchange theory (Blau, 1964; Gouldner, 1960), individuals feel obligated to return beneficial behaviors when they believe another has been good and fair to them. In line with this reasoning, researchers argue and find that employees feel indebted to ethical leaders because of their trustworthy and fair nature; consequently, they reciprocate with beneficial work behavior (e.g., higher levels of ethical behavior and citizenship behaviors) and refrain from engaging in destructive behavior (e.g., lower levels of workplace deviance). 
Emerging research has found that ethical leadership is related to important follower outcomes, such as employees' job satisfaction, organizational commitment, willingness to report problems to supervisors, willingness to put in extra effort on the job, voice behavior (i.e., expression of constructive suggestions intended to improve standard procedures), and perceptions of organizational culture and ethical climate (Brown et al., 2005; Neubert, Carlson, Kacmar, Roberts,",
"title": ""
},
{
"docid": "c09d2c25f112d9ecd10a8cf82e5ae6f0",
"text": "We propose a deontological approach to machine ethics that avoids some weaknesses of an intuition-based system, such as that of Anderson and Anderson. In particular, it has no need to deal with conflicting intuitions, and it yields a more satisfactory account of when autonomy should be respected. We begin with a “dual standpoint” theory of action that regards actions as grounded in reasons and therefore as having a conditional form that is suited to machine instructions. We then derive ethical principles based on formal properties that the reasons must exhibit to be coherent, and formulate the principles using quantified modal logic. We conclude that deontology not only provides a more satisfactory basis for machine ethics but endows the machine with an ability to explain its actions, thus contributing to transparency in AI.",
"title": ""
},
{
"docid": "5eed0c6f114382d868cd841c7b5d9986",
"text": "Automatic signature verification is a well-established and an active area of research with numerous applications such as bank check verification, ATM access, etc. This paper proposes a novel approach to the problem of automatic off-line signature verification and forgery detection. The proposed approach is based on fuzzy modeling that employs the Takagi-Sugeno (TS) model. Signature verification and forgery detection are carried out using angle features extracted from box approach. Each feature corresponds to a fuzzy set. The features are fuzzified by an exponential membership function involved in the TS model, which is modified to include structural parameters. The structural parameters are devised to take account of possible variations due to handwriting styles and to reflect moods. The membership functions constitute weights in the TS model. The optimization of the output of the TS model with respect to the structural parameters yields the solution for the parameters. We have also derived two TS models by considering a rule for each input feature in the first formulation (Multiple rules) and by considering a single rule for all input features in the second formulation. In this work, we have found that TS model with multiple rules is better than TS model with single rule for detecting three types of forgeries; random, skilled and unskilled from a large database of sample signatures in addition to verifying genuine signatures. We have also devised three approaches, viz., an innovative approach and two intuitive approaches using the TS model with multiple rules for improved performance.",
"title": ""
},
{
"docid": "26f71c28c1346e80bac0e39d84e99206",
"text": "The objective of the article is to highlight various roles of glutamic acid like endogenic anticancer agent, conjugates to anticancer agents, and derivatives of glutamic acid as possible anticancer agents. Besides these emphases are given especially for two endogenous derivatives of glutamic acid such as glutamine and glutamate. Glutamine is a derivative of glutamic acid and is formed in the body from glutamic acid and ammonia in an energy requiring reaction catalyzed by glutamine synthase. It also possesses anticancer activity. So the transportation and metabolism of glutamine are also discussed for better understanding the role of glutamic acid. Glutamates are the carboxylate anions and salts of glutamic acid. Here the roles of various enzymes required for the metabolism of glutamates are also discussed.",
"title": ""
},
{
"docid": "642aff9bd8d12a33aa1696eb1bd829d8",
"text": "This paper presents the study on the semiconductor-based galvanic isolation. This solution delivers the differential-mode (DM) power via semiconductor power switches during their on states, while sustaining the common-mode (CM) voltage and blocking the CM leakage current with those switches during their off states. While it is impractical to implement this solution with Si devices, the latest SiC devices and the coming vertical GaN devices, however, provide unprecedented properties and thus can potentially enable the practical implementation. An isolated dc/dc converter based on the switched-capacitor circuit is studied as an example. The CM leakage current caused by the line input and the resulted touch current (TC) are quantified and compared to the limits in the safety standard IEC60950. To reduce the TC, low switch output capacitance and low converter switching frequency are needed. Then, discussions are presented on the TC reduction approaches and the design considerations to achieve high power density and high efficiency. A 400-V, 400-W prototype based on 1.7-kV SiC MOSFETs is built to demo the DM power delivery performance and showcase the CM leakage current problem. Further study on the CM leakage current elimination is needed to validate this solution.",
"title": ""
},
{
"docid": "531d387a14eefa6a8c45ad64039f29be",
"text": "This paper presents an S-Transform based probabilistic neural network (PNN) classifier for recognition of power quality (PQ) disturbances. The proposed method requires less number of features as compared to wavelet based approach for the identification of PQ events. The features extracted through the S-Transform are trained by a PNN for automatic classification of the PQ events. Since the proposed methodology can reduce the features of the disturbance signal to a great extent without losing its original property, less memory space and learning PNN time are required for classification. Eleven types of disturbances are considered for the classification problem. The simulation results reveal that the combination of S-Transform and PNN can effectively detect and classify different PQ events. The classification performance of PNN is compared with a feedforward multilayer (FFML) neural network (NN) and learning vector quantization (LVQ) NN. It is found that the classification performance of PNN is better than both FFML and LVQ.",
"title": ""
},
{
"docid": "12d31865b311f0ad88ef7dd694a2cfc1",
"text": "With the advance of wireless communication systems and increasing importance of other wireless applications, wideband and low profile antennas are in great demand for both commercial and military applications. Multi-band and wideband antennas are desirable in personal communication systems, small satellite communication terminals, and other wireless applications. Wideband antennas also find applications in Unmanned Aerial Vehicles (UAVs), Counter Camouflage, Concealment and Deception (CC&D), Synthetic Aperture Radar (SAR), and Ground Moving Target Indicators (GMTI). Some of these applications also require that an antenna be embedded into the airframe structure Traditionally, a wideband antenna in the low frequency wireless bands can only be achieved with heavily loaded wire antennas, which usually means different antennas are needed for different frequency bands. Recent progress in the study of fractal antennas suggests some attractive solutions for using a single small antenna operating in several frequency bands. The purpose of this article is to introduce the concept of the fractal, review the progress in fractal antenna study and implementation, compare different types of fractal antenna elements and arrays and discuss the challenge and future of this new type of antenna.",
"title": ""
},
{
"docid": "e4a14229d3a10356f6b10ac0c19c8ec7",
"text": "The Programmer's Learning Machine (PLM) is an interactive exerciser for learning programming and algorithms. Using an integrated and graphical environment that provides a short feedback loop, it allows students to learn in a (semi)-autonomous way. This generic platform also enables teachers to create specific programming microworlds that match their teaching goals. This paper discusses our design goals and motivations, introduces the existing material and the proposed microworlds, and details the typical use cases from the student and teacher point of views.",
"title": ""
},
{
"docid": "10add5936202de7ee77bb3320fa0fbaa",
"text": "Maintaining the quality of roadways is a major challenge for governments around the world. In particular, poor road surfaces pose a significant safety threat to motorists, especially when motorbikes make up a significant portion of roadway traffic. According to the statistics of the Ministry of Justice in Taiwan, there were 220 claims for state compensation caused by road quality problems between 2005 to 2007, and the government paid a total of 113 million NTD in compensation. This research explores utilizing a mobile phone with a tri-axial accelerometer to collect acceleration data while riding a motorcycle. The data is analyzed to detect road anomalies and to evaluate road quality. Motorcycle-based acceleration data is collected on twelve stretches of road, with a data log spanning approximately three hours, and a total road length of about 60 kilometers. Both supervised and unsupervised machine learning methods are used to recognize road conditions. SVM learning is used to detect road anomalies and to identify their corresponding positions from labeled acceleration data. This method of road anomaly detection achieves a precision of 78.5%. Furthermore, to construct a model of smooth roads, unsupervised learning is used to learn anomaly thresholds by clustering data collected from the accelerometer. The results are used to rank the quality of the road segments in the experiment. We compare the ranked list from the learned evaluator with the ranked list from human evaluators who rode along the same roadways during the test phase. Based on the Kendall tau rank correlation coefficient, the automatically ranked result exhibited excellent performance. Keywords-mobile device; machine learning; accelerometer; road surface anomaly; pothole;",
"title": ""
},
{
"docid": "9547ec27942f9439d18dbfecdda83e1c",
"text": "Inverted pendulum system is a complicated, unstable and multivariable nonlinear system. In order to control the angle and displacement of inverted pendulum system effectively, a novel double-loop digital PID control strategy is presented in this paper. Based on impulse transfer function, the model of the single linear inverted pendulum system is divided into two parts according to the controlled parameters. The inner control loop that is formed by the digital PID feedback control can control the angle of the pendulum, while in order to control the cart displacement, the digital PID series control is adopted to form the outer control loop. The simulation results show the digital control strategy is very effective to single inverted pendulum and when the sampling period is selected as 50 ms, the performance of the digital control system is similar to that of the analog control system. Copyright © 2013 IFSA.",
"title": ""
},
{
"docid": "e70425a0b9d14ff4223f3553de52c046",
"text": "CUDA is a new general-purpose C language interface to GPU developed by NVIDIA. It makes full use of parallel of GPU and has been widely used now. 3D model reconstruction is a traditional and common technique which has been widely used in engineering experiments, CAD and computer graphics. In this paper, we present an algorithm of CUDA-based Poisson surface reconstruction. Our algorithm makes full use of parallel of GPU and runs entirely on GPU and is ten times faster than previous CPU algorithm.",
"title": ""
},
{
"docid": "fb11348b48f65a4d3101727308a1f4fc",
"text": "Spin-transfer torque random access memory (STT-RAM) has emerged as an attractive candidate for future nonvolatile memories. It advantages the benefits of current state-of-the-art memories including high-speed read operation (of static RAM), high density (of dynamic RAM), and nonvolatility (of flash memories). However, the write operation in the 1T-1MTJ STT-RAM bitcell is asymmetric and stochastic, which leads to high energy consumption and long latency. In this paper, a new write assist technique is proposed to terminate the write operation immediately after switching takes place in the magnetic tunneling junction (MTJ). As a result, both the write time and write energy consumption of 1T-1MTJ bitcells improves. Moreover, the proposed write assist technique leads to an error-free write operation. The simulation results using a 65-nm CMOS access transistor and a 40-nm MTJ technology confirm that the proposed write assist technique results in three orders of magnitude improvement in bit error rate compared with the best existing techniques. Moreover, the proposed write assist technique leads to 81% energy saving compared with a cell without write assist and adds only 9.6% area overhead to a 16-kbit STT-RAM array.",
"title": ""
},
{
"docid": "ad00ba810df4c7295b89640c64b50e51",
"text": "Prospective memory (PM) research typically examines the ability to remember to execute delayed intentions but often ignores the ability to forget finished intentions. We had participants perform (or not perform; control group) a PM task and then instructed them that the PM task was finished. We later (re)presented the PM cue. Approximately 25% of participants made a commission error, the erroneous repetition of a PM response following intention completion. Comparisons between the PM groups and control group suggested that commission errors occurred in the absence of preparatory monitoring. Response time analyses additionally suggested that some participants experienced fatigue across the ongoing task block, and those who did were more susceptible to making a commission error. These results supported the hypothesis that commission errors can arise from the spontaneous retrieval of finished intentions and possibly the failure to exert executive control to oppose the PM response.",
"title": ""
},
{
"docid": "2d7a13754631206203d6618ab2a27a76",
"text": "This Contrast enhancement is frequently referred to as one of the most important issues in image processing. Histogram equalization (HE) is one of the common methods used for improving contrast in digital images. Histogram equalization (HE) has proved to be a simple and effective image contrast enhancement technique. However, the conventional histogram equalization methods usually result in excessive contrast enhancement, which causes the unnatural look and visual artifacts of the processed image. This paper presents a review of new forms of histogram for image contrast enhancement. The major difference among the methods in this family is the criteria used to divide the input histogram. Brightness preserving BiHistogram Equalization (BBHE) and Quantized Bi-Histogram Equalization (QBHE) use the average intensity value as their separating point. Dual Sub-Image Histogram Equalization (DSIHE) uses the median intensity value as the separating point. Minimum Mean Brightness Error Bi-HE (MMBEBHE) uses the separating point that produces the smallest Absolute Mean Brightness Error (AMBE). Recursive Mean-Separate Histogram Equalization (RMSHE) is another improvement of BBHE. The Brightness preserving dynamic histogram equalization (BPDHE) method is actually an extension to both MPHEBP and DHE. Weighting mean-separated sub-histogram equalization (WMSHE) method is to perform the effective contrast enhancement of the digital image. Keywords-component image processing; contrast enhancement; histogram equalization; minimum mean brightness error; brightness preserving enhancement, histogram partition.",
"title": ""
},
{
"docid": "c2875f69b6a5d51f3fb3f3cf4ad0f346",
"text": "Cancer cells often have characteristic changes in metabolism. Cellular proliferation, a common feature of all cancers, requires fatty acids for synthesis of membranes and signaling molecules. Here, we provide a view of cancer cell metabolism from a lipid perspective, and we summarize evidence that limiting fatty acid availability can control cancer cell proliferation.",
"title": ""
},
{
"docid": "ec1f47a6ca0edd2334fc416d29ce02ea",
"text": "We present Synereo, a next-gen decentralized and distributed social network designed for an attention economy. Our presentation is given in two chapters. Chapter 1 presents our design philosophy. Our goal is to make our users more effective agents by presenting social content that is relevant and actionable based on the user’s own estimation of value. We discuss the relationship between attention, value, and social agency in order to motivate the central mechanisms for content flow on the network. Chapter 2 defines a network model showing the mechanics of the network interactions, as well as the compensation model enabling users to promote content on the network and receive compensation for attention given to the network. We discuss the high-level technical implementation of these concepts based on the π-calculus, the most well known of a family of computational formalisms known as the mobile process calculi. 0.1 Prologue: This is not a manifesto The Internet is overflowing with social network manifestos. Ello has a manifesto. Tsu has a manifesto. SocialSwarm has a manifesto. Even Diaspora had a manifesto. Each one of them is written in earnest with clear intent (see figure 1). Figure 1: Ello manifesto The proliferation of these manifestos and the social networks they advertise represents an important market shift, one that needs to be understood in context. The shift from mainstream media to social media was all about “user generated content”. In other words, people took control of the content by making it for and distributing it to each other. In some real sense it was a remarkable expansion of the shift from glam rock to punk and DIY; and like that movement, it was the sense of people having a say in what impressions they received that has been the underpinning of the success of Facebook and Twitter and YouTube and the other social media giants.
In the wake of that shift, though, we’ve seen that even when the people are producing the content, if the service is in somebody else’s hands then things still go wonky: the service providers run psychology experiments via the social feeds [1]; they sell people’s personally identifiable and other critical info [2]; and they give data to spooks [3]. Most importantly, they do this without any real consent of their users. With this new wave of services people are expressing a desire to take more control of the service, itself. When the service is distributed, as is the case with Splicious and Diaspora, it is truly cooperative. And, just as with the music industry, where the technology has reached the point that just about anybody can have a professional studio in their home, the same is true with media services. People are recognizing that we don’t need big data centers with massive environmental impact, we need engagement at the level of the service, itself. If this really is the underlying requirement the market is articulating, then there is something missing from a social network that primarily serves up a manifesto with their service. While each of the networks mentioned above constitutes an important step in the right direction, they lack any clear indication",
"title": ""
},
{
"docid": "e63a5af56d8b20c9e3eac658940413ce",
"text": "OBJECTIVE\nThis study examined the effects of various backpack loads on elementary schoolchildren's posture and postural compensations as demonstrated by a change in forward head position.\n\n\nSUBJECTS\nA convenience sample of 11 schoolchildren, aged 8-11 years participated.\n\n\nMETHODS\nSagittal digital photographs were taken of each subject standing without a backpack, and then with the loaded backpack before and after walking 6 minutes (6MWT) at free walking speed. This was repeated over three consecutive weeks using backpacks containing randomly assigned weights of 10%, 15%, or 20% body weight of each respective subject. The craniovertebral angle (CVA) was measured using digitizing software, recorded and analyzed.\n\n\nRESULTS\nSubjects demonstrated immediate and statistically significant changes in CVA, indicating increased forward head positions upon donning the backpacks containing 15% and 20% body weight. Following the 6MWT, the CVA demonstrated further statistically significant changes for all backpack loads indicating increased forward head postures. For the 15 & 20%BW conditions, more than 50% of the subjects reported discomfort after walking, with the neck as the primary location of reported pain.\n\n\nCONCLUSIONS\nBackpack loads carried by schoolchildren should be limited to 10% body weight due to increased forward head positions and subjective complaints at 15% and 20% body weight loads.",
"title": ""
},
{
"docid": "c0a8acf5741567077c8e7dc188033bc4",
"text": "The framework of dynamic movement primitives (DMPs) contains many favorable properties for the execution of robotic trajectories, such as indirect dependence on time, response to perturbations, and the ability to easily modulate the given trajectories, but the framework in its original form remains constrained to the kinematic aspect of the movement. In this paper, we bridge the gap to dynamic behavior by extending the framework with force/torque feedback. We propose and evaluate a modulation approach that allows interaction with objects and the environment. Through the proposed coupling of originally independent robotic trajectories, the approach also enables the execution of bimanual and tightly coupled cooperative tasks. We apply an iterative learning control algorithm to learn a coupling term, which is applied to the original trajectory in a feed-forward fashion and, thus, modifies the trajectory in accordance to the desired positions or external forces. A stability analysis and results of simulated and real-world experiments using two KUKA LWR arms for bimanual tasks and interaction with the environment are presented. By expanding on the framework of DMPs, we keep all the favorable properties, which is demonstrated with temporal modulation and in a two-agent obstacle avoidance task.",
"title": ""
},
{
"docid": "581e3373ecfbc6c012df7c166636cc50",
"text": "The deep convolutional neural network (CNN) has significantly raised the performance of image classification and face recognition. Softmax is usually used as supervision, but it only penalizes the classification loss. In this paper, we propose a novel auxiliary supervision signal called contrastive-center loss, which can further enhance the discriminative power of the features, for it learns a class center for each class. The proposed contrastive-center loss simultaneously considers intra-class compactness and inter-class separability, by penalizing the contrastive values between: (1) the distances of training samples to their corresponding class centers, and (2) the sum of the distances of training samples to their non-corresponding class centers. Experiments on different datasets demonstrate the effectiveness of contrastive-center loss.",
"title": ""
}
] | scidocsrr |
a69534aff3e44a8641428e4ddbe1de14 | Tensor decomposition of EEG signals: A brief review | [
{
"docid": "ffc36fa0dcc81a7f5ba9751eee9094d7",
"text": "The independent component analysis (ICA) of a random vector consists of searching for a linear transformation that minimizes the statistical dependence between its components. In order to define suitable search criteria, the expansion of mutual information is utilized as a function of cumulants of increasing orders. An efficient algorithm is proposed, which allows the computation of the ICA of a data matrix within polynomial time. The concept of ICA may actually be seen as an extension of the principal component analysis (PCA), which can only impose independence up to the second order and, consequently, defines directions that are orthogonal. Potential applications of ICA include data analysis and compression, Bayesian detection, localization of sources, and blind identification and deconvolution. Zusammenfassung: The independent component analysis (ICA) of a vector is based on the search for a linear transformation that minimizes the statistical dependence between the components. To define suitable search criteria, the expansion of mutual information as a function of cumulants of increasing order is used. An efficient algorithm is proposed that allows the computation of the ICA for data matrices within polynomial time. The concept of ICA can actually be regarded as an extension of principal component analysis (PCA), which can only enforce independence up to the second order and therefore defines directions that are orthogonal. Potential applications of ICA include data analysis and compression, Bayesian detection, source localization, and blind identification and deconvolution.",
"title": ""
}
] | [
{
"docid": "e90e2a651c54b8510efe00eb1d8e7be0",
"text": "The design, simulation, fabrication, and measurement of a 2.4-GHz horizontally polarized omnidirectional planar printed antenna for WLAN applications are presented. The antenna adopts the printed Alford-loop-type structure. The three-dimensional (3-D) EM simulator HFSS is used for design simulation. The designed antenna is fabricated on an FR-4 printed-circuit-board substrate. The measured input standing-wave ratio (SWR) is less than three from 2.40 to 2.483 GHz. As desired, the horizontal-polarization H-plane pattern is quite omnidirectional, and the E-plane pattern is also very close to that of an ideal dipole antenna. A comparison with the popular printed inverted-F antenna (PIFA) has also been conducted: the measured H-plane pattern of the Alford-loop-structure antenna is better than that of the PIFA when an omnidirectional pattern is desired. Furthermore, studies of the antenna printed on a simulated PCMCIA card and of the antenna inserted inside a laptop PC are also conducted. The HFSS model of a laptop PC housing, consisting of the display, the screen, and the metallic box with the keyboard, is constructed. The effect on the antenna of the laptop PC housing with different angles between the display and the keyboard is also investigated. It is found that there is about 15 dB attenuation of the gain pattern (horizontal-polarization field) in the direction opposite the PCMCIA slot on the laptop PC. Hence, the effect of the large ground plane of the PCMCIA card and the attenuation effect of the laptop PC housing should be taken into consideration in antenna design for WLAN applications. The proposed antenna, in addition to being used alone as a horizontally polarized antenna, can also be part of a diversity antenna",
"title": ""
},
{
"docid": "94aa0777f80aa25ec854f159dc3e0706",
"text": "To develop a knowledge-aware recommender system, a key data problem is how we can obtain rich and structured knowledge information for recommender system (RS) items. Existing datasets or methods either use side information from original recommender systems (containing very few kinds of useful information) or utilize a private knowledge base (KB). In this paper, we present the first public linked KB dataset for recommender systems, named KB4Rec v1.0, which links three widely used RS datasets with the popular KB Freebase. Based on our linked dataset, we first perform some interesting qualitative analysis experiments, in which we discuss the effect of two important factors (i.e., popularity and recency) on whether an RS item can be linked to a KB entity. Finally, we present a comparison of several knowledge-aware recommendation algorithms on our linked dataset.",
"title": ""
},
{
"docid": "a7de62c78f1286e66fd35145f3163f1c",
"text": "A particularly insidious type of concurrency bug is atomicity violations. While there has been substantial work on automatic detection of atomicity violations, each existing technique has focused on a certain type of atomic region. To address this limitation, this paper presents Atom Tracker, a comprehensive approach to atomic region inference and violation detection. Atom Tracker is the first scheme to (1) automatically infer generic atomic regions (not limited by issues such as the number of variables accessed, the number of instructions included, or the type of code construct the region is embedded in) and (2) automatically detect violations of them at runtime with negligible execution overhead. Atom Tracker provides novel algorithms to infer generic atomic regions and to detect atomicity violations of them. Moreover, we present a hardware implementation of the violation detection algorithm that leverages cache coherence state transitions in a multiprocessor. In our evaluation, we take eight atomicity violation bugs from real-world codes like Apache, MySql, and Mozilla, and show that Atom Tracker detects them all. In addition, Atom Tracker automatically infers all of the atomic regions in a set of micro benchmarks accurately. Finally, we also show that the hardware implementation induces a negligible execution time overhead of 0.2–4.0% and, therefore, enables Atom Tracker to find atomicity violations on-the-fly in production runs.",
"title": ""
},
{
"docid": "4acc30bade98c1257ab0a904f3695f3d",
"text": "Manoeuvre assistance is currently receiving increasing attention from the car industry. In this article we focus on the implementation of reverse parking assistance and, more precisely, a reverse parking manoeuvre planner. This paper builds on a manoeuvre planning technique presented in previous work, specialised for planning reverse parking manoeuvres. Since a key part of the previous method was not made explicit, our goal in this paper is to present a practical and reproducible way to implement a reverse parking manoeuvre planner. Our implementation uses a database engine to search for the elementary movements that make up the complete parking manoeuvre. Our results have been successfully tested on a real platform: the CSIRO Autonomous Tractor.",
"title": ""
},
{
"docid": "139b3dae4713a5bcff97e1b209bd3206",
"text": "Utilizing parametric and nonparametric techniques, we assess the role of a heretofore relatively unexplored ‘input’ in the educational process, homework, on academic achievement. Our results indicate that homework is an important determinant of student test scores. Relative to more standard spending related measures, extra homework has a larger and more significant impact on test scores. However, the effects are not uniform across different subpopulations. Specifically, we find additional homework to be most effective for high and low achievers, which is further confirmed by stochastic dominance analysis. Moreover, the parametric estimates of the educational production function overstate the impact of schooling related inputs. In all estimates, the homework coefficient from the parametric model maps to the upper deciles of the nonparametric coefficient distribution and as a by-product the parametric model understates the percentage of students with negative responses to additional homework. JEL: C14, I21, I28",
"title": ""
},
{
"docid": "d18ed4c40450454d6f517c808da7115a",
"text": "Polythelia is a rare congenital malformation that occurs in 1-2% of the population. Intra-areolar polythelia is the presence of one or more supernumerary nipples located within the areola. This is extremely rare. This article presents 3 cases of intra-areolar polythelia treated at our Department. These cases did not present other associated malformations. Surgical correction was performed for psychological and cosmetic reasons using advancement flaps. The aesthetic and functional results were satisfactory.",
"title": ""
},
{
"docid": "e2b42351d30b2b1938497c6fdab68135",
"text": "An automatic road sign recognition system first locates road signs within images captured by an imaging sensor on-board a vehicle, and then identifies the detected road signs. This paper presents an automatic neural-network-based road sign recognition system. First, a study of the existing road sign recognition research is presented. In this study, the issues associated with automatic road sign recognition are described, the existing methods developed to tackle the road sign recognition problem are reviewed, and a comparison of the features of these methods is given. Second, the developed road sign recognition system is described. The system is capable of analysing live colour road scene images, detecting multiple road signs within each image, and classifying the type of road signs detected. The system consists of two modules: detection and classification. The detection module segments the input image in the hue-saturation-intensity colour space, and then detects road signs using a Multi-layer Perceptron neural network. The classification module determines the type of detected road signs using a series of one-to-one Multi-layer Perceptron neural networks. Two sets of classifiers are trained using the Resilient Backpropagation and Scaled Conjugate Gradient algorithms. The two modules of the system are evaluated individually first. Then the system is tested as a whole. The experimental results demonstrate that the system is capable of achieving an average recognition hit-rate of 95.96% using the Scaled Conjugate Gradient-trained classifiers.",
"title": ""
},
{
"docid": "97b7065942b53f2d873c80f32242cd00",
"text": "Hierarchical multilabel classification (HMC) allows an instance to have multiple labels residing in a hierarchy. A popular loss function used in HMC is the H-loss, which penalizes only the first classification mistake along each prediction path. However, the H-loss metric can only be used on tree-structured label hierarchies, not on DAG hierarchies. Moreover, it may lead to misleading predictions, as not all misclassifications in the hierarchy are penalized. In this paper, we overcome these deficiencies by proposing a hierarchy-aware loss function that is more appropriate for HMC. Using Bayesian decision theory, we then develop a Bayes-optimal classifier with respect to this loss function. Instead of requiring an exhaustive summation and search for the optimal multilabel, the proposed classification problem can be efficiently solved using a greedy algorithm on both tree- and DAG-structured label hierarchies. Experimental results on a large number of real-world data sets show that the proposed algorithm outperforms existing HMC methods.",
"title": ""
},
{
"docid": "025d4933b4cc199366ffbff7cf51aea6",
"text": "An increase in pulsatile release of LHRH is essential for the onset of puberty. However, the mechanism controlling the pubertal increase in LHRH release is still unclear. In primates the LHRH neurosecretory system is already active during the neonatal period but subsequently enters a dormant state in the juvenile/prepubertal period. Neither gonadal steroid hormones nor the absence of facilitatory neuronal inputs to LHRH neurons is responsible for the low levels of LHRH release before the onset of puberty in primates. Recent studies suggest that during the prepubertal period an inhibitory neuronal system suppresses LHRH release and that during the subsequent maturation of the hypothalamus this prepubertal inhibition is removed, allowing the adult pattern of pulsatile LHRH release. In fact, γ-aminobutyric acid (GABA) appears to be an inhibitory neurotransmitter responsible for restricting LHRH release before the onset of puberty in female rhesus monkeys. In addition, it appears that the reduction in tonic GABA inhibition allows an increase in the release of glutamate as well as other neurotransmitters, which contributes to the increase in pubertal LHRH release. In this review, developmental changes in several neurotransmitter systems controlling pulsatile LHRH release are extensively reviewed.",
"title": ""
},
{
"docid": "4e5661631557563430a82b4685ef6aa3",
"text": "Cloud Computing (CC) is fast becoming well known in the computing world as the latest technology. CC enables users to use resources as and when they are required. Mobile Cloud Computing (MCC) is an integration of the concept of cloud computing within a mobile environment, which removes barriers linked to the mobile devices' performance. Nevertheless, these new benefits are not problem-free entirely. Several common problems encountered by MCC are privacy, personal data management, identity authentication, and potential attacks. The security issues are a major hindrance in the mobile cloud computing's adaptability. This study begins by presenting the background of MCC including the various definitions, infrastructures, and applications. In addition, the current challenges and opportunities will be presented including the different approaches that have been adapted in studying MCC.",
"title": ""
},
{
"docid": "7f2dff96e9c1742842fea6a43d17f93e",
"text": "We study shock-based methods for credible causal inference in corporate finance research. We focus on corporate governance research, survey 13,461 papers published between 2001 and 2011 in 22 major accounting, economics, finance, law, and management journals; and identify 863 empirical studies in which corporate governance is associated with firm value or other characteristics. We classify the methods used in these studies and assess whether they support a causal link between corporate governance and firm value or another outcome. Only a small minority of studies have convincing causal inference strategies. The convincing strategies largely rely on external shocks – usually from legal rules – often called “natural experiments”. We examine the 74 shock-based papers and provide a guide to shock-based research design, which stresses the common features across different designs and the value of using combined designs.",
"title": ""
},
{
"docid": "ef7c3f93851f77274f4d2b9557e572d6",
"text": "In today’s world most of us depend on social media to communicate, express our feelings and share information with our friends. Social media is the medium where nowadays people feel free to express their emotions. Social media data is both structured and unstructured, formal and informal, as users do not care about spelling or accurate grammatical construction of a sentence while communicating with each other on different social networking websites (Facebook, Twitter, LinkedIn and YouTube). The gathered data contains the sentiments and opinions of users, which can be processed using data mining techniques and analyzed to extract meaningful information. Using social media data we can classify types of users by analyzing the data they post on these websites. Machine learning algorithms are used for text classification, which extracts meaningful data from these websites. In this paper we discuss the different types of classifiers and their advantages and disadvantages.",
"title": ""
},
{
"docid": "0bf150f6cd566c31ec840a57d8d2fa55",
"text": "Within the past few years, organizations in diverse industries have adopted MapReduce-based systems for large-scale data processing. Along with these new users, important new workloads have emerged which feature many small, short, and increasingly interactive jobs in addition to the large, long-running batch jobs for which MapReduce was originally designed. As interactive, large-scale query processing is a strength of the RDBMS community, it is important that lessons from that field be carried over and applied where possible in this new domain. However, these new workloads have not yet been described in the literature. We fill this gap with an empirical analysis of MapReduce traces from six separate business-critical deployments inside Facebook and at Cloudera customers in e-commerce, telecommunications, media, and retail. Our key contribution is a characterization of new MapReduce workloads which are driven in part by interactive analysis, and which make heavy use of query-like programming frameworks on top of MapReduce. These workloads display diverse behaviors which invalidate prior assumptions about MapReduce such as uniform data access, regular diurnal patterns, and prevalence of large jobs. A secondary contribution is a first step towards creating a TPC-like data processing benchmark for MapReduce.",
"title": ""
},
{
"docid": "36209810c1a842c871b639220ba63036",
"text": "This paper proposes an extension to Generative Adversarial Networks (GANs), named ArtGAN, to synthetically generate more challenging and complex images, such as artwork with abstract characteristics. This is in contrast to most current solutions, which focus on generating natural images such as room interiors, birds, flowers and faces. The key innovation of our work is to allow back-propagation of the loss function w.r.t. the labels (randomly assigned to each generated image) from the discriminator to the generator. With the feedback from the label information, the generator is able to learn faster and achieve better generated image quality. Empirically, we show that the proposed ArtGAN is capable of creating realistic artwork, as well as generating compelling real-world images on CIFAR-10 that globally look natural with clear shapes.",
"title": ""
},
{
"docid": "f9879c1592683bc6f3304f3937d5eee2",
"text": "Altered cell metabolism is a characteristic feature of many cancers. Aside from well-described changes in nutrient consumption and waste excretion, altered cancer cell metabolism also results in changes to intracellular metabolite concentrations. Increased levels of metabolites that result directly from genetic mutations and cancer-associated modifications in protein expression can promote cancer initiation and progression. Changes in the levels of specific metabolites, such as 2-hydroxyglutarate, fumarate, succinate, aspartate and reactive oxygen species, can result in altered cell signalling, enzyme activity and/or metabolic flux. In this Review, we discuss the mechanisms that lead to changes in metabolite concentrations in cancer cells, the consequences of these changes for the cells and how they might be exploited to improve cancer therapy.",
"title": ""
},
{
"docid": "34c41c33ce2cd7642cf29d8bfcab8a3f",
"text": "I2Head database has been created with the aim to become an optimal reference for low cost gaze estimation. It exhibits the following outstanding characteristics: it takes into account key aspects of low resolution eye tracking technology; it combines images of users gazing at different grids of points from alternative positions with registers of user’s head position and it provides calibration information of the camera and a simple 3D head model for each user. Hardware used to build the database includes a 6D magnetic sensor and a webcam. A careful calibration method between the sensor and the camera has been developed to guarantee the accuracy of the data. Different sessions have been recorded for each user including not only static head scenarios but also controlled displacements and even free head movements. The database is an outstanding framework to test both gaze estimation algorithms and head pose estimation methods.",
"title": ""
},
{
"docid": "78e631aceb9598767289c86ace415e2b",
"text": "We present the Balloon family of password hashing functions. These are the first cryptographic hash functions with proven space-hardness properties that: (i) use a password-independent access pattern, (ii) build exclusively upon standard cryptographic primitives, and (iii) are fast enough for real-world use. Space-hard functions require a large amount of working space to evaluate efficiently and, when used for password hashing, they dramatically increase the cost of offline dictionary attacks. The central technical challenge of this work was to devise the graph-theoretic and linear-algebraic techniques necessary to prove the space-hardness properties of the Balloon functions (in the random-oracle model). To motivate our interest in security proofs, we demonstrate that it is possible to compute Argon2i, a recently proposed space-hard function that lacks a formal analysis, in less than the claimed required space with no increase in the computation time.",
"title": ""
},
{
"docid": "e1a4468ccd5305b5158c26b2160d04a6",
"text": "Recent years have seen a deluge of behavioral data from players hitting the game industry. Reasons for this data surge are many and include the introduction of new business models, technical innovations, the popularity of online games, and the increasing persistence of games. Irrespective of the causes, the proliferation of behavioral data poses the problem of how to derive insights therefrom. Behavioral data sets can be large, time-dependent and high-dimensional. Clustering offers a way to explore such data and to discover patterns that can reduce the overall complexity of the data. Clustering and other techniques for player profiling and play style analysis have, therefore, become popular in the nascent field of game analytics. However, the proper use of clustering techniques requires expertise and an understanding of games is essential to evaluate results. With this paper, we address game data scientists and present a review and tutorial focusing on the application of clustering techniques to mine behavioral game data. Several algorithms are reviewed and examples of their application shown. Key topics such as feature normalization are discussed and open problems in the context of game analytics are pointed out.",
"title": ""
},
{
"docid": "425ee0a0dc813a3870af72ac02ea8bbc",
"text": "Although the mechanism of action of botulinum toxin (BTX) has been intensively studied, many unanswered questions remain regarding the composition and clinical properties of the two formulations of BTX currently approved for cosmetic use. In the first half of this review, these questions are explored in detail, with emphasis on the most pertinent and revelatory studies in the literature. The second half delineates most of the common and some not so common uses of BTX in the face and neck, stressing important patient selection and safety considerations. Complications from neurotoxins at cosmetic doses are generally rare and usually technique dependent.",
"title": ""
}
] | scidocsrr |
c7b3a675e2e93e6900bfba1fea945c7f | Grab 'n Run: Secure and Practical Dynamic Code Loading for Android Applications | [
{
"docid": "6ee601387e550e896b3a3938016b03f7",
"text": "Android phone manufacturers are under the perpetual pressure to move quickly on their new models, continuously customizing Android to fit their hardware. However, the security implications of this practice are less known, particularly when it comes to the changes made to Android's Linux device drivers, e.g., those for camera, GPS, NFC etc. In this paper, we report the first study aimed at a better understanding of the security risks in this customization process. Our study is based on ADDICTED, a new tool we built for automatically detecting some types of flaws in customized driver protection. Specifically, on a customized phone, ADDICTED performs dynamic analysis to correlate the operations on a security-sensitive device to its related Linux files, and then determines whether those files are under-protected on the Linux layer by comparing them with their counterparts on an official Android OS. In this way, we can detect a set of likely security flaws on the phone. Using the tool, we analyzed three popular phones from Samsung, identified their likely flaws and built end-to-end attacks that allow an unprivileged app to take pictures and screenshots, and even log the keys the user enters through touch screen. Some of those flaws are found to exist on over a hundred phone models and affect millions of users. We reported the flaws and helped the manufacturers fix those problems. We further studied the security settings of device files on 2423 factory images from major phone manufacturers, discovered over 1,000 vulnerable images and also gained insights about how they are distributed across different Android versions, carriers and countries.",
"title": ""
},
{
"docid": "6eb2c0e22ecc0816cb5f83292902d799",
"text": "In this paper, we demonstrate that Android malware can bypass all automated analysis systems, including AV solutions, mobile sandboxes, and the Google Bouncer. We propose a tool called Sand-Finger for the fingerprinting of Android-based analysis systems. By analyzing the fingerprints of ten unique analysis environments from different vendors, we were able to find characteristics in which all tested environments differ from actual hardware. Depending on the availability of an analysis system, malware can either behave benignly or load malicious code at runtime. We classify this group of malware as Divide-and-Conquer attacks that are efficiently obfuscated by a combination of fingerprinting and dynamic code loading. In this group, we aggregate attacks that work against dynamic as well as static analysis. To demonstrate our approach, we create proof-of-concept malware that surpasses up-to-date malware scanners for Android. We also prove that known malware samples can enter the Google Play Store by modifying them only slightly. Due to Android's lack of an API for malware scanning at runtime, it is impossible for AV solutions to secure Android devices against these attacks.",
"title": ""
}
] | [
{
"docid": "328a3e05fac7d118a99afd6197dac918",
"text": "Neural networks have recently had a lot of success for many tasks. However, neural network architectures that perform well are still typically designed manually by experts in a cumbersome trial-and-error process. We propose a new method to automatically search for well-performing CNN architectures based on a simple hill climbing procedure whose operators apply network morphisms, followed by short optimization runs by cosine annealing. Surprisingly, this simple method yields competitive results, despite only requiring resources in the same order of magnitude as training a single network. E.g., on CIFAR-10, our method designs and trains networks with an error rate below 6% in only 12 hours on a single GPU; training for one day reduces this error further, to almost 5%.",
"title": ""
},
{
"docid": "01e6823392427274c4bd50cc1bf6bf6c",
"text": "The neocortex has a high capacity for plasticity. To understand the full scope of this capacity, it is essential to know how neurons choose particular partners to form synaptic connections. By using multineuron whole-cell recordings and confocal microscopy we found that axons of layer V neocortical pyramidal neurons do not preferentially project toward the dendrites of particular neighboring pyramidal neurons; instead, axons promiscuously touch all neighboring dendrites without any bias. Functional synaptic coupling of a small fraction of these neurons is, however, correlated with the existence of synaptic boutons at existing touch sites. These data provide the first direct experimental evidence for a tabula rasa-like structural matrix between neocortical pyramidal neurons and suggests that pre- and postsynaptic interactions shape the conversion between touches and synapses to form specific functional microcircuits. These data also indicate that the local neocortical microcircuit has the potential to be differently rewired without the need for remodeling axonal or dendritic arbors.",
"title": ""
},
{
"docid": "490df7bfea3338d98cbc0bd945463606",
"text": "This study examined perceived coping (perceived problem-solving ability and progress in coping with problems) as a mediator between adult attachment (anxiety and avoidance) and psychological distress (depression, hopelessness, anxiety, anger, and interpersonal problems). Survey data from 515 undergraduate students were analyzed using structural equation modeling. Results indicated that perceived coping fully mediated the relationship between attachment anxiety and psychological distress and partially mediated the relationship between attachment avoidance and psychological distress. These findings suggest not only that it is important to consider attachment anxiety or avoidance in understanding distress but also that perceived coping plays an important role in these relationships. Implications for these more complex relations are discussed for both counseling interventions and further research.",
"title": ""
},
{
"docid": "588a4eccb49bf0edf45456319b6d8ee4",
"text": "The VIENNA rectifiers have advantages of high efficiency as well as low output harmonics and are widely utilized in power conversion system when dc power sources are needed for supplying dc loads. VIENNA rectifiers based on three-phase/level can provide two voltage outputs with a neutral line at relatively low costs. However, total harmonic distortion (THD) of input current deteriorates seriously when unbalanced voltages occur. In addition, voltage outputs depend on system parameters, especially multiple loads. Therefore, unbalance output voltage controller and modified carrier-based pulse-width modulation (CBPWM) are proposed in this paper to solve the above problems. Unbalanced output voltage controller is designed based on average model considering independent output voltage and loads conditions. Meanwhile, reference voltages are modified according to different neutral point voltage conditions. The simulation and experimental results are presented to verify the proposed method.",
"title": ""
},
{
"docid": "2ed43c3b8ea0997d334f48e012a357c9",
"text": "While recognized as a theoretical and practical concept for over 20 years, only now ransomware has taken centerstage as one of the most prevalent cybercrimes. Various reports demonstrate the enormous burden placed on companies, which have to grapple with the ongoing attack waves. At the same time, our strategic understanding of the threat and the adversarial interaction between organizations and cybercriminals perpetrating ransomware attacks is lacking. In this paper, we develop, to the best of our knowledge, the first gametheoretic model of the ransomware ecosystem. Our model captures a multi-stage scenario involving organizations from different industry sectors facing a sophisticated ransomware attacker. We place particular emphasis on the decision of companies to invest in backup technologies as part of a contingency plan, and the economic incentives to pay a ransom if impacted by an attack. We further study to which degree comprehensive industry-wide backup investments can serve as a deterrent for ongoing attacks.",
"title": ""
},
{
"docid": "1ae161787669032d143226b41a380a66",
"text": "Automatic judgment prediction aims to predict the judicial results based on case materials. It has been studied for several decades mainly by lawyers and judges, considered as a novel and prospective application of artificial intelligence techniques in the legal field. Most existing methods follow the text classification framework, which fails to model the complex interactions among complementary case materials. To address this issue, we formalize the task as Legal Reading Comprehension according to the legal scenario. Following the working protocol of human judges, LRC predicts the final judgment results based on three types of information, including fact description, plaintiffs’ pleas, and law articles. Moreover, we propose a novel LRC model, AutoJudge, which captures the complex semantic interactions among facts, pleas, and laws. In experiments, we construct a real-world civil case dataset for LRC. Experimental results on this dataset demonstrate that our model achieves significant improvement over stateof-the-art models. We will publish all source codes and datasets of this work on github. com for further research.",
"title": ""
},
{
"docid": "8bb30efa3f14fa0860d1e5bc1265c988",
"text": "The introduction of microgrids in distribution networks based on power electronics facilitates the use of renewable energy resources, distributed generation (DG) and storage systems while improving the quality of electric power and reducing losses thus increasing the performance and reliability of the electrical system, opens new horizons for microgrid applications integrated into electrical power systems. The hierarchical control structure consists of primary, secondary, and tertiary levels for microgrids that mimic the behavior of the mains grid is reviewed. The main objective of this paper is to give a description of state of the art for the distributed power generation systems (DPGS) based on renewable energy and explores the power converter connected in parallel to the grid which are distinguished by their contribution to the formation of the grid voltage and frequency and are accordingly classified in three classes. This analysis is extended focusing mainly on the three classes of configurations grid-forming, grid-feeding, and gridsupporting. The paper ends up with an overview and a discussion of the control structures and strategies to control distribution power generation system (DPGS) units connected to the network. Keywords— Distributed power generation system (DPGS); hierarchical control; grid-forming; grid-feeding; grid-supporting. Nomenclature Symbols id − iq Vd − Vq P Q ω E f U",
"title": ""
},
{
"docid": "9cddaea30d7dda82537c273e97bff008",
"text": "A low-offset latched comparator using new dynamic offset cancellation technique is proposed. The new technique achieves low offset voltage without pre-amplifier and quiescent current. Furthermore the overdrive voltage of the input transistor can be optimized to reduce the offset voltage of the comparator independent of the input common mode voltage. A prototype comparator has been fabricated in 90 nm 9M1P CMOS technology with 152 µm2. Experimental results show that the comparator achieves 3.8 mV offset at 1 sigma at 500 MHz operating, while dissipating 39 μW from a 1.2 V supply.",
"title": ""
},
{
"docid": "f47019a78ee833dcb8c5d15a4762ccf9",
"text": "It has recently been shown that Bondi-van der Burg-Metzner-Sachs supertranslation symmetries imply an infinite number of conservation laws for all gravitational theories in asymptotically Minkowskian spacetimes. These laws require black holes to carry a large amount of soft (i.e., zero-energy) supertranslation hair. The presence of a Maxwell field similarly implies soft electric hair. This Letter gives an explicit description of soft hair in terms of soft gravitons or photons on the black hole horizon, and shows that complete information about their quantum state is stored on a holographic plate at the future boundary of the horizon. Charge conservation is used to give an infinite number of exact relations between the evaporation products of black holes which have different soft hair but are otherwise identical. It is further argued that soft hair which is spatially localized to much less than a Planck length cannot be excited in a physically realizable process, giving an effective number of soft degrees of freedom proportional to the horizon area in Planck units.",
"title": ""
},
{
"docid": "6514ddb39c465a8ca207e24e60071e7f",
"text": "The psychometric properties and clinical utility of the Separation Anxiety Avoidance Inventory, child and parent version (SAAI-C/P) were examined in two studies. The aim of the SAAI, a self- and parent-report measure, is to evaluate the avoidance relating to separation anxiety disorder (SAD) situations. In the first study, a school sample of 384 children and their parents (n = 279) participated. In the second study, 102 children with SAD and 35 children with other anxiety disorders (AD) were investigated. In addition, 93 parents of children with SAD, and 35 parents of children with other AD participated. A two-factor structure was confirmed by confirmatory factor analysis. The SAAI-C and SAAI-P demonstrated good internal consistency, test-retest reliability, as well as construct and discriminant validity. Furthermore, the SAAI was sensitive to treatment change. The parent-child agreement was substantial. Overall, these results provide support for the use of the SAAI-C/P version in clinical and research settings.",
"title": ""
},
{
"docid": "ad3147f3a633ec8612dc25dfde4a4f0c",
"text": "A half-bridge integrated zero-voltage-switching (ZVS) full-bridge converter with reduced conduction loss for battery on-board chargers in electric vehicles (EVs) or plug-in hybrid electric vehicles (PHEVs) is proposed in this paper. The proposed converter features a reduction in primary-conduction loss and a lower secondary-voltage stress. In addition, the proposed converter has the most favorable characteristics as battery chargers as follows: a full ZVS capability and a significantly reduced output filter size due to the improved output waveform. In this paper, the circuit configuration, operation principle, and relevant analysis results of the proposed converter are described, followed by the experimental results on a prototype converter realized with a scale-downed 2-kW battery charger for EVs or PHEVs. The experimental results validate the theoretical analysis and show the effectiveness of the proposed converter as battery on-board chargers for EVs or PHEVs.",
"title": ""
},
{
"docid": "bad98c6d356f2dd49ec50365276f0247",
"text": "In this paper we investigate the co-authorship graph obtained from all papers published at SIGMOD between 1975 and 2002. We find some interesting facts, for instance, the identity of the authors who, on average, are \"closest\" to all other authors at a given time. We also show that SIGMOD's co-authorship graph is yet another example of a small world---a graph topology which has received a lot of attention recently. A companion web site for this paper can be found at http://db.cs.ualberta.ca/coauthorship.",
"title": ""
},
{
"docid": "a4aab340255c068137d3b3a1daaf97b5",
"text": "We present here SEMILAR, a SEMantic simILARity toolkit. SEMILAR implements a number of algorithms for assessing the semantic similarity between two texts. It is available as a Java library and as a Java standalone application offering GUI-based access to the implemented semantic similarity methods. Furthermore, it offers facilities for manual semantic similarity annotation by experts through its component SEMILAT (a SEMantic simILarity Annotation Tool).",
"title": ""
},
{
"docid": "1e46143d47f5f221094d0bb09505be80",
"text": "Clinical Scenario: Patients who experience prolonged concussion symptoms can be diagnosed with postconcussion syndrome (PCS) when those symptoms persist longer than 4 weeks. Aerobic exercise protocols have been shown to be effective in improving physical and mental aspects of health. Emerging research suggests that aerobic exercise may be useful as a treatment for PCS, where exercise allows patients to feel less isolated and more active during the recovery process.\n\n\nCLINICAL QUESTION\nIs aerobic exercise more beneficial in reducing symptoms than current standard care in patients with prolonged symptoms or PCS lasting longer than 4 weeks? Summary of Key Findings: After a thorough literature search, 4 studies relevant to the clinical question were selected. Of the 4 studies, 1 study was a randomized control trial and 3 studies were case series. All 4 studies investigated aerobic exercise protocol as treatment for PCS. Three studies demonstrated a greater rate of symptom improvement from baseline assessment to follow-up after a controlled subsymptomatic aerobic exercise program. One study showed a decrease in symptoms in the aerobic exercise group compared with the full-body stretching group. Clinical Bottom Line: There is moderate evidence to support subsymptomatic aerobic exercise as a treatment of PCS; therefore, it should be considered as a clinical option for reducing PCS and prolonged concussion symptoms. A previously validated protocol, such as the Buffalo Concussion Treadmill test, Balke protocol, or rating of perceived exertion, as mentioned in this critically appraised topic, should be used to measure baseline values and treatment progression. Strength of Recommendation: Level C evidence exists that the aerobic exercise protocol is more effective than the current standard of care in treating PCS.",
"title": ""
},
{
"docid": "5c97711d149d6744e3ea6d070016cd39",
"text": "This paper presents a clock generator for a MIPI M-PHY serial link transmitter, which includes an ADPLL, a digitally controlled oscillator (DCO), a programmable multiplier, and the actual serial driver. The paper focuses on the design of a DCO and how to enhance the frequency resolution to diminish the quantization noise introduced by the frequency discretization. As a result, a 17-kHz DCO frequency tuning resolution is demonstrated. Furthermore, implementation details of a low-power programmable 1-to-2-or-4 frequency multiplier are elaborated. The design has been implemented in a 40-nm CMOS process. The measurement results verify that the circuit provides the MIPI clock data rates from 1.248 GHz to 5.83 GHz. The DCO and multiplier unit dissipates a maximum of 3.9 mW from a 1.1 V supply and covers a small die area of 0.012 mm2.",
"title": ""
},
{
"docid": "9a98e97bb786a0c57a68e4cf8e4fb7a8",
"text": "The application of frequent patterns in classification has demonstrated its power in recent studies. It often adopts a two-step approach: frequent pattern (or classification rule) mining followed by feature selection (or rule ranking). However, this two-step process could be computationally expensive, especially when the problem scale is large or the minimum support is low. It was observed that frequent pattern mining usually produces a huge number of \"patterns\" that could not only slow down the mining process but also make feature selection hard to complete. In this paper, we propose a direct discriminative pattern mining approach, DDPMine, to tackle the efficiency issue arising from the two-step approach. DDPMine performs a branch-and-bound search for directly mining discriminative patterns without generating the complete pattern set. Instead of selecting best patterns in a batch, we introduce a \"feature-centered\" mining approach that generates discriminative patterns sequentially on a progressively shrinking FP-tree by incrementally eliminating training instances. The instance elimination effectively reduces the problem size iteratively and expedites the mining process. Empirical results show that DDPMine achieves orders of magnitude speedup without any downgrade of classification accuracy. It outperforms the state-of-the-art associative classification methods in terms of both accuracy and efficiency.",
"title": ""
},
{
"docid": "9809521909e01140c367dbfbf3a4aacd",
"text": "Understanding how housing values evolve over time is important to policy makers, consumers and real estate professionals. Existing methods for constructing housing indices are computed at a coarse spatial granularity, such as metropolitan regions, which can mask or distort price dynamics apparent in local markets, such as neighborhoods and census tracts. A challenge in moving to estimates at, for example, the census tract level is the scarcity of spatiotemporally localized house sales observations. Our work aims to address this challenge by leveraging observations from multiple census tracts discovered to have correlated valuation dynamics. Our proposed Bayesian nonparametric approach builds on the framework of latent factor models to enable a flexible, data-driven method for inferring the clustering of correlated census tracts. We explore methods for scalability and parallelizability of computations, yielding a housing valuation index at the level of census tract rather than zip code, and on a monthly basis rather than quarterly. Our analysis is provided on a large Seattle metropolitan housing dataset.",
"title": ""
},
{
"docid": "a0f8af71421d484cbebb550a0bf59a6d",
"text": "researchers and practitioners doing work in these three related areas. Risk management, fraud detection, and intrusion detection all involve monitoring the behavior of populations of users (or their accounts) to estimate, plan for, avoid, or detect risk. In his paper, Til Schuermann (Oliver, Wyman, and Company) categorizes risk into market risk, credit risk, and operating risk (or fraud). Similarly, Barry Glasgow (Metropolitan Life Insurance Co.) discusses inherent risk versus fraud. This workshop focused primarily on what might loosely be termed “improper behavior,” which includes fraud, intrusion, delinquency, and account defaulting. However, Glasgow does discuss the estimation of “inherent risk,” which is the bread and butter of insurance firms. Problems of predicting, preventing, and detecting improper behavior share characteristics that complicate the application of existing AI and machine-learning technologies. In particular, these problems often have or require more than one of the following that complicate the technical problem of automatically learning predictive models: large volumes of (historical) data, highly skewed distributions (“improper behavior” occurs far less frequently than “proper behavior”), changing distributions (behaviors change over time), widely varying error costs (in certain contexts, false positive errors are far more costly than false negatives), costs that change over time, adaptation of undesirable behavior to detection techniques, changing patterns of legitimate behavior, the trad■ The 1997 AAAI Workshop on AI Approaches to Fraud Detection and Risk Management brought together over 50 researchers and practitioners to discuss problems of fraud detection, computer intrusion detection, and risk scoring. This article presents highlights, including discussions of problematic issues that are common to these application domains, and proposed solutions that apply a variety of AI techniques.",
"title": ""
},
{
"docid": "4765cc56ea91dc8835be233bc227ec62",
"text": "Recognizing plants is a vital problem especially for biologists, chemists, and environmentalists. Plant recognition can be performed by human experts manually but it is a time consuming and low-efficiency process. Automation of plant recognition is an important process for the fields working with plants. This paper presents an approach for plant recognition using leaf images. Shape and color features extracted from leaf images are used with k-Nearest Neighbor, Support Vector Machines, Naive Bayes, and Random Forest classification algorithms to recognize plant types. The presented approach is tested on 1897 leaf images and 32 kinds of leaves. The results demonstrated that success rate of plant recognition can be improved up to 96% with Random Forest method when both shape and color features are used.",
"title": ""
},
{
"docid": "44c0da7556c3fd5faacc7faf0d3692cf",
"text": "The study examined the etiology of individual differences in early drawing and of its longitudinal association with school mathematics. Participants (N = 14,760), members of the Twins Early Development Study, were assessed on their ability to draw a human figure, including number of features, symmetry, and proportionality. Human figure drawing was moderately stable across 6 months (average r = .40). Individual differences in drawing at age 4½ were influenced by genetic (.21), shared environmental (.30), and nonshared environmental (.49) factors. Drawing was related to later (age 12) mathematical ability (average r = .24). This association was explained by genetic and shared environmental factors that also influenced general intelligence. Some genetic factors, unrelated to intelligence, also contributed to individual differences in drawing.",
"title": ""
}
] | scidocsrr |
6cf17f7076502c1c982b5c3f6ae43bd3 | Gaussian Processes for Rumour Stance Classification in Social Media | [
{
"docid": "9ae491c47c20a746eb13f3370217a8fa",
"text": "The open structure of online social networks and their uncurated nature give rise to problems of user credibility and influence. In this paper, we address the task of predicting the impact of Twitter users based only on features under their direct control, such as usage statistics and the text posted in their tweets. We approach the problem as regression and apply linear as well as nonlinear learning methods to predict a user impact score, estimated by combining the numbers of the user’s followers, followees and listings. The experimental results point out that a strong prediction performance is achieved, especially for models based on the Gaussian Processes framework. Hence, we can interpret various modelling components, transforming them into indirect ‘suggestions’ for impact boosting.",
"title": ""
}
] | [
{
"docid": "fe2b8921623f3bcf7b8789853b45e912",
"text": "OBJECTIVE\nTo establish the psychosexual outcome of gender-dysphoric children at 16 years or older and to examine childhood characteristics related to psychosexual outcome.\n\n\nMETHOD\nWe studied 77 children who had been referred in childhood to our clinic because of gender dysphoria (59 boys, 18 girls; mean age 8.4 years, age range 5-12 years). In childhood, we measured the children's cross-gender identification and discomfort with their own sex and gender roles. At follow-up 10.4 +/- 3.4 years later, 54 children (mean age 18.9 years, age range 16-28 years) agreed to participate. In this group, we assessed gender dysphoria and sexual orientation.\n\n\nRESULTS\nAt follow-up, 30% of the 77 participants (19 boys and 4 girls) did not respond to our recruiting letter or were not traceable; 27% (12 boys and 9 girls) were still gender dysphoric (persistence group), and 43% (desistance group: 28 boys and 5 girls) were no longer gender dysphoric. Both boys and girls in the persistence group were more extremely cross-gendered in behavior and feelings and were more likely to fulfill gender identity disorder (GID) criteria in childhood than the children in the other two groups. At follow-up, nearly all male and female participants in the persistence group reported having a homosexual or bisexual sexual orientation. In the desistance group, all of the girls and half of the boys reported having a heterosexual orientation. The other half of the boys in the desistance group had a homosexual or bisexual sexual orientation.\n\n\nCONCLUSIONS\nMost children with gender dysphoria will not remain gender dysphoric after puberty. Children with persistent GID are characterized by more extreme gender dysphoria in childhood than children with desisting gender dysphoria. With regard to sexual orientation, the most likely outcome of childhood GID is homosexuality or bisexuality.",
"title": ""
},
{
"docid": "dc23ec643882393b69adca86c944bef4",
"text": "This memo describes a snapshot of the reasoning behind a proposed new namespace, the Host Identity namespace, and a new protocol layer, the Host Identity Protocol (HIP), between the internetworking and transport layers. Herein are presented the basics of the current namespaces, their strengths and weaknesses, and how a new namespace will add completeness to them. The roles of this new namespace in the protocols are defined. The memo describes the thinking of the authors as of Fall 2003. The architecture may have evolved since. This document represents one stable point in that evolution of understanding.",
"title": ""
},
{
"docid": "8ea2dadd6024e2f1b757818e0c5d76fa",
"text": "BACKGROUND\nLysergic acid diethylamide (LSD) is a potent serotonergic hallucinogen or psychedelic that modulates consciousness in a marked and novel way. This study sought to examine the acute and mid-term psychological effects of LSD in a controlled study.\n\n\nMETHOD\nA total of 20 healthy volunteers participated in this within-subjects study. Participants received LSD (75 µg, intravenously) on one occasion and placebo (saline, intravenously) on another, in a balanced order, with at least 2 weeks separating sessions. Acute subjective effects were measured using the Altered States of Consciousness questionnaire and the Psychotomimetic States Inventory (PSI). A measure of optimism (the Revised Life Orientation Test), the Revised NEO Personality Inventory, and the Peter's Delusions Inventory were issued at baseline and 2 weeks after each session.\n\n\nRESULTS\nLSD produced robust psychological effects; including heightened mood but also high scores on the PSI, an index of psychosis-like symptoms. Increased optimism and trait openness were observed 2 weeks after LSD (and not placebo) and there were no changes in delusional thinking.\n\n\nCONCLUSIONS\nThe present findings reinforce the view that psychedelics elicit psychosis-like symptoms acutely yet improve psychological wellbeing in the mid to long term. It is proposed that acute alterations in mood are secondary to a more fundamental modulation in the quality of cognition, and that increased cognitive flexibility subsequent to serotonin 2A receptor (5-HT2AR) stimulation promotes emotional lability during intoxication and leaves a residue of 'loosened cognition' in the mid to long term that is conducive to improved psychological wellbeing.",
"title": ""
},
{
"docid": "05b362c5dd31decd8d0d33ba45a36783",
"text": "Behavioral interventions preceded by a functional analysis have been proven efficacious in treating severe problem behavior associated with autism. There is, however, a lack of research showing socially validated outcomes when assessment and treatment procedures are conducted by ecologically relevant individuals in typical settings. In this study, interview-informed functional analyses and skill-based treatments (Hanley et al. in J Appl Behav Anal 47:16-36, 2014) were applied by a teacher and home-based provider in the classroom and home of two children with autism. The function-based treatments resulted in socially validated reductions in severe problem behavior (self-injury, aggression, property destruction). Furthermore, skills lacking in baseline-functional communication, denial and delay tolerance, and compliance with adult instructions-occurred with regularity following intervention. The generality and costs of the process are discussed.",
"title": ""
},
{
"docid": "39cf15285321c7d56904c8c59b3e1373",
"text": "J. Naidoo1*, D. B. Page2, B. T. Li3, L. C. Connell3, K. Schindler4, M. E. Lacouture5,6, M. A. Postow3,6 & J. D. Wolchok3,6 Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University, Baltimore; Providence Portland Medical Center and Earl A. Chiles Research Institute, Portland; Department of Medicine and Ludwig Center, Memorial Sloan Kettering Cancer Center, New York, USA; Department of Dermatology, Medical University of Vienna, Vienna, Austria; Dermatology Service, Memorial Sloan Kettering Cancer Center, New York; Department of Medicine, Weill Cornell Medical College, New York, USA",
"title": ""
},
{
"docid": "711ad6f6641b916f25f08a32d4a78016",
"text": "Information technology (IT) such as Electronic Data Interchange (EDI), Radio Frequency Identification Technology (RFID), wireless, the Internet and World Wide Web (WWW), and Information Systems (IS) such as Electronic Commerce (E-Commerce) systems and Enterprise Resource Planning (ERP) systems have had tremendous impact in education, healthcare, manufacturing, transportation, retailing, pure services, and even war. Many organizations turned to IT/IS to help them achieve their goals; however, many failed to achieve the full potential of IT/IS. These failures can be attributed at least in part to a weak link in the planning process. That weak link is the IT/IS justification process. The decision-making process has only grown more difficult in recent years with the increased complexity of business brought about by the rapid growth of supply chain management, the virtual enterprise and E-business. These are but three of the many changes in the business environment over the past 10–12 years. The complexities of this dynamic new business environment should be taken into account in IT/IS justification. We conducted a review of the current literature on IT/IS justification. The purpose of the literature review was to assemble meaningful information for the development of a framework for IT/IS evaluation that better reflects the new business environment. A suitable classification scheme has been proposed for organizing the literature reviewed. Directions for future research are indicated. 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "20def85748f9d2f71cd34c4f0ca7f57c",
"text": "Recent advances in artificial intelligence (AI) and machine learning, combined with developments in neuromorphic hardware technologies and ubiquitous computing, promote machines to emulate human perceptual and cognitive abilities in a way that will continue the trend of automation for several upcoming decades. Despite the gloomy scenario of automation as a job eliminator, we argue humans and machines can cross-fertilise in a way that forwards a cooperative coexistence. We build our argument on three pillars: (i) the economic mechanism of automation, (ii) the dichotomy of ‘experience’ that separates the first-person perspective of humans from artificial learning algorithms, and (iii) the interdependent relationship between humans and machines. To realise this vision, policy makers have to implement alternative educational approaches that support lifelong training and flexible job transitions.",
"title": ""
},
{
"docid": "f5d8c506c9f25bff429cea1ed4c84089",
"text": "Therabot is a robotic therapy support system designed to supplement a therapist and to provide support to patients diagnosed with conditions associated with trauma and adverse events. The system takes on the form factor of a floppy-eared dog which fits in a person»s lap and is designed for patients to provide support and encouragement for home therapy exercises and in counseling.",
"title": ""
},
{
"docid": "100c152685655ad6865f740639dd7d57",
"text": "Semantic image inpainting is a challenging task where large missing regions have to be filled based on the available visual data. Existing methods which extract information from only a single image generally produce unsatisfactory results due to the lack of high level context. In this paper, we propose a novel method for semantic image inpainting, which generates the missing content by conditioning on the available data. Given a trained generative model, we search for the closest encoding of the corrupted image in the latent image manifold using our context and prior losses. This encoding is then passed through the generative model to infer the missing content. In our method, inference is possible irrespective of how the missing content is structured, while the state-of-the-art learning based method requires specific information about the holes in the training phase. Experiments on three datasets show that our method successfully predicts information in large missing regions and achieves pixel-level photorealism, significantly outperforming the state-of-the-art methods.",
"title": ""
},
{
"docid": "23a329c63f9a778e3ec38c25fa59748a",
"text": "Expedia users who prefer the same types of hotels presumably share other commonalities (i.e., non-hotel commonalities) with each other. With this in mind, Kaggle challenged developers to recommend hotels to Expedia users. Armed with a training set containing data about 37 million Expedia users, we set out to do just that. Our machine-learning algorithms ranged from direct applications of material learned in class to multi-part algorithms with novel combinations of recommender system techniques. Kaggle’s benchmark for randomly guessing a user’s hotel cluster is 0.02260, and the mean average precision K = 5 value for näıve recommender systems is 0.05949. Our best combination of machine-learning algorithms achieved a figure just over 0.30. Our results provide insight into performing multi-class classification on data sets that lack linear structure.",
"title": ""
},
{
"docid": "dc810b43c71ab591981454ad20e34b7a",
"text": "This paper proposes a real-time variable-Q non-stationary Gabor transform (VQ-NSGT) system for speech pitch shifting. The system allows for time-frequency representations of speech on variable-Q (VQ) with perfect reconstruction and computational efficiency. The proposed VQ-NSGT phase vocoder can be used for pitch shifting by simple frequency translation (transposing partials along the frequency axis) instead of spectral stretching in frequency domain by the Fourier transform. In order to retain natural sounding pitch shifted speech, a hybrid of smoothly varying Q scheme is used to retain the formant structure of the original signal at both low and high frequencies. Moreover, the preservation of transients of speech are improved due to the high time resolution of VQ-NSGT at high frequencies. A sliced VQ-NSGT is used to retain inter-partials phase coherence by synchronized overlap-add method. Therefore, the proposed system lends itself to real-time processing while retaining the formant structure of the original signal and inter-partial phase coherence. The simulation results showed that the proposed approach is suitable for pitch shifting of both speech and music signals.",
"title": ""
},
{
"docid": "f9c4f413618d94b78b96c8cb188e09c5",
"text": "We propose to detect abnormal events via a sparse reconstruction over the normal bases. Given a collection of normal training examples, e.g., an image sequence or a collection of local spatio-temporal patches, we propose the sparse reconstruction cost (SRC) over the normal dictionary to measure the normalness of the testing sample. By introducing the prior weight of each basis during sparse reconstruction, the proposed SRC is more robust compared to other outlier detection criteria. To condense the over-completed normal bases into a compact dictionary, a novel dictionary selection method with group sparsity constraint is designed, which can be solved by standard convex optimization. Observing that the group sparsity also implies a low rank structure, we reformulate the problem using matrix decomposition, which can handle large scale training samples by reducing the memory requirement at each iteration from O(k2) to O(k) where k is the number of samples. We use the column wise coordinate descent to solve the matrix decomposition represented formulation, which empirically leads to a similar solution to the group sparsity formulation. By designing different types of spatio-temporal basis, our method can detect both local and global abnormal events. Meanwhile, as it does not rely on object detection and tracking, it can be applied to crowded video scenes. By updating the dictionary incrementally, our 1This work was supported in part by the Nanyang Assistant Professorship (M4080134), JSPSNTU joint project (M4080882), Natural Science Foundation of China (61105013), and National Science and Technology Pillar Program (2012BAI14B03). Part of this work was done when Yang Cong was a research fellow at NTU. Preprint submitted to Pattern Recognition January 30, 2013 method can be easily extended to online event detection. Experiments on three benchmark datasets and the comparison to the state-of-the-art methods validate the advantages of our method.",
"title": ""
},
{
"docid": "7d32ed1dbd25e7845bf43f58f42be34a",
"text": "ETHNOPHARMACOLOGICAL RELEVANCE\nSenna occidentalis, Leonotis ocymifolia, Leucas martinicensis, Rumex abyssinicus, and Albizia schimperiana are traditionally used for treatment of various ailments including helminth infection in Ethiopia.\n\n\nMATERIALS AND METHODS\nIn vitro egg hatch assay and larval development tests were conducted to determine the possible anthelmintic effects of crude aqueous and hydro-alcoholic extracts of the leaves of Senna occidentalis, aerial parts of Leonotis ocymifolia, Leucas martinicensis, Rumex abyssinicus, and stem bark of Albizia schimperiana on eggs and larvae of Haemonchus contortus.\n\n\nRESULTS\nBoth aqueous and hydro-alcoholic extracts of Leucas martinicensis, Leonotis ocymifolia and aqueous extract of Senna occidentalis and Albizia schimperiana induced complete inhibition of egg hatching at concentration less than or equal to 1mg/ml. Aqueous and hydro-alcoholic extracts of all tested medicinal plants have shown statistically significant and dose dependent egg hatching inhibition. Based on ED(50), the most potent extracts were aqueous and hydro-alcoholic extracts of Leucas martinicensis (0.09 mg/ml), aqueous extracts of Rumex abyssinicus (0.11 mg/ml) and Albizia schimperiana (0.11 mg/ml). Most of the tested plant extracts have shown remarkable larval development inhibition. Aqueous extracts of Leonotis ocymifolia, Leucas martinicensis, Albizia schimperiana and Senna occidentalis induced 100, 99.85, 99.31, and 96.36% inhibition of larval development, respectively; while hydro-alcoholic extracts of Albizia schimperiana induced 99.09 inhibition at the highest concentration tested (50mg/ml). 
Poor inhibition was recorded for hydro-alcoholic extracts of Senna occidentalis (9%) and Leonotis ocymifolia (37%) at 50mg/ml.\n\n\nCONCLUSIONS\nThe overall findings of the current study indicated that the evaluated medicinal plants have potential anthelmintic effect and further in vitro and in vivo evaluation is indispensable to make use of these plants.",
"title": ""
},
{
"docid": "f97093a848329227f363a8a073a6334a",
"text": "With the increasing in mobile application systems and a high competition between companies, that led to increase in the number of mobile application projects. Mobile software development is a group of process for creating software for mobile devices with limited resources like small screen, low-power. The development of mobile applications is a big challenging because of rapidly changing business requirements and technical constraints for mobile systems. So, developers faced the challenge of a dynamic environment and the Changing of mobile application requirements. Moreover, Mobile applications should adapt appropriate software development methods that act in response efficiently to these challenges. However, at the moment, there is limited knowledge about the suitability of different software practices for the development of mobile applications. According to many researchers ,Agile methodologies was found to be most suitable for mobile development projects as they are short time, require flexibility, reduces waste and time to market. Finally, in this research we are looking for a suitable process model that conforms to the requirement of mobile application, we are going to investigate agile development methods to find a way, making the development of mobile application easy and compatible with mobile device features.",
"title": ""
},
{
"docid": "bfde0c836406a25a08b7c95b330aaafa",
"text": "The concept of agile process models has gained great popularity in software (SW) development community in past few years. Agile models promote fast development. This property has certain drawbacks, such as poor documentation and bad quality. Fast development promotes use of agile process models in small-scale projects. This paper modifies and evaluates extreme programming (XP) process model and proposes a novel adaptive process mode based on these modifications. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "a8e665f8b7ea7473e5f7095d12db00ce",
"text": "Although there has been considerable progress in reducing cancer incidence in the United States, the number of cancer survivors continues to increase due to the aging and growth of the population and improvements in survival rates. As a result, it is increasingly important to understand the unique medical and psychosocial needs of survivors and be aware of resources that can assist patients, caregivers, and health care providers in navigating the various phases of cancer survivorship. To highlight the challenges and opportunities to serve these survivors, the American Cancer Society and the National Cancer Institute estimated the prevalence of cancer survivors on January 1, 2012 and January 1, 2022, by cancer site. Data from Surveillance, Epidemiology, and End Results (SEER) registries were used to describe median age and stage at diagnosis and survival; data from the National Cancer Data Base and the SEER-Medicare Database were used to describe patterns of cancer treatment. An estimated 13.7 million Americans with a history of cancer were alive on January 1, 2012, and by January 1, 2022, that number will increase to nearly 18 million. The 3 most prevalent cancers among males are prostate (43%), colorectal (9%), and melanoma of the skin (7%), and those among females are breast (41%), uterine corpus (8%), and colorectal (8%). This article summarizes common cancer treatments, survival rates, and posttreatment concerns and introduces the new National Cancer Survivorship Resource Center, which has engaged more than 100 volunteer survivorship experts nationwide to develop tools for cancer survivors, caregivers, health care professionals, advocates, and policy makers.",
"title": ""
},
{
"docid": "582b9c59e07922ae3d5b01309e030bba",
"text": "This paper describes two digital implementations of a new mathematical transform, namely, the second generation curvelet transform in two and three dimensions. The first digital transformation is based on unequally spaced fast Fourier transforms, while the second is based on the wrapping of specially selected Fourier samples. The two implementations essentially differ by the choice of spatial grid used to translate curvelets at each scale and angle. Both digital transformations return a table of digital curvelet coefficients indexed by a scale parameter, an orientation parameter, and a spatial location parameter. And both implementations are fast in the sense that they run in O(n2 logn) flops for n by n Cartesian arrays; in addition, they are also invertible, with rapid inversion algorithms of about the same complexity. Our digital transformations improve upon earlier implementations—based upon the first generation of curvelets—in the sense that they are conceptually simpler, faster, and far less redundant. The software CurveLab, which implements both transforms presented in this paper, is available at http://www.curvelet.org.",
"title": ""
},
{
"docid": "00f8c6d7fd58f06fc2672443de9773b7",
"text": "The utility industry has invested widely in smart grid (SG) over the past decade. They considered it the future electrical grid while the information and electricity are delivered in two-way flow. SG has many Artificial Intelligence (AI) applications such as Artificial Neural Network (ANN), Machine Learning (ML) and Deep Learning (DL). Recently, DL has been a hot topic for AI applications in many fields such as time series load forecasting. This paper introduces the common algorithms of DL in the literature applied to load forecasting problems in the SG and power systems. The intention of this survey is to explore the different applications of DL that are used in the power systems and smart grid load forecasting. In addition, it compares the accuracy results RMSE and MAE for the reviewed applications and shows the use of convolutional neural network CNN with k-means algorithm had a great percentage of reduction in terms of RMSE.",
"title": ""
},
{
"docid": "81537ba56a8f0b3beb29a03ed3c74425",
"text": "About ten years ago, soon after the Web’s birth, Web “search engines” were first by word of mouth. Soon, however, automated search engines became a world wide phenomenon, especially AltaVista at the beginning. I was pleasantly surprised by the amount and diversity of information made accessible by the Web search engines even in the mid 1990’s. The growth of the available Web pages is beyond most, if not all, people’s imagination. The search engines enabled people to find information, facts, and references among these Web pages.",
"title": ""
},
{
"docid": "04abe3f22084ab74ed3db8cbda680f62",
"text": "Standard targets are typically used for structural (white-box) evaluation of fingerprint readers, e.g., for calibrating imaging components of a reader. However, there is no standard method for behavioral (black-box) evaluation of fingerprint readers in operational settings where variations in finger placement by the user are encountered. The goal of this research is to design and fabricate 3D targets for repeatable behavioral evaluation of fingerprint readers. 2D calibration patterns with known characteristics (e.g., sinusoidal gratings of pre-specified orientation and frequency, and fingerprints with known singular points and minutiae) are projected onto a generic 3D finger surface to create electronic 3D targets. A state-of-the-art 3D printer (Stratasys Objet350 Connex) is used to fabricate wearable 3D targets with materials similar in hardness and elasticity to the human finger skin. The 3D printed targets are cleaned using 2M NaOH solution to obtain evaluation-ready 3D targets. Our experimental results show that: 1) features present in the 2D calibration pattern are preserved during the creation of the electronic 3D target; 2) features engraved on the electronic 3D target are preserved during the physical 3D target fabrication; and 3) intra-class variability between multiple impressions of the physical 3D target is small. We also demonstrate that the generated 3D targets are suitable for behavioral evaluation of three different (500/1000 ppi) PIV/Appendix F certified optical fingerprint readers in the operational settings.",
"title": ""
}
] | scidocsrr |
ae34a2fbc651d06af28faf80b5c7721f | Motion Blur Kernel Estimation via Deep Learning | [
{
"docid": "3e8b5f71776ab38861412f26f58e972e",
"text": "Camera shake leads to non-uniform image blurs. State-of-the-art methods for removing camera shake model the blur as a linear combination of homographically transformed versions of the true image. While this is conceptually interesting, the resulting algorithms are computationally demanding. In this paper we develop a forward model based on the efficient filter flow framework, incorporating the particularities of camera shake, and show how an efficient algorithm for blur removal can be obtained. Comprehensive comparisons on a number of real-world blurry images show that our approach is not only substantially faster, but it also leads to better deblurring results.",
"title": ""
},
{
"docid": "04d190daef0abb78f3c4d85e23297fbc",
"text": "Blind image deconvolution is an ill-posed problem that requires regularization to solve. However, many common forms of image prior used in this setting have a major drawback in that the minimum of the resulting cost function does not correspond to the true sharp solution. Accordingly, a range of additional methods are needed to yield good results (Bayesian methods, adaptive cost functions, alpha-matte extraction and edge localization). In this paper we introduce a new type of image regularization which gives lowest cost for the true sharp image. This allows a very simple cost formulation to be used for the blind deconvolution model, obviating the need for additional methods. Due to its simplicity the algorithm is fast and very robust. We demonstrate our method on real images with both spatially invariant and spatially varying blur.",
"title": ""
}
] | [
{
"docid": "f7a6cc4ebc1d2657175301dc05c86a7b",
"text": "Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature globally computed from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this paper, we present a new system for scene text detection by proposing a novel text-attentional convolutional neural network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text/non-text information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text/non-text classification. In addition, a powerful low-level detector called contrast-enhancement maximally stable extremal regions (MSERs) is developed, which extends the widely used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 data set, with an F-measure of 0.82, substantially improving the state-of-the-art results.",
"title": ""
},
{
"docid": "1ca7cf4fd64327b2eb77b7b3a3e37cc8",
"text": "The current study demonstrates the separability of spatial and verbal working memory resources among college students. In Experiment 1, we developed a spatial span task that taxes both the processing and storage components of spatial working memory. This measure correlates with spatial ability (spatial visualization) measures, but not with verbal ability measures. In contrast, the reading span test, a common test of verbal working memory, correlates with verbal ability measures, but not with spatial ability measures. Experiment 2, which uses an interference paradigm to cross the processing and storage demands of span tasks, replicates this dissociation and further demonstrates that both the processing and storage components of working memory tasks are important for predicting performance on spatial thinking and language processing tasks.",
"title": ""
},
{
"docid": "abb01393c17bf9e5dbb07952a80fd2ab",
"text": "We report a case of a 48-year-old male patient with “krokodil” drug-related osteonecrosis of both jaws. Patient history included 1.5 years of “krokodil” use, with 8-month drug withdrawal prior to surgery. The patient was HCV positive. On the maxilla, sequestrectomy was performed. On the mandible, sequestrectomy was combined with bone resection. From ramus to ramus, segmental defect was formed, which was not reconstructed with any method. Post-operative follow-up period was 3 years and no disease recurrence was noted. On 3-year post-operative orthopantomogram, newly formed mandibular bone was found. This phenomenon shows that spontaneous bone formation is possible after mandible segmental resection in osteonecrosis patients.",
"title": ""
},
{
"docid": "06da3a4efe9ef2f5978a84da09650659",
"text": "We present CryptoML, the first practical framework for provably secure and efficient delegation of a wide range of contemporary matrix-based machine learning (ML) applications on massive datasets. In CryptoML a delegating client with memory and computational resource constraints wishes to assign the storage and ML-related computations to the cloud servers, while preserving the privacy of its data. We first suggest the dominant components of delegation performance cost, and create a matrix sketching technique that aims at minimizing the cost by data pre-processing. We then propose a novel interactive delegation protocol based on the provably secure Shamir's secret sharing. The protocol is customized for our new sketching technique to maximize the client's resource efficiency. CryptoML shows a new trade-off between the efficiency of secure delegation and the accuracy of the ML task. Proof of concept evaluations corroborate applicability of CryptoML to datasets with billions of non-zero records.",
"title": ""
},
{
"docid": "7ff79a0701051f653257aefa2c3ba154",
"text": "As antivirus and network intrusion detection systems have increasingly proven insufficient to detect advanced threats, large security operations centers have moved to deploy endpoint-based sensors that provide deeper visibility into low-level events across their enterprises. Unfortunately, for many organizations in government and industry, the installation, maintenance, and resource requirements of these newer solutions pose barriers to adoption and are perceived as risks to organizations' missions. To mitigate this problem we investigated the utility of agentless detection of malicious endpoint behavior, using only the standard built-in Windows audit logging facility as our signal. We found that Windows audit logs, while emitting manageable sized data streams on the endpoints, provide enough information to allow robust detection of malicious behavior. Audit logs provide an effective, low-cost alternative to deploying additional expensive agent-based breach detection systems in many government and industrial settings, and can be used to detect, in our tests, 83% percent of malware samples with a 0.1% false positive rate. They can also supplement already existing host signature-based antivirus solutions, like Kaspersky, Symantec, and McAfee, detecting, in our testing environment, 78% of malware missed by those antivirus systems.",
"title": ""
},
{
"docid": "cf1967eaa2fe97a3de2b99aec0df27cb",
"text": "We present a high gain linearly polarized Ku-band planar array for mobile satellite TV reception. In contrast with previously presented three dimensional designs, the approach presented here results in a low profile planar array with a similar performance. The elevation scan is performed electronically, whereas the azimuth scan is done mechanically using an electric motor. The incident angle of the arriving satellite signal is generally large, varying between 25° to 65° depending on the location of the receiver, thereby creating a considerable off-axis scan loss. In order to alleviate this problem, and yet maintaining a planar design, the antenna array is designed to be consisting of subarrays with a fixed scanned beam at 45°. Therefore, the array of fixed-beam subarrays needs to be scanned ±20° around their peak beam, which results in a higher combined gain/directivity. The proposed antenna demonstrates the minimum measured gain of 23.1 dBi throughout the scan range (for 65° scan) with the peak gain of 26.5 dBi (for 32° scan) at 12 GHz while occupying a circular aperture of 26 cm in diameter.",
"title": ""
},
{
"docid": "5941a883218e22a06efd3bba1e851fc7",
"text": "Sparse data and irregular data access patterns are hugely important to many applications, such as molecular dynamics and data analytics. Accelerating applications with these characteristics requires maximizing usable bandwidth at all levels of the memory hierarchy, reducing latency, maximizing reuse of moved data, and minimizing the amount the data is moved in the first place. Many specialized data structures have evolved to meet these requisites for specific applications, however, there are no general solutions for improving the performance of sparse applications. The structure of the memory hierarchy itself, conspires against general hardware for accelerating sparse applications, being designed for efficient bulk transport of data versus one byte at a time. This paper presents a general solution for a programmable data rearrangement/reduction engine near-memory to deliver bulk byte-addressable data access. The key technology presented in this paper is the Sparse Data Reduction Engine (SPDRE), which builds previous similar efforts to provide a practical near-memory reorganization engine. In addition to the primary contribution, this paper describes a programmer interface that enables all combinations of rearrangement, analysis of the methodology on a small series of applications, and finally a discussion of future work.",
"title": ""
},
{
"docid": "76454b3376ec556025201a2f694e1f1c",
"text": "Recurrent neural networks (RNNs) provide state-of-the-art accuracy for performing analytics on datasets with sequence (e.g., language model). This paper studied a state-of-the-art RNN variant, Gated Recurrent Unit (GRU). We first proposed memoization optimization to avoid 3 out of the 6 dense matrix vector multiplications (SGEMVs) that are the majority of the computation in GRU. Then, we study the opportunities to accelerate the remaining SGEMVs using FPGAs, in comparison to 14-nm ASIC, GPU, and multi-core CPU. Results show that FPGA provides superior performance/Watt over CPU and GPU because FPGA's on-chip BRAMs, hard DSPs, and reconfigurable fabric allow for efficiently extracting fine-grained parallelisms from small/medium size matrices used by GRU. Moreover, newer FPGAs with more DSPs, on-chip BRAMs, and higher frequency have the potential to narrow the FPGA-ASIC efficiency gap.",
"title": ""
},
{
"docid": "79beaf249c8772ee1cbd535df0bf5a13",
"text": "Accurate vessel detection in retinal images is an important and difficult task. Detection is made more challenging in pathological images with the presence of exudates and other abnormalities. In this paper, we present a new unsupervised vessel segmentation approach to address this problem. A novel inpainting filter, called neighborhood estimator before filling, is proposed to inpaint exudates in a way that nearby false positives are significantly reduced during vessel enhancement. Retinal vascular enhancement is achieved with a multiple-scale Hessian approach. Experimental results show that the proposed vessel segmentation method outperforms state-of-the-art algorithms reported in the recent literature, both visually and in terms of quantitative measurements, with overall mean accuracy of 95.62% on the STARE dataset and 95.81% on the HRF dataset.",
"title": ""
},
{
"docid": "5bff5c54824d24b6ab72d01e0771db36",
"text": "Visual restoration and recognition are traditionally addressed in pipeline fashion, i.e. denoising followed by classification. Instead, observing correlations between the two tasks, for example clearer image will lead to better categorization and vice visa, we propose a joint framework for visual restoration and recognition for handwritten images, inspired by advances in deep autoencoder and multi-modality learning. Our model is a 3-pathway deep architecture with a hidden-layer representation which is shared by multi-inputs and outputs, and each branch can be composed of a multi-layer deep model. Thus, visual restoration and classification can be unified using shared representation via non-linear mapping, and model parameters can be learnt via backpropagation. Using MNIST and USPS data corrupted with structured noise, the proposed framework performs at least 20% better in classification than separate pipelines, as well as clearer recovered images.",
"title": ""
},
{
"docid": "2a79464b8674b689239f4579043bd525",
"text": "In this paper, we extend an attention-based neural machine translation (NMT) model by allowing it to access an entire training set of parallel sentence pairs even after training. The proposed approach consists of two stages. In the first stage– retrieval stage–, an off-the-shelf, black-box search engine is used to retrieve a small subset of sentence pairs from a training set given a source sentence. These pairs are further filtered based on a fuzzy matching score based on edit distance. In the second stage–translation stage–, a novel translation model, called search engine guided NMT (SEG-NMT), seamlessly uses both the source sentence and a set of retrieved sentence pairs to perform the translation. Empirical evaluation on three language pairs (En-Fr, En-De, and En-Es) shows that the proposed approach significantly outperforms the baseline approach and the improvement is more significant when more relevant sentence pairs were retrieved.",
"title": ""
},
{
"docid": "1df4fad2d5448364834608f4bc9d10a0",
"text": "What causes adolescents to be materialistic? Prior research shows parents and peers are an important influence. Researchers have viewed parents and peers as socialization agents that transmit consumption attitudes, goals, and motives to adolescents. We take a different approach, viewing parents and peers as important sources of emotional support and psychological well-being, which increase self-esteem in adolescents. Supportive parents and peers boost adolescents' self-esteem, which decreases their need to turn to material goods to develop positive selfperceptions. In a study with 12–18 year-olds, we find support for our view that self-esteem mediates the relationship between parent/peer influence and adolescent materialism. © 2010 Society for Consumer Psychology. Published by Elsevier Inc. All rights reserved. Rising levels of materialism among adolescents have raised concerns among parents, educators, and consumer advocates.More than half of 9–14 year-olds agree that, “when you grow up, the more money you have, the happier you are,” and over 60% agree that, “the only kind of job I want when I grow up is one that getsme a lot of money” (Goldberg, Gorn, Peracchio, & Bamossy, 2003). These trends have lead social scientists to conclude that adolescents today are “...the most brand-oriented, consumer-involved, and materialistic generation in history” (Schor, 2004, p. 13). What causes adolescents to bematerialistic? Themost consistent finding to date is that adolescent materialism is related to the interpersonal influences in their lives—notably, parents and peers. The vast majority of research is based on a social influence perspective, viewing parents and peers as socialization agents that transmit consumption attitudes, goals, and motives to adolescents through modeling, reinforcement, and social interaction. 
In early research, Churchill and Moschis (1979) proposed that adolescents learn rational aspects of consumption from their parents and social aspects of consumption (materialism) from their peers. Moore and Moschis (1981) examined family communication styles, suggesting that certain styles (socio-oriented) promote conformity to others' views, setting the stage for materialism. In later work, Goldberg et al. (2003) posited that parents transmit materialistic values to their offspring by modeling these values. Researchers have also reported positive correlations between materialism and socio-oriented family communication (Moore & Moschis, 1981), parents' materialism (Flouri, 2004; Goldberg et al., 2003), peer communication about consumption (Churchill & Moschis, 1979; Moschis & Churchill, 1978), and susceptibility to peer influence (Achenreiner, 1997; Banerjee & Dittmar, 2008; Roberts, Manolis, & Tanner, 2008). We take a different approach. Instead of viewing parents and peers as socialization agents that transmit consumption attitudes and values, we consider parents and peers as important sources of emotional support and psychological well-being, which lay the foundation for self-esteem in adolescents. We argue that supportive parents and peers boost adolescents' self-esteem, which decreases their need to embrace material goods as a way to develop positive self-perceptions. Prior research is suggestive of our perspective. 
In studies with young adults, researchers have found a link between (1) lower parental support (cold and controlling mothers) and a focus on financial success aspirations (Kasser, Ryan, Zax, & Sameroff, 1995: 18 year-olds) and (2) lower parental support (less affection and supervision) in divorced families and materialism (Rindfleisch, Burroughs, & Denton, 1997: 20–32 year-olds). Here, support refers to warmth, affection, nurturance, and acceptance (Becker, 1981; Ellis, Thomas, and Rollins, 1976); parental nurturance involves the development of caring relationships, in which parents reason with their children about moral conflicts, involve them in family decision making, and set high moral expectations (Maccoby, 1984; Staub, 1988). These studies do not focus on adolescents, do not examine peer factors, nor do they include measures of self-esteem or self-worth. But, they do suggest that parents and peers can influence materialism in ways other than transmitting consumption attitudes and values, which has been the focus of prior research on adolescent materialism. In this article, we seek preliminary evidence for our view by testing whether self-esteem mediates the relationship between parent/peer influence and adolescent materialism. We include parent and peer factors that inhibit or encourage adolescent materialism, which allows us to test self-esteem as a mediator under both conditions. For parental influence, we include parental support (inhibits materialism) and parents' materialism (encourages materialism). Both factors have appeared in prior materialism studies, but our interest here is whether self-esteem is a mediator of their influence on materialism. 
For peer influence, we include peer support (inhibits materialism) and peers' materialism (encourages materialism), with our interest being whether self-esteem is a mediator of their influence on materialism. These peer factors are new to materialism research and offer potentially new insights. Contrary to prior materialism research, which views peers as encouraging materialism among adolescents, we also consider the possibility that peers may be a positive influence by providing emotional support in the same way that parents do. Our research offers several contributions to understanding materialism in adolescents. First, we provide a broader perspective on the role of parents and peers as influences on adolescent materialism. The social influence perspective, which views parents and peers as transmitting consumption attitudes and values, has dominated materialism research with children and adolescents since its early days. We provide a broader perspective by considering parents and peers as much more than socialization agents—they contribute heavily to the sense of self-esteem that adolescents possess, which influences materialism. Second, our perspective provides a process explanation for why parents and peers influence materialism that can be empirically tested. Prior research offers a valuable set of findings about what factors correlate with adolescent materialism, but the process responsible for the correlation is left untested. Finally, we provide a parsimonious explanation for why different factors related to parent and peer influence affect adolescent materialism. Although the number of potential parent and peer factors is large, it is possible that there is a common thread (self-esteem) for why these factors influence adolescent materialism. 
Isolating mediators, such as self-esteem, could provide the basis for developing a conceptual framework to tie together findings across prior studies with different factors, providing a more unified explanation for why certain adolescents are more vulnerable to materialism.",
"title": ""
},
{
"docid": "e4f648d12495a2d7615fe13c84f35bbe",
"text": "We propose a simple modification to existing neural machine translation (NMT) models that enables using a single universal model to translate between multiple languages while allowing for language specific parameterization, and that can also be used for domain adaptation. Our approach requires no changes to the model architecture of a standard NMT system, but instead introduces a new component, the contextual parameter generator (CPG), that generates the parameters of the system (e.g., weights in a neural network). This parameter generator accepts source and target language embeddings as input, and generates the parameters for the encoder and the decoder, respectively. The rest of the model remains unchanged and is shared across all languages. We show how this simple modification enables the system to use monolingual data for training and also perform zero-shot translation. We further show it is able to surpass state-of-the-art performance for both the IWSLT-15 and IWSLT-17 datasets and that the learned language embeddings are able to uncover interesting relationships between languages.",
"title": ""
},
{
"docid": "24ecf1119592cc5496dc4994d463eabe",
"text": "To improve data availability and resilience MapReduce frameworks use file systems that replicate data uniformly. However, analysis of job logs from a large production cluster shows wide disparity in data popularity. Machines and racks storing popular content become bottlenecks; thereby increasing the completion times of jobs accessing this data even when there are machines with spare cycles in the cluster. To address this problem, we present Scarlett, a system that replicates blocks based on their popularity. By accurately predicting file popularity and working within hard bounds on additional storage, Scarlett causes minimal interference to running jobs. Trace driven simulations and experiments in two popular MapReduce frameworks (Hadoop, Dryad) show that Scarlett effectively alleviates hotspots and can speed up jobs by 20.2%.",
"title": ""
},
{
"docid": "ce37f72aa7b1433cdb18af526c115138",
"text": "Deep learning algorithms achieve high classification accuracy at the expense of significant computation cost. To address this cost, a number of quantization schemes have been proposed but most of these techniques focused on quantizing weights, which are relatively smaller in size compared to activations. This paper proposes a novel quantization scheme for activations during training that enables neural networks to work well with ultra low precision weights and activations without any significant accuracy degradation. This technique, PArameterized Clipping acTivation (PACT), uses an activation clipping parameter α that is optimized during training to find the right quantization scale. PACT allows quantizing activations to arbitrary bit precisions, while achieving much better accuracy relative to published state-of-the-art quantization schemes. We show, for the first time, that both weights and activations can be quantized to 4-bits of precision while still achieving accuracy comparable to full precision networks across a range of popular models and datasets. We also show that exploiting these reduced-precision computational units in hardware can enable a super-linear improvement in inferencing performance due to a significant reduction in the area of accelerator compute engines coupled with the ability to retain the quantized model and activation data in on-chip memories.",
"title": ""
},
{
"docid": "d4f953596e49393a4ca65e202eab725c",
"text": "This work integrates deep learning and symbolic programming paradigms into a unified method for deploying applications to a neuromorphic system. The approach removes the need for coordination among disjoint co-processors by embedding both types entirely on a neuromorphic processor. This integration provides a flexible approach for using each technique where it performs best. A single neuromorphic solution can seamlessly deploy neural networks for classifying sensor-driven noisy data obtained from the environment alongside programmed symbolic logic to process the input from the networks. We present a concrete implementation of the proposed framework using the TrueNorth neuromorphic processor to play blackjack using a pre-programmed optimal strategy algorithm combined with a neural network trained to classify card images as input. Future extensions of this approach will develop a symbolic neuromorphic compiler for automatically creating networks from a symbolic programming language.",
"title": ""
},
{
"docid": "9270af032d1adbf9829e7d723ff76849",
"text": "To detect illegal copies of copyrighted images, recent copy detection methods mostly rely on the bag-of-visual-words (BOW) model, in which local features are quantized into visual words for image matching. However, both the limited discriminability of local features and the BOW quantization errors will lead to many false local matches, which make it hard to distinguish similar images from copies. Geometric consistency verification is a popular technology for reducing the false matches, but it neglects global context information of local features and thus cannot solve this problem well. To address this problem, this paper proposes a global context verification scheme to filter false matches for copy detection. More specifically, after obtaining initial scale invariant feature transform (SIFT) matches between images based on the BOW quantization, the overlapping region-based global context descriptor (OR-GCD) is proposed for the verification of these matches to filter false matches. The OR-GCD not only encodes relatively rich global context information of SIFT features but also has good robustness and efficiency. Thus, it allows an effective and efficient verification. Furthermore, a fast image similarity measurement based on random verification is proposed to efficiently implement copy detection. In addition, we also extend the proposed method for partial-duplicate image detection. Extensive experiments demonstrate that our method achieves higher accuracy than the state-of-the-art methods, and has comparable efficiency to the baseline method based on the BOW quantization.",
"title": ""
},
{
"docid": "fc07af4d49f7b359e484381a0a88aff7",
"text": "In this paper, we develop the idea of a universal anytime intelligence test. The meaning of the terms “universal” and “anytime” is manifold here: the test should be able to measure the intelligence of any biological or artificial system that exists at this time or in the future. It should also be able to evaluate both inept and brilliant systems (any intelligence level) as well as very slow to very fast systems (any time scale). Also, the test may be interrupted at any time, producing an approximation to the intelligence score, in such a way that the more time is left for the test, the better the assessment will be. In order to do this, our test proposal is based on previous works on the measurement of machine intelligence based on Kolmogorov Complexity and universal distributions, which were developed in the late 1990s (C-tests and compression-enhanced Turing tests). It is also based on the more recent idea of measuring intelligence through dynamic/interactive tests held against a universal distribution of environments. We discuss some of these tests and highlight their limitations since we want to construct a test that is both general and practical. Consequently, we introduce many new ideas that develop early “compression tests” and the more recent definition of “universal intelligence” in order to design new “universal intelligence tests”, where a feasible implementation has been a design requirement. One of these tests is the “anytime intelligence test”, which adapts to the examinee’s level of intelligence in order to obtain an intelligence score within a limited time.",
"title": ""
},
{
"docid": "a56c98284e1ac38e9aa2e4aa4b7a87a9",
"text": "Background: The extrahepatic biliary tree with the exact anatomic features of the arterial supply observed by laparoscopic means has not been described heretofore. Iatrogenic injuries of the extrahepatic biliary tree and neighboring blood vessels are not rare. Accidents involving vessels or the common bile duct during laparoscopic cholecystectomy, with or without choledocotomy, can be avoided by careful dissection of Calot's triangle and the hepatoduodenal ligament. Methods: We performed 244 laparoscopic cholecystectomies over a 2-year period between January 1, 1995 and January 1, 1997. Results: In 187 of 244 consecutive cases (76.6%), we found a typical arterial supply anteromedial to the cystic duct, near the sentinel cystic lymph node. In the other cases, there was an atypical arterial supply, and 27 of these cases (11.1%) had no cystic artery in Calot's triangle. A typical blood supply and accessory arteries were observed in 18 cases (7.4%). Conclusion: Young surgeons who are not yet familiar with the handling of an anatomically abnormal cystic blood supply need to be more aware of the precise anatomy of the extrahepatic biliary tree.",
"title": ""
},
{
"docid": "aeb4af864a4e2435486a69f5694659dc",
"text": "A great amount of research has been developed around the early cognitive impairments that best predict the onset of Alzheimer's disease (AD). Given that mild cognitive impairment (MCI) is no longer considered to be an intermediate state between normal aging and AD, new paths have been traced to acquire further knowledge about this condition and its subtypes, and to determine which of them have a higher risk of conversion to AD. It is now known that other deficits besides episodic and semantic memory impairments may be present in the early stages of AD, such as visuospatial and executive function deficits. Furthermore, recent investigations have proven that the hippocampus and the medial temporal lobe structures are not only involved in memory functioning, but also in visual processes. These early changes in memory, visual, and executive processes may also be detected with the study of eye movement patterns in pathological conditions like MCI and AD. In the present review, we attempt to explore the existing literature concerning these patterns of oculomotor changes and how these changes are related to the early signs of AD. In particular, we argue that deficits in visual short-term memory, specifically in iconic memory, attention processes, and inhibitory control, may be found through the analysis of eye movement patterns, and we discuss how they might help to predict the progression from MCI to AD. We add that the study of eye movement patterns in these conditions, in combination with neuroimaging techniques and appropriate neuropsychological tasks based on rigorous concepts derived from cognitive psychology, may highlight the early presence of cognitive impairments in the course of the disease.",
"title": ""
}
] | scidocsrr |
cc7875ac90d3a8b3bcd7eb0e7a7fa1df | FEDD: Feature Extraction for Explicit Concept Drift Detection in time series | [
{
"docid": "50d63f05e453468f8e5234910e3d86d1",
"text": "Classifying streaming data requires the development of methods which are computationally efficient and able to cope with changes in the underlying distribution of the stream, a phenomenon known in the literature as concept drift. We propose a new method for detecting concept drift which uses an exponentially weighted moving average (EWMA) chart to monitor the misclassification rate of a streaming classifier. Our approach is modular and can hence be run in parallel with any underlying classifier to provide an additional layer of concept drift detection. Moreover our method is computationally efficient with overhead O(1) and works in a fully online manner with no need to store data points in memory. Unlike many existing approaches to concept drift detection, our method allows the rate of false positive detections to be controlled and kept constant over time.",
"title": ""
},
{
"docid": "8b63800da2019180d266297647e3dbc0",
"text": "Most of the work in machine learning assumes that examples are generated at random according to some stationary probability distribution. In this work we study the problem of learning when the class-probability distribution that generates the examples changes over time. We present a method for detection of changes in the probability distribution of examples. A central idea is the concept of context: a set of contiguous examples where the distribution is stationary. The idea behind the drift detection method is to control the online error-rate of the algorithm. The training examples are presented in sequence. When a new training example is available, it is classified using the actual model. Statistical theory guarantees that while the distribution is stationary, the error will decrease. When the distribution changes, the error will increase. The method controls the trace of the online error of the algorithm. For the actual context we define a warning level, and a drift level. A new context is declared if, in a sequence of examples, the error increases, reaching the warning level at example kw, and the drift level at example kd. This is an indication of a change in the distribution of the examples. The algorithm learns a new model using only the examples since kw. The method was tested with a set of eight artificial datasets and a real world dataset. We used three learning algorithms: a perceptron, a neural network and a decision tree. The experimental results show good performance in detecting drift and also in learning the new concept. We also observe that the method is independent of the learning algorithm.",
"title": ""
}
] | [
{
"docid": "327bbbee0087e15db04780291ded9fe6",
"text": "Semantic Reliability is a novel correctness criterion for multicast protocols based on the concept of message obsolescence: A message becomes obsolete when its content or purpose is superseded by a subsequent message. By exploiting obsolescence, a reliable multicast protocol may drop irrelevant messages to find additional buffer space for new messages. This makes the multicast protocol more resilient to transient performance perturbations of group members, thus improving throughput stability. This paper describes our experience in developing a suite of semantically reliable protocols. It summarizes the motivation, definition, and algorithmic issues and presents performance figures obtained with a running implementation. The data obtained experimentally is compared with analytic and simulation models. This comparison allows us to confirm the validity of these models and the usefulness of the approach. Finally, the paper reports the application of our prototype to distributed multiplayer games.",
"title": ""
},
{
"docid": "45cbfbe0a0bcf70910a6d6486fb858f0",
"text": "Grid cells in the entorhinal cortex of freely moving rats provide a strikingly periodic representation of self-location which is indicative of very specific computational mechanisms. However, the existence of grid cells in humans and their distribution throughout the brain are unknown. Here we show that the preferred firing directions of directionally modulated grid cells in rat entorhinal cortex are aligned with the grids, and that the spatial organization of grid-cell firing is more strongly apparent at faster than slower running speeds. Because the grids are also aligned with each other, we predicted a macroscopic signal visible to functional magnetic resonance imaging (fMRI) in humans. We then looked for this signal as participants explored a virtual reality environment, mimicking the rats’ foraging task: fMRI activation and adaptation showing a speed-modulated six-fold rotational symmetry in running direction. The signal was found in a network of entorhinal/subicular, posterior and medial parietal, lateral temporal and medial prefrontal areas. The effect was strongest in right entorhinal cortex, and the coherence of the directional signal across entorhinal cortex correlated with spatial memory performance. Our study illustrates the potential power of combining single-unit electrophysiology with fMRI in systems neuroscience. Our results provide evidence for grid-cell-like representations in humans, and implicate a specific type of neural representation in a network of regions which supports spatial cognition and also autobiographical memory.",
"title": ""
},
{
"docid": "a85496dc96f87ba4f0018ef8bb2c8695",
"text": "The negative capacitance (NC) of ferroelectric materials has paved the way for achieving sub-60-mV/decade switching feature in complementary metal-oxide-semiconductor (CMOS) field-effect transistors, by simply inserting a ferroelectric thin layer in the gate stack. However, in order to utilize the ferroelectric capacitor (as a breakthrough technique to overcome the Boltzmann limit of the device using thermionic emission process), the thickness of the ferroelectric layer should be scaled down to sub-10-nm for ease of integration with conventional CMOS logic devices. In this paper, we demonstrate an NC fin-shaped field-effect transistor (FinFET) with a 6-nm-thick HfZrO ferroelectric capacitor. The performance parameters of NC FinFET such as on-/off-state currents and subthreshold slope are compared with those of the conventional FinFET. Furthermore, a repetitive and reliable steep switching feature of the NC FinFET at various drain voltages is demonstrated.",
"title": ""
},
{
"docid": "7917c6d9a9d495190e5b7036db92d46d",
"text": "Background: A precise understanding of the anatomical structures of the heart and great vessels is essential for surgical planning in order to avoid unexpected findings. Rapid prototyping techniques are used to print three-dimensional (3D) replicas of patients’ cardiovascular anatomy based on 3D clinical images such as MRI. The purpose of this study is to explore the use of 3D patient-specific cardiovascular models using rapid prototyping techniques to improve surgical planning in patients with complex congenital heart disease.",
"title": ""
},
{
"docid": "3fbbe02ff11faa5cf6d537d5bcb0e658",
"text": "This paper reports on a mixed-method research project that examined the attitudes of computer users toward accidental/naive information security (InfoSec) behaviour. The aim of this research was to investigate the extent to which attitude data elicited from repertory grid technique (RGT) interviewees support their responses collected via an online survey questionnaire. Twenty five university students participated in this two-stage project. Individual attitude scores were calculated for each of the research methods and were compared across seven behavioural focus areas using Spearman product-moment correlation coefficient. The two sets of data exhibited a small-to-medium correlation when individual attitudes were analysed for each of the focus areas. In summary, this exploratory research indicated that the two research approaches were reasonably complementary and the RGT interview results tended to triangulate the attitude scores derived from the online survey questionnaire, particularly in regard to attitudes toward Incident Reporting behaviour, Email Use behaviour and Social Networking Site Use behaviour. The results also highlighted some attitude items in the online questionnaire that need to be reviewed for clarity, relevance and non-ambiguity.",
"title": ""
},
{
"docid": "3bc7adca896ab0c18fd8ec9b8c5b3911",
"text": "Traditional algorithms to design hand-crafted features for action recognition have been a hot research area in the last decade. Compared to RGB video, depth sequence is more insensitive to lighting changes and more discriminative due to its capability to catch geometric information of object. Unlike many existing methods for action recognition which depend on well-designed features, this paper studies deep learning-based action recognition using depth sequences and the corresponding skeleton joint information. Firstly, we construct a 3D-based Deep Convolutional Neural Network (3DCNN) to directly learn spatiotemporal features from raw depth sequences, then compute a joint based feature vector named JointVector for each sequence by taking into account the simple position and angle information between skeleton joints. Finally, support vector machine (SVM) classification results from 3DCNN learned features and JointVector are fused to take action recognition. Experimental results demonstrate that our method can learn feature representation which is time-invariant and viewpoint-invariant from depth sequences. The proposed method achieves comparable results to the state-of-the-art methods on the UTKinect-Action3D dataset and achieves superior performance in comparison to baseline methods on the MSR-Action3D dataset. We further investigate the generalization of the trained model by transferring the learned features from one dataset (MSR-Action3D) to another dataset (UTKinect-Action3D) without retraining and obtain very promising classification accuracy.",
"title": ""
},
{
"docid": "e7f8f8bd80b1366058f356d39af483b4",
"text": "To handle the colorization problem, we propose a deep patch-wise colorization model for grayscale images. Distinguished from some constructive color mapping models with complicated mathematical priors, we alternately apply two loss metric functions in the deep model to suppress the training errors under the convolutional neural network. To address the potential boundary artifacts, a refinement scheme is presented, inspired by guided filtering. In the experiment section, we summarize our network parameter settings in practice, including the patch size, number of layers and the convolution kernels. Our experiments demonstrate this model can output more satisfactory visual colorizations compared with the state-of-the-art methods. Moreover, we prove our method has extensive application domains and can be applied to stylistic colorization.",
"title": ""
},
{
"docid": "46d36fbc092f0f8e1e8154db1ad1f9de",
"text": "Multicarrier phase-based ranging is fast emerging as a cost-optimized solution for a wide variety of proximity-based applications due to its low power requirement, low hardware complexity and compatibility with existing standards such as ZigBee and 6LoWPAN. Given the potentially critical nature of the applications in which phase-based ranging can be deployed (e.g., access control, asset tracking), it is important to evaluate its security guarantees. Therefore, in this work, we investigate the security of multicarrier phase-based ranging systems and specifically focus on distance decreasing relay attacks that have proven detrimental to the security of proximity-based access control systems (e.g., vehicular passive keyless entry and start systems). We show that phase-based ranging, as well as its implementations, are vulnerable to a variety of distance reduction attacks. We describe different attack realizations and verify their feasibility by simulations and experiments on a commercial ranging system. Specifically, we successfully reduced the estimated range to less than 3 m even though the devices were more than 50 m apart. We discuss possible countermeasures against such attacks and illustrate their limitations, therefore demonstrating that phase-based ranging cannot be fully secured against distance decreasing attacks.",
"title": ""
},
{
"docid": "96d2a6082de66034759b521547e8c8d2",
"text": "Recent developments in deep convolutional neural networks (DCNNs) have shown impressive performance improvements on various object detection/recognition problems. This has been made possible due to the availability of large annotated data and a better understanding of the nonlinear mapping between images and class labels, as well as the affordability of powerful graphics processing units (GPUs). These developments in deep learning have also improved the capabilities of machines in understanding faces and automatically executing the tasks of face detection, pose estimation, landmark localization, and face recognition from unconstrained images and videos. In this article, we provide an overview of deep-learning methods used for face recognition. We discuss different modules involved in designing an automatic face recognition system and the role of deep learning for each of them. Some open issues regarding DCNNs for face recognition problems are then discussed. This article should prove valuable to scientists, engineers, and end users working in the fields of face recognition, security, visual surveillance, and biometrics.",
"title": ""
},
{
"docid": "7946e414908e2863ad0e2ba21dbee0be",
"text": "This paper presents a symbolic-execution-based approach and its implementation by POM/JLEC for checking the logical equivalence between two programs in the system replacement context. The primary contributions lie in the development of POM/JLEC, a fully automatic equivalence checker for Java enterprise systems. POM/JLEC consists of three main components: Domain Specific Pre-Processor for extracting the target code from the original system and adjusting it to a suitable scope for verification, Symbolic Execution for generating symbolic summaries, and solver-based EQuality comparison for comparing the symbolic summaries together and returning counter examples in the case of non-equivalence. We have evaluated POM/JLEC with a large-scale benchmark created from the function layer code of an industrial enterprise system. The evaluation result, with 54% of test cases passed, shows the feasibility of deploying its mature version into the software development industry.",
"title": ""
},
{
"docid": "064bb39aa50a484955cfde4f585f91d7",
"text": "Congenitally missing teeth are frequently presented to the dentist. Interdisciplinary approach may be needed for the proper treatment plan. The available treatment modalities to replace congenitally missing teeth include prosthodontic fixed and removable prostheses, resin bonded retainers, orthodontic movement of maxillary canine to the lateral incisor site and single tooth implants. Dental implants offer a promising treatment option for placement of congenitally missing teeth. Interdisciplinary approach may be needed in these cases. This article aims to present a case report of replacement of unilaterally congenitally missing maxillary lateral incisors with dental implants.",
"title": ""
},
{
"docid": "192663cdecdcfda1f86605adbc3c6a56",
"text": "With the introduction of IT to conduct business we accepted the loss of a human control step. For this reason, the introduction of new IT systems was accompanied by the development of the authorization concept. But since, in reality, there is no such thing as 100 per cent security; auditors are commissioned to examine all transactions for misconduct. Since the data exists in digital form already, it makes sense to use computer-based processes to analyse it. Such processes allow the auditor to carry out extensive checks within an acceptable timeframe and with reasonable effort. Once the algorithm has been defined, it only takes sufficient computing power to evaluate larger quantities of data. This contribution presents the state of the art for IT-based data analysis processes that can be used to identify fraudulent activities.",
"title": ""
},
{
"docid": "87dd4ba33b9f4ae20d60097960047551",
"text": "Lacking the presence of human and social elements is claimed one major weakness that is hindering the growth of e-commerce. The emergence of social commerce (SC) might help ameliorate this situation. Social commerce is a new evolution of e-commerce that combines the commercial and social activities by deploying social technologies into e-commerce sites. Social commerce reintroduces the social aspect of shopping to e-commerce, increasing the degree of social presences in online environment. Drawing upon the social presence theory, this study theorizes the nature of social aspect in online SC marketplace by proposing a set of three social presence variables. These variables are then hypothesized to have positive impacts on trusting beliefs which in turn result in online purchase behaviors. The research model is examined via data collected from a typical ecommerce site in China. Our findings suggest that social presence factors grounded in social technologies contribute significantly to the building of the trustworthy online exchanging relationships. In doing so, this paper confirms the positive role of social aspect in shaping online purchase behaviors, providing a theoretical evidence for the fusion of social and commercial activities. Finally, this paper introduces a new perspective of e-commerce and calls more attention to this new phenomenon.",
"title": ""
},
{
"docid": "5585cc22a0af9cf00656ac04b14ade5a",
"text": "Side-channel attacks pose a critical threat to the deployment of secure embedded systems. Differential-power analysis is a technique relying on measuring the power consumption of device while it computes a cryptographic primitive, and extracting the secret information from it exploiting the knowledge of the operations involving the key. There is no open literature describing how to properly employ Digital Signal Processing (DSP) techniques in order to improve the effectiveness of the attacks. This paper presents a pre-processing technique based on DSP, reducing the number of traces needed to perform an attack by an order of magnitude with respect to the results obtained with raw datasets, and puts it into practical use attacking a commercial 32-bit software implementation of AES running on a Cortex-M3 CPU. The main contribution of this paper is proposing a leakage model for software implemented cryptographic primitives and an effective framework to extract it.",
"title": ""
},
{
"docid": "bb7511f4137f487b2b8bf2f6f3f73a6a",
"text": "There is extensive evidence indicating that new neurons are generated in the dentate gyrus of the adult mammalian hippocampus, a region of the brain that is important for learning and memory. However, it is not known whether these new neurons become functional, as the methods used to study adult neurogenesis are limited to fixed tissue. We use here a retroviral vector expressing green fluorescent protein that only labels dividing cells, and that can be visualized in live hippocampal slices. We report that newly generated cells in the adult mouse hippocampus have neuronal morphology and can display passive membrane properties, action potentials and functional synaptic inputs similar to those found in mature dentate granule cells. Our findings demonstrate that newly generated cells mature into functional neurons in the adult mammalian brain.",
"title": ""
},
{
"docid": "a6959cc988542a077058e57a5d2c2eff",
"text": "A green and reliable method using supercritical fluid extraction (SFE) and molecular distillation (MD) was optimized for the separation and purification of standardized typical volatile components fraction (STVCF) from turmeric to solve the shortage of reference compounds in quality control (QC) of volatile components. A high quality essential oil with 76.0% typical components of turmeric was extracted by SFE. A sequential distillation strategy was performed by MD. The total recovery and purity of prepared STVCF were 97.3% and 90.3%, respectively. Additionally, a strategy, i.e., STVCF-based qualification and quantitative evaluation of major bioactive analytes by multiple calibrated components, was proposed to easily and effectively control the quality of turmeric. Compared with the individual calibration curve method, the STVCF-based quantification method was demonstrated to be credible and was effectively adapted for solving the shortage of reference volatile compounds and improving the QC of typical volatile components in turmeric, especially its functional products.",
"title": ""
},
{
"docid": "3412d99c29f7672fe3846173c9a4d734",
"text": "In the last decade, the ease of online payment has opened up many new opportunities for e-commerce, lowering the geographical boundaries for retail. While e-commerce is still gaining popularity, it is also the playground of fraudsters who try to misuse the transparency of online purchases and the transfer of credit card records. This paper proposes APATE, a novel approach to detect fraudulent credit card transactions using network-based extensions.",
"title": ""
},
{
"docid": "7fe99b63d2b3d94918e4b2f536053b1c",
"text": "Delay Tolerant Networks (DTN) are networks of self-organizing wireless nodes, where end-to-end connectivity is intermittent. In these networks, forwarding decisions are made using locally collected knowledge about node behavior (e.g., past contacts between nodes) to predict which nodes are likely to deliver a content or bring it closer to the destination. One promising way of predicting future contact opportunities is to aggregate contacts seen in the past to a social graph and use metrics from complex network analysis (e.g., centrality and similarity) to assess the utility of a node to carry a piece of content. This aggregation presents an inherent tradeoff between the amount of time-related information lost during this mapping and the predictive capability of complex network analysis in this context. In this paper, we use two recent DTN routing algorithms that rely on such complex network analysis, to show that contact aggregation significantly affects the performance of these protocols. We then propose simple contact mapping algorithms that demonstrate improved performance up to a factor of 4 in delivery ratio, and robustness to various connectivity scenarios for both protocols.",
"title": ""
},
{
"docid": "a5d0f584dd0be0d305b8e1247622bfb5",
"text": "In this paper, an all NMOS voltage-mode four-quadrant analog multiplier, based on a basic NMOS differential amplifier that can produce the output signal in voltage form without using resistors, is presented. The proposed circuit has been simulated with SPICE and achieved -3 dB bandwidth of 120 MHz. The power consumption is about 3.6 mW from a ±2.5 V power supply voltage, and the total harmonic distortion is 0.85% with a 1 V input signal.",
"title": ""
},
{
"docid": "49cafb7a5a42b7a8f8260a398c390504",
"text": "With the availability of vast collection of research articles on internet, textual analysis is an increasingly important technique in scientometric analysis. While the context in which it is used and the specific algorithms implemented may vary, typically any textual analysis exercise involves intensive pre-processing of input text which includes removing topically uninteresting terms (stop words). In this paper we argue that corpus specific stop words, which take into account the specificities of a collection of texts, improve textual analysis in scientometrics. We describe two relatively simple techniques to generate corpus-specific stop words; stop words lists following a Poisson distribution and keyword adjacency stop words lists. In a case study to extract keywords from scientific abstracts of research project funded by the European Research Council in the domain of Life sciences, we show that a combination of those techniques gives better recall values than standard stop words or any of the two techniques alone. The method we propose can be implemented to obtain stop words lists in an automatic way by using author provided keywords for a set of abstracts. The stop words lists generated can be updated easily by adding new texts to the training corpus. Conference Topic Methods and techniques",
"title": ""
}
] | scidocsrr |
fbc003566a8bd0894b4ad368cdbae99c | Video Imagination from a Single Image with Transformation Generation | [
{
"docid": "85b4873732e297c5df6d7c999587aa6e",
"text": "We address the problem of synthesizing new video frames in an existing video, either in-between existing frames (interpolation), or subsequent to them (extrapolation). This problem is challenging because video appearance and motion can be highly complex. Traditional optical-flow-based solutions often fail where flow estimation is challenging, while newer neural-network-based methods that hallucinate pixel values directly often produce blurry results. We combine the advantages of these two methods by training a deep network that learns to synthesize video frames by flowing pixel values from existing ones, which we call deep voxel flow. Our method requires no human supervision, and any video can be used as training data by dropping, and then learning to predict, existing frames. The technique is efficient, and can be applied at any video resolution. We demonstrate that our method produces results that both quantitatively and qualitatively improve upon the state-of-the-art.",
"title": ""
}
] | [
{
"docid": "8251aac995b17af8db2896adf820dc91",
"text": "This paper provides an overview of Data warehousing, Data Mining, OLAP, OLTP technologies, exploring the features, applications and the architecture of Data Warehousing. The data warehouse supports on-line analytical processing (OLAP), the functional and performance requirements of which are quite different from those of the on-line transaction processing (OLTP) applications traditionally supported by the operational databases. Data warehouses provide on-line analytical processing (OLAP) tools for the interactive analysis of multidimensional data of varied granularities, which facilitates effective data mining. Data warehousing and on-line analytical processing (OLAP) are essential elements of decision support, which has increasingly become a focus of the database industry. OLTP is customer-oriented and is used for transaction and query processing by clerks, clients and information technology professionals. An OLAP system is market-oriented and is used for data analysis by knowledge workers, including managers, executives and analysts. Data warehousing and OLAP have emerged as leading technologies that facilitate data storage, organization and then, significant retrieval. Decision support places some rather different requirements on database technology compared to traditional on-line transaction processing applications.",
"title": ""
},
{
"docid": "075742c6c4017f03fa72ebae69b4d857",
"text": "This document describes Virtual eXtensible Local Area Network (VXLAN), which is used to address the need for overlay networks within virtualized data centers accommodating multiple tenants. The scheme and the related protocols can be used in networks for cloud service providers and enterprise data centers. This memo documents the deployed VXLAN protocol for the benefit of the Internet community.",
"title": ""
},
{
"docid": "0522e81651c7b5ba4996bcfc067ad85f",
"text": "This paper argues that current technology-driven implementations of Smart Cities, although being an important step in the right direction, fall short in exploiting the most important human dimension of cities. The paper argues therefore in support of the concept of Human Smart Cities. In a Human Smart City, people rather than technology are the true actors of the urban \"smartness\". The creation of a participatory innovation ecosystem in which citizens and communities interact with public authorities and knowledge developers is key. Such collaborative interaction leads to co-designed user centered innovation services and calls for new governance models. The urban transformation in which citizens are the main \"drivers of change\" through their empowerment and motivation ensures that the major city challenges can be addressed, including sustainable behavior transformations. Furthermore, the authors argue that the city challenges can be more effectively addressed at the scale of neighborhood and they provide examples and experiences that demonstrate the viability, importance and impact of such approach. The paper builds on the experience of implementing Human Smart Cities projects in 27 European cities located in 17 different countries. Details of the technologies, methodologies, tools and policies are illustrated with examples extracted from the project My Neighbourhood.",
"title": ""
},
{
"docid": "13313b27f7ead27611d5957394e79a69",
"text": "Personality profiling is the task of detecting personality traits of authors based on writing style. Several personality typologies exist, however, the Myers-Briggs Type Indicator (MBTI) is particularly popular in the non-scientific community, and many people use it to analyse their own personality and talk about the results online. Therefore, large amounts of self-assessed data on MBTI are readily available on social-media platforms such as Twitter. We present a novel corpus of tweets annotated with the MBTI personality type and gender of their author for six Western European languages (Dutch, German, French, Italian, Portuguese and Spanish). We outline the corpus creation and annotation, show statistics of the obtained data distributions and present first baselines on Myers-Briggs personality profiling and gender prediction for all six languages.",
"title": ""
},
{
"docid": "a2accb08e0f41f7d8b5b2ca6781549cd",
"text": "Malaria remains the leading communicable disease in Ethiopia, with around one million clinical cases of malaria reported annually. The country currently has plans for elimination for specific geographic areas of the country. Human movement may lead to the maintenance of reservoirs of infection, complicating attempts to eliminate malaria. An unmatched case–control study was conducted with 560 adult patients at a Health Centre in central Ethiopia. Patients who received a malaria test were interviewed regarding their recent travel histories. Bivariate and multivariate analyses were conducted to determine if reported travel outside of the home village within the last month was related to malaria infection status. After adjusting for several known confounding factors, travel away from the home village in the last 30 days was a statistically significant risk factor for infection with Plasmodium falciparum (AOR 1.76; p=0.03) but not for infection with Plasmodium vivax (AOR 1.17; p=0.62). Male sex was strongly associated with any malaria infection (AOR 2.00; p=0.001). Given the importance of identifying reservoir infections, consideration of human movement patterns should factor into decisions regarding elimination and disease prevention, especially when targeted areas are limited to regions within a country.",
"title": ""
},
{
"docid": "006347cd3839d9fabd983e7cc379322d",
"text": "Recent progress in both Artificial Intelligence (AI) and Robotics have enabled the development of general purpose robot platforms that are capable of executing a wide variety of complex, temporally extended service tasks in open environments. This article introduces a novel, custom-designed multi-robot platform for research on AI, robotics, and especially Human-Robot Interaction (HRI) for service robots. Called BWIBots, the robots were designed as a part of the Building-Wide Intelligence (BWI) project at the University of Texas at Austin. The article begins with a description of, and justification for, the hardware and software design decisions underlying the BWIBots, with the aim of informing the design of such platforms in the future. It then proceeds to present an overview of various research contributions that have enabled the BWIBots to better (i) execute action sequences to complete user requests, (ii) efficiently ask questions to resolve user requests, (iii) understand human commands given in natural language, and (iv) understand human intention from afar. The article concludes with a look forward towards future research opportunities and applications enabled by the BWIBot platform.",
"title": ""
},
{
"docid": "8c2b0e93eae23235335deacade9660f0",
"text": "We design and implement a simple zero-knowledge argument protocol for NP whose communication complexity is proportional to the square-root of the verification circuit size. The protocol can be based on any collision-resistant hash function. Alternatively, it can be made non-interactive in the random oracle model, yielding concretely efficient zk-SNARKs that do not require a trusted setup or public-key cryptography.\n Our protocol is attractive not only for very large verification circuits but also for moderately large circuits that arise in applications. For instance, for verifying a SHA-256 preimage in zero-knowledge with 2^-40 soundness error, the communication complexity is roughly 44KB (or less than 34KB under a plausible conjecture), the prover running time is 140 ms, and the verifier running time is 62 ms. This proof is roughly 4 times shorter than a similar proof of ZKB++ (Chase et al., CCS 2017), an optimized variant of ZKBoo (Giacomelli et al., USENIX 2016).\n The communication complexity of our protocol is independent of the circuit structure and depends only on the number of gates. For 2^-40 soundness error, the communication becomes smaller than the circuit size for circuits containing roughly 3 million gates or more. Our efficiency advantages become even bigger in an amortized setting, where several instances need to be proven simultaneously.\n Our zero-knowledge protocol is obtained by applying an optimized version of the general transformation of Ishai et al. (STOC 2007) to a variant of the protocol for secure multiparty computation of Damgard and Ishai (Crypto 2006). It can be viewed as a simple zero-knowledge interactive PCP based on \"interleaved\" Reed-Solomon codes.",
"title": ""
},
{
"docid": "223a668b19281cb079a51ee128602de4",
"text": "Driving a vehicle is a task affected by an increasing number and a rising complexity of Driver Assistance Systems (DAS) resulting in a raised cognitive load of the driver, and in consequence to the distraction from the main activity of driving. A number of potential solutions have been proposed so far, however, although these techniques broaden the perception horizon (e. g. the introduction of the sense of touch as additional information modality or the utilization of multimodal instead of unimodal interfaces), they demand the attention of the driver too. In order to cope with the issues of workload and/or distraction, it would be essential to find a non-distracting and noninvasive solution for the emergence of information.\n In this work we have investigated the application of heart rate variability (HRV) analysis to electrocardiography (ECG) data for identifying driving situations of possible threat by monitoring and recording the autonomic arousal states of the driver. For verification we have collected ECG and global positioning system (GPS) data in more than 20 test journeys on two regularly driven routes during a period of two weeks.\n The first results have shown that an indicated difference of the arousal state of the driver for a dedicated point on a route, compared to its usual state, can be interpreted as a warning sign and used to notify the driver about this, perhaps safety critical, change. To provide evidence for this hypothesis it would be essential in the next step to conduct a large number of journeys on different times of the day, using different drivers and various roadways.",
"title": ""
},
{
"docid": "63ca8787121e3b392e130f9d451b11ea",
"text": "Frank K.Y. Chan Hong Kong University of Science and Technology",
"title": ""
},
{
"docid": "b6edc7b4bb6c8d66d237ad36cdabc908",
"text": "Especially for microcontroller and mobile applications, embedded nonvolatile memory is an important technology offering to reduce power and provide local persistent storage. This article describes a new resistive RAM device with fast write operation to improve the speed of embedded nonvolatile memories.",
"title": ""
},
{
"docid": "0349bef88d7dd5ca012fd4d2fd28cf0d",
"text": "Impedance-source converters, an emerging technology in electric energy conversion, overcome limitations of conventional solutions by the use of specific impedance-source networks. Focus of this paper is on the topologies of galvanically isolated impedance-source dc-dc converters. These converters are particularly appropriate for distributed generation systems with renewable or alternative energy sources, which require input voltage and load regulation in a wide range. We review here the basic topologies for researchers and engineers, and classify all the topologies of the impedance-source galvanically isolated dc-dc converters according to the element that transfers energy from the input to the output: a transformer, a coupled inductor, or their combination. This classification reveals advantages and disadvantages, as well as a wide space for further research. This paper also outlines the most promising research directions in this field.",
"title": ""
},
{
"docid": "f9cea5092a55c2c0578a1ad3f078078c",
"text": "To achieve a compact and lightweight surgical robot with force-sensing capability, in this paper, we propose a surgical robot called “S-surge,” which is developed for robot-assisted minimally invasive surgery, focusing mainly on its mechanical design and force-sensing system. The robot consists of a 4-degree-of-freedom (DOF) surgical instrument and a 3-DOF remote center-of-motion manipulator. The manipulator is designed by adopting a double-parallelogram mechanism and spherical parallel mechanism to provide advantages such as compactness, simplicity, improved accuracy, and high stiffness. Kinematic analysis was performed in order to optimize workspace. The surgical instrument enables multiaxis force sensing including a three-axis pulling force and single-axis grasping force. In this study, it will be verified that it is feasible to carry the entire robot around thanks to its light weight (4.7 kg); therefore, allowing the robot to be applicable for telesurgery in remote areas. Finally, it will be explained how we experimented with the performance of the robot and conducted tissue manipulating task using the motion and force sensing capability of the robot in a simulated surgical setting.",
"title": ""
},
{
"docid": "ad9b28c4f7b0d7e60296f20d54786559",
"text": "An exact algorithm to compute an optimal 3D oriented bounding box was published in 1985 by Joseph O'Rourke, but it is slow and extremely hard to implement. In this article we propose a new approach, where the computation of the minimal-volume OBB is formulated as an unconstrained optimization problem on the rotation group SO(3,ℝ). It is solved using a hybrid method combining the genetic and Nelder-Mead algorithms. This method is analyzed and then compared to the current state-of-the-art techniques. It is shown to be either faster or more reliable for any accuracy.",
"title": ""
},
{
"docid": "8a478da1c2091525762db35f1ac7af58",
"text": "In this paper, we present the design and performance of a portable, arbitrary waveform, multichannel constant current electrotactile stimulator that costs less than $30 in components. The stimulator consists of a stimulation controller and power supply that are less than half the size of a credit card and can produce ±15 mA at ±150 V. The design is easily extensible to multiple independent channels that can receive an arbitrary waveform input from a digital-to-analog converter, drawing only 0.9 W/channel (lasting 4–5 hours upon continuous stimulation using a 9 V battery). Finally, we compare the performance of our stimulator to similar stimulators both commercially available and developed in research.",
"title": ""
},
{
"docid": "2455e5f4d1ca0d7ad7e93803bc5c81f7",
"text": "Certain questions about memory address a relatively global, structural level of analysis. Is there one kind of memory or many? What brain structures or systems are involved in memory and what jobs do they do? One useful approach to such questions has focused on studies of neurological patients with memory impairment and parallel studies with animal models. Memory impairment sometimes occurs as a circumscribed disorder in the absence of other intellectual deficits 1-7. In such cases, the memory impairment occurs in the context of normal scores on conventional intelligence tests, normal immediate (digit span) memory, and intact memory for very remote events. The analysis of memory impairment can provide useful information about the organization of memory and about the function of the damaged neural structures. Clinically significant memory impairment, i.e. amnesia, can occur for a variety of reasons and is typically associated with bilateral damage to the medial temporal lobe or the diencephalic midline. The severity and purity of the amnesia can vary greatly depending on the extent and pattern of damage. Standard quantitative tests are available for the assessment of memory and other cognitive functions, so that the findings from different groups of study patients can be compared 8-10. The deficit in amnesia is readily detectable in tests of paired-associate learning and delayed recall. Indeed, amnesic patients are deficient in most tests of new learning, especially when they try to acquire an amount of information that exceeds what can be kept in mind through active rehearsal or when they try to retain information across a delay. This deficit occurs regardless of the sensory modality in which information is presented and regardless of whether memory is tested by recall or recognition techniques. Moreover, the memory impairment is not limited to artificial laboratory situations, where patients are instructed explicitly to learn material that occurs in a particular episode and then are later instructed explicitly to recall the material. For example, patients can be provided items of general information with no special instruction to learn (e.g. Angel Falls is located in Venezuela); and later they can simply be asked factual questions without any reference to a recent learning episode (e.g. Where is Angel Falls located?). In this case, amnesic patients are impaired both in tests of free recall as well as in tests of recognition memory, in which the correct answer is selected from among several alternatives 11. These aspects of amnesia show …",
"title": ""
},
{
"docid": "5dad217551cbbb7ba8476467c3469c6d",
"text": "This letter presents a semi-automatic approach to delineating road networks from very high resolution satellite images. The proposed method consists of three main steps. First, the geodesic method is used to extract the initial road segments that link the road seed points prescribed in advance by users. Next, a road probability map is produced based on these coarse road segments and a further direct thresholding operation separates the image into two classes of surfaces: the road and nonroad classes. Using the road class image, a kernel density estimation map is generated, upon which the geodesic method is used once again to link the foregoing road seed points. Experiments demonstrate that this proposed method can extract smooth correct road centerlines.",
"title": ""
},
{
"docid": "6d2efd95c2b3486bec5b4c2ab2db18ad",
"text": "The goal of this work is to replace objects in an RGB-D scene with corresponding 3D models from a library. We approach this problem by first detecting and segmenting object instances in the scene using the approach from Gupta et al. [13]. We use a convolutional neural network (CNN) to predict the pose of the object. This CNN is trained using pixel normals in images containing rendered synthetic objects. When tested on real data, it outperforms alternative algorithms trained on real data. We then use this coarse pose estimate along with the inferred pixel support to align a small number of prototypical models to the data, and place the model that fits the best into the scene. We observe a 48% relative improvement in performance at the task of 3D detection over the current state-of-the-art [33], while being an order of magnitude faster at the same time.",
"title": ""
},
{
"docid": "8d8f0268ffaf1254f236c5464ab2bdf6",
"text": "A primary design decision in HTTP/2, the successor of HTTP/1.1, is object multiplexing. While multiplexing improves web performance in many scenarios, it still has several drawbacks due to complex cross-layer interactions. In this paper, we propose a novel multiplexing architecture called TM that overcomes many of these limitations. TM strategically leverages multiple concurrent multiplexing pipes in a transparent manner, and eliminates various types of head-of-line blocking that can severely impact user experience. TM works beyond HTTP over TCP and applies to a wide range of application and transport protocols. Extensive evaluations on LTE and wired networks show that TM substantially improves performance e.g., reduces web page load time by an average of 24% compared to SPDY, which is the basis for HTTP/2. For lossy links and concurrent transfers, the improvements are more pronounced: compared to SPDY, TM achieves up to 42% of average PLT reduction under losses and up to 90% if concurrent transfers exist.",
"title": ""
},
{
"docid": "5a0fe40414f7881cc262800a43dfe4d0",
"text": "In this work, a passive rectifier circuit is presented, which is operating at 868 MHz. It allows energy harvesting from low power RF waves with a high efficiency. It consists of a novel multiplier circuit design and high quality components to reduce parasitic effects, losses and reaches a low startup voltage. Using lower capacitor rises up the switching speed of the whole circuit. An inductor L serves to store energy in a magnetic field during the negative cycle wave and returns it during the positive one. A low pass filter is arranged in cascade with the rectifier circuit to reduce ripple at high frequencies and to get a stable DC signal. A 50 kΩ load is added at the output to measure the output power and to visualize the behavior of the whole circuit. Simulation results show an outstanding potential of this RF-DC converter witch has a relative high sensitivity beginning with -40 dBm.",
"title": ""
},
{
"docid": "f89cebba789e46a1238f3174830c6292",
"text": "A hand injury can greatly affect a person's daily life. Physicians must evaluate the state of recovery of a patient's injured hand. However, current manual evaluations of hand functions are imprecise and inconvenient. In this paper, a data glove embedded with 9-axis inertial sensors and force sensitive resistors is proposed. The proposed data glove system enables hand movement to be tracked in real-time. In addition, the system can be used to obtain useful parameters for physicians, is an efficient tool for evaluating the hand function of patients, and can improve the quality of hand rehabilitation.",
"title": ""
}
] | scidocsrr |
59847000e175024b7b600b79e60d9de5 | Circumferential Traveling Wave Slot Array on Cylindrical Substrate Integrated Waveguide (CSIW) | [
{
"docid": "24151cf5d4481ba03e6ffd1ca29f3441",
"text": "The design, fabrication and characterization of 79 GHz slot antennas based on substrate integrated waveguides (SIW) are presented in this paper. All the prototypes are fabricated in a polyimide flex foil using printed circuit board (PCB) fabrication processes. A novel concept is used to minimize the leakage losses of the SIWs at millimeter wave frequencies. Different losses in the SIWs are analyzed. SIW-based single slot antenna, longitudinal and four-by-four slot array antennas are numerically and experimentally studied. Measurements of the antennas show approximately 4.7%, 5.4% and 10.7% impedance bandwidth (S11=-10 dB) with 2.8 dBi, 6.0 dBi and 11.0 dBi maximum antenna gain around 79 GHz, respectively. The measured results are in good agreement with the numerical simulations.",
"title": ""
},
{
"docid": "97a8c2ba66f6fdb917d25729a1874d92",
"text": "Transverse slot array antennas fed by a half-mode substrate integrated waveguide (HMSIW) are proposed and developed in this paper. The design concept of these new radiating structures is based on the study of the field distribution and phase constant along the HMSIW as well as on the resonant characteristics of a single slot etched on its top conducting wall. Two types of HMSIW-fed slot array antennas, operating, respectively, in X-band and Ka-band, are designed following a procedure similar to the design of slot array antennas fed by a dielectric-filled rectangular waveguide. Compared with slot array antennas fed by a conventional rectangular waveguide, such proposed HMSIW-fed slot array antennas possess the advantages of low profile, compact size, low cost, and easy integration with other microwave and millimeter wave planar circuits. It is worth noting that the width of HMSIW slot array antennas is reduced by nearly half compared to that of slot array antennas fed by a substrate integrated waveguide.",
"title": ""
},
{
"docid": "29c6cba747a2ad280d2185bfcd5866e2",
"text": "A millimeter-wave shaped-beam substrate integrated conformal array antenna is demonstrated in this paper. After discussing the influence of conformal shape on the characteristics of a substrate integrated waveguide (SIW) and a radiating slot, an array mounted on a cylindrical surface with a radius of 20 mm, i.e., 2.3 λ, is synthesized at the center frequency of 35 GHz. All components, including a 1-to-8 divider, a phase compensated network and an 8 × 8 slot array are fabricated in a single dielectric substrate together. In measurement, it has a - 27.4 dB sidelobe level (SLL) beam in H-plane and a flat-topped fan beam with -38° ~ 37° 3 dB beamwidth in E-plane at the center frequency of 35 GHz. The cross polarization is lower than -41.7 dB at the beam direction. Experimental results agree well with simulations, thus validating our design. This SIW scheme is able to solve the difficulty of integration between conformal array elements and a feed network in millimeter-wave frequency band, while avoid radiation leakage and element-to-element parasitic cross-coupling from the feed network.",
"title": ""
},
{
"docid": "9b0c0001e3bf9d3618928bbfcad07ae9",
"text": "A Ka-band compact single layer substrate integrated waveguide monopulse slot array antenna for the application of monopulse tracking system is designed, fabricated and measured. The feeding network as well as the monopulse comparator and the subarrays is integrated on the same dielectric with the size of 140 mmtimes130 mm. The bandwidth ( S11 < -10 dB) of the antenna is 7.39% with an operating frequency range of 30.80 GHz-33.14 GHz. The maximum gain at 31.5 GHz is 18.74 dB and the maximum null depth is -46.3 dB. The sum- and difference patterns of three planes: H-plane, E-plane and diagonal plane are measured and presented.",
"title": ""
},
{
"docid": "a7ca3ffcae09ad267281eb494532dc54",
"text": "A substrate integrated metamaterial-based leaky-wave antenna is proposed to improve its boresight radiation bandwidth. The proposed leaky-wave antenna based on a composite right/left-handed substrate integrated waveguide consists of two leaky-wave radiator elements which are with different unit cells. The dual-element antenna prototype features boresight gain of 12.0 dBi with variation of 1.0 dB over the frequency range of 8.775-9.15 GHz or 4.2%. In addition, the antenna is able to offer a beam scanning from to with frequency from 8.25 GHz to 13.0 GHz.",
"title": ""
}
] | [
{
"docid": "c6d3f20e9d535faab83fb34cec0fdb5b",
"text": "Over the past two decades several attempts have been made to address the problem of face recognition and a voluminous literature has been produced. Current face recognition systems are able to perform very well in controlled environments e.g. frontal face recognition, where face images are acquired under frontal pose with strict constraints as defined in related face recognition standards. However, in unconstrained situations where a face may be captured in outdoor environments, under arbitrary illumination and large pose variations these systems fail to work. With the current focus of research to deal with these problems, much attention has been devoted in the facial feature extraction stage. Facial feature extraction is the most important step in face recognition. Several studies have been made to answer the questions like what features to use, how to describe them and several feature extraction techniques have been proposed. While many comprehensive literature reviews exist for face recognition a complete reference for different feature extraction techniques and their advantages/disadvantages with regards to a typical face recognition task in unconstrained scenarios is much needed. In this chapter we present a comprehensive review of the most relevant feature extraction techniques used in 2D face recognition and introduce a new feature extraction technique termed as Face-GLOH-signature to be used in face recognition for the first time (Sarfraz and Hellwich, 2008), which has a number of advantages over the commonly used feature descriptions in the context of unconstrained face recognition. The goal of feature extraction is to find a specific representation of the data that can highlight relevant information. This representation can be found by maximizing a criterion or can be a pre-defined representation. 
Usually, a face image is represented by a high dimensional vector containing pixel values (holistic representation) or a set of vectors where each vector summarizes the underlying content of a local region by using a high level 1",
"title": ""
},
{
"docid": "d001d61e90dd38eb0eab0c8d4af9d2a6",
"text": "Wireless LANs, especially WiFi, have been pervasively deployed and have fostered myriad wireless communication services and ubiquitous computing applications. A primary concern in designing each scenario-tailored application is to combat harsh indoor propagation environments, particularly Non-Line-Of-Sight (NLOS) propagation. The ability to distinguish Line-Of-Sight (LOS) path from NLOS paths acts as a key enabler for adaptive communication, cognitive radios, robust localization, etc. Enabling such capability on commodity WiFi infrastructure, however, is prohibitive due to the coarse multipath resolution with mere MAC layer RSSI. In this work, we dive into the PHY layer and strive to eliminate irrelevant noise and NLOS paths with long delays from the multipath channel responses. To further break away from the intrinsic bandwidth limit of WiFi, we extend to the spatial domain and harness natural mobility to magnify the randomness of NLOS paths while retaining the deterministic nature of the LOS component. We prototype LiFi, a statistical LOS identification scheme for commodity WiFi infrastructure and evaluate it in typical indoor environments covering an area of 1500 m2. Experimental results demonstrate an overall LOS identification rate of 90.4% with a false alarm rate of 9.3%.",
"title": ""
},
{
"docid": "8fa0c59e04193ff1375b3ed544847229",
"text": "In this paper, the problem of workspace analysis of spherical parallel manipulators (SPMs) is addressed with respect to a spherical robotic wrist. The wrist is designed following a modular approach and capable of a unlimited rotation of rolling. An equation dealing with singularity surfaces is derived and branches of the singularity surfaces are identified. By using the Euler parameters, the singularity surfaces are generated in a solid unit sphere, the workspace analysis and dexterity evaluation hence being able to be performed in the confined region of the sphere. Examples of workspace evaluation of the spherical wrist and general SPMs are included to demonstrate the application of the proposed method.",
"title": ""
},
{
"docid": "c4fe9fd7e506e18f1a38bc71b7434b99",
"text": "We introduce Evenly Cascaded convolutional Network (ECN), a neural network taking inspiration from the cascade algorithm of wavelet analysis. ECN employs two feature streams - a low-level and high-level steam. At each layer these streams interact, such that low-level features are modulated using advanced perspectives from the high-level stream. ECN is evenly structured through resizing feature map dimensions by a consistent ratio, which removes the burden of ad-hoc specification of feature map dimensions. ECN produces easily interpretable features maps, a result whose intuition can be understood in the context of scale-space theory. We demonstrate that ECN’s design facilitates the training process through providing easily trainable shortcuts. We report new state-of-the-art results for small networks, without the need for additional treatment such as pruning or compression - a consequence of ECN’s simple structure and direct training. A 6-layered ECN design with under 500k parameters achieves 95.24% and 78.99% accuracy on CIFAR-10 and CIFAR-100 datasets, respectively, outperforming the current state-of-the-art on small parameter networks, and a 3 million parameter ECN produces results competitive to the state-of-the-art.",
"title": ""
},
{
"docid": "4f1949af3455bd5741e731a9a60ecdf1",
"text": "BACKGROUND\nGuava leaf tea (GLT), exhibiting a diversity of medicinal bioactivities, has become a popularly consumed daily beverage. To improve the product quality, a new process was recommended to the Ser-Tou Farmers' Association (SFA), who began field production in 2005. The new process comprised simplified steps: one bud-two leaves were plucked at 3:00-6:00 am, in the early dawn period, followed by withering at ambient temperature (25-28 °C), rolling at 50 °C for 50-70 min, with or without fermentation, then drying at 45-50 °C for 70-90 min, and finally sorted.\n\n\nRESULTS\nThe product manufactured by this new process (named herein GLTSF) exhibited higher contents (in mg g(-1), based on dry ethyl acetate fraction/methanolic extract) of polyphenolics (417.9 ± 12.3) and flavonoids (452.5 ± 32.3) containing a compositional profile much simpler than previously found: total quercetins (190.3 ± 9.1), total myricetin (3.3 ± 0.9), total catechins (36.4 ± 5.3), gallic acid (8.8 ± 0.6), ellagic acid (39.1 ± 6.4) and tannins (2.5 ± 9.1).\n\n\nCONCLUSION\nWe have successfully developed a new process for manufacturing GLTSF with a unique polyphenolic profile. Such characteristic compositional distribution can be ascribed to the right harvesting hour in the early dawn and appropriate treatment process at low temperature, avoiding direct sunlight.",
"title": ""
},
{
"docid": "3e2c79715d8ae80e952d1aabf03db540",
"text": "Professor Yrjo Paatero, in 1961, first introduced the Orthopantomography (OPG) [1]. It has been extensively used in dentistry for analysing the number and type of teeth present, caries, impacted teeth, root resorption, ankylosis, shape of the condyles [2], temporomandibular joints, sinuses, fractures, cysts, tumours and alveolar bone level [3,4]. Panoramic radiography is advised to all patients seeking orthodontic treatment; including Class I malocclusions [5].",
"title": ""
},
{
"docid": "fc3d4b4ac0d13b34aeadf5806013689d",
"text": "Internet of Things (IoT) is one of the emerging technologies of this century and its various aspects, such as the Infrastructure, Security, Architecture and Privacy, play an important role in shaping the future of the digitalised world. Internet of Things devices are connected through sensors which have significant impacts on the data and its security. In this research, we used IoT five layered architecture of the Internet of Things to address the security and private issues of IoT enabled services and applications. Furthermore, a detailed survey on Internet of Things infrastructure, architecture, security, and privacy of the heterogeneous objects were presented. The paper identifies the major challenge in the field of IoT; one of them is to secure the data while accessing the objects through sensing machines. This research advocates the importance of securing the IoT ecosystem at each layer resulting in an enhanced overall security of the connected devices as well as the data generated. Thus, this paper put forwards a security model to be utilised by the researchers, manufacturers and developers of IoT devices, applications and services.",
"title": ""
},
{
"docid": "468306f51c998bfe6792df6acfd784f2",
"text": "We propose a novel non-rigid image registration algorithm that is built upon fully convolutional networks (FCNs) to optimize and learn spatial transformations between pairs of images to be registered. Different from most existing deep learning based image registration methods that learn spatial transformations from training data with known corresponding spatial transformations, our method directly estimates spatial transformations between pairs of images by maximizing an image-wise similarity metric between fixed and deformed moving images, similar to conventional image registration algorithms. At the same time, our method also learns FCNs for encoding the spatial transformations at the same spatial resolution of images to be registered, rather than learning coarse-grained spatial transformation information. The image registration is implemented in a multi-resolution image registration framework to jointly optimize and learn spatial transformations and FCNs at different resolutions with deep selfsupervision through typical feedforward and backpropagation computation. Since our method simultaneously optimizes and learns spatial transformations for the image registration, our method can be directly used to register a pair of images, and the registration of a set of images is also a training procedure for FCNs so that the trained FCNs can be directly adopted to register new images by feedforward computation of the learned FCNs without any optimization. The proposed method has been evaluated for registering 3D structural brain magnetic resonance (MR) images and obtained better performance than state-of-the-art image registration algorithms.",
"title": ""
},
{
"docid": "7121d534b758bab829e1db31d0ce2e43",
"text": "With the increased complexity of modern computer attacks, there is a need for defenders not only to detect malicious activity as it happens, but also to predict the specific steps that will be taken by an adversary when performing an attack. However this is still an open research problem, and previous research in predicting malicious events only looked at binary outcomes (eg. whether an attack would happen or not), but not at the specific steps that an attacker would undertake. To fill this gap we present Tiresias xspace, a system that leverages Recurrent Neural Networks (RNNs) to predict future events on a machine, based on previous observations. We test Tiresias xspace on a dataset of 3.4 billion security events collected from a commercial intrusion prevention system, and show that our approach is effective in predicting the next event that will occur on a machine with a precision of up to 0.93. We also show that the models learned by Tiresias xspace are reasonably stable over time, and provide a mechanism that can identify sudden drops in precision and trigger a retraining of the system. Finally, we show that the long-term memory typical of RNNs is key in performing event prediction, rendering simpler methods not up to the task.",
"title": ""
},
{
"docid": "7ca7bca5a704681e8b8c7d213c6ad990",
"text": "Three experiments in naming Chinese characters are presented here to address the relationships between character frequency, consistency, and regularity effects in Chinese character naming. Significant interactions between character consistency and frequency were found across the three experiments, regardless of whether the phonetic radical of the phonogram is a legitimate character in its own right or not. These findings suggest that the phonological information embedded in Chinese characters has an influence upon the naming process of Chinese characters. Furthermore, phonetic radicals exist as computation units mainly because they are structures occurring systematically within Chinese characters, not because they can function as recognized, freestanding characters. On the other hand, the significant interaction between regularity and consistency found in the first experiment suggests that these two factors affect Chinese character naming in different ways. These findings are accounted for within interactive activation frameworks and a connectionist model.",
"title": ""
},
{
"docid": "4b6da0b9c88f4d94abfbbcb08bb0fc43",
"text": "In this paper we show how word embeddings can be used to increase the effectiveness of a state-of-the art Locality Sensitive Hashing (LSH) based first story detection (FSD) system over a standard tweet corpus. Vocabulary mismatch, in which related tweets use different words, is a serious hindrance to the effectiveness of a modern FSD system. In this case, a tweet could be flagged as a first story even if a related tweet, which uses different but synonymous words, was already returned as a first story. In this work, we propose a novel approach to mitigate this problem of lexical variation, based on tweet expansion. In particular, we propose to expand tweets with semantically related paraphrases identified via automatically mined word embeddings over a background tweet corpus. Through experimentation on a large data stream comprised of 50 million tweets, we show that FSD effectiveness can be improved by 9.5% over a state-of-the-art FSD system.",
"title": ""
},
{
"docid": "6f989e22917aa2f99749701c8509fcca",
"text": "The reflection of an object can be distorted by undulations of the reflector, be it a funhouse mirror or a fluid surface. Painters and photographers have long exploited this effect, for example, in imaging scenery distorted by ripples on a lake. Here, we use this phenomenon to visualize micrometric surface waves generated as a millimetric droplet bounces on the surface of a vibrating fluid bath (Bush 2015b). This system, discovered a decade ago (Couder et al. 2005), is of current interest as a hydrodynamic quantum analog; specifically, the walking droplets exhibit several features reminiscent of quantum particles (Bush 2015a).",
"title": ""
},
{
"docid": "4ac88aa31bff5b4942dd062d42879d27",
"text": "In this paper we demonstrate the potential of data analytics methods for location-based services. We develop a support system that enables user-based relocation of vehicles in free-floating carsharing models. In these businesses, customers can rent and leave cars anywhere within a predefined operational area. However, due to this flexibility, freefloating carsharing is prone to supply and demand imbalance. The support system detects imbalances by analyzing patterns in vehicle idle times. Alternative rental destinations are proposed to customers in exchange for a discount. Using data on 250,000 rentals in the city of Vancouver, we evaluate the relocation system through a simulation. The results show that our approach decreases the average vehicle idle time by up to 16 percent, suggesting a more balanced state of supply and demand. Employing the system results in a higher degree of vehicle utilization and leads to a substantial increase of profits for providers.",
"title": ""
},
{
"docid": "9544b2cc301e2e3f170f050de659dda4",
"text": "In SDN, the underlying infrastructure is usually abstracted for applications that can treat the network as a logical or virtual entity. Commonly, the ``mappings\" between virtual abstractions and their actual physical implementations are not one-to-one, e.g., a single \"big switch\" abstract object might be implemented using a distributed set of physical devices. A key question is, what abstractions could be mapped to multiple physical elements while faithfully preserving their native semantics? E.g., can an application developer always expect her abstract \"big switch\" to act exactly as a physical big switch, despite being implemented using multiple physical switches in reality?\n We show that the answer to that question is \"no\" for existing virtual-to-physical mapping techniques: behavior can differ between the virtual \"big switch\" and the physical network, providing incorrect application-level behavior. We also show that that those incorrect behaviors occur despite the fact that the most pervasive and commonly-used correctness invariants, such as per-packet consistency, are preserved throughout. These examples demonstrate that for practical notions of correctness, new systems and a new analytical framework are needed. We take the first steps by defining end-to-end correctness, a correctness condition that focuses on applications only, and outline a research vision to obtain virtualization systems with correct virtual to physical mappings.",
"title": ""
},
{
"docid": "f1c210ee9f70db482d134bf544984f77",
"text": "Character segmentation plays an important role in the Arabic optical character recognition (OCR) system, because the letters incorrectly segmented perform to unrecognized character. Accuracy of character recognition depends mainly on the segmentation algorithm used. The domain of off-line handwriting in the Arabic script presents unique technical challenges and has been addressed more recently than other domains. Many different segmentation algorithms for off-line Arabic handwriting recognition have been proposed and applied to various types of word images. This paper provides modify segmentation algorithm based on bounding box to improve segmentation accuracy using two main stages: preprocessing stage and segmentation stage. In preprocessing stage, used a set of methods such as noise removal, binarization, skew correction, thinning and slant correction, which retains shape of the character. In segmentation stage, the modify bounding box algorithm is done. In this algorithm a distance analysis use on bounding boxes of two connected components (CCs): main (CCs), auxiliary (CCs). The modified algorithm is presented and taking place according to three cases. Cut points also determined using structural features for segmentation character. The modified bounding box algorithm has been successfully tested on 450 word images of Arabic handwritten words. The results were very promising, indicating the efficiency of the suggested",
"title": ""
},
{
"docid": "42ca37dd78bf8b52da5739ad442f203f",
"text": "Frame interpolation attempts to synthesise intermediate frames given one or more consecutive video frames. In recent years, deep learning approaches, and in particular convolutional neural networks, have succeeded at tackling lowand high-level computer vision problems including frame interpolation. There are two main pursuits in this line of research, namely algorithm efficiency and reconstruction quality. In this paper, we present a multi-scale generative adversarial network for frame interpolation (FIGAN). To maximise the efficiency of our network, we propose a novel multi-scale residual estimation module where the predicted flow and synthesised frame are constructed in a coarse-tofine fashion. To improve the quality of synthesised intermediate video frames, our network is jointly supervised at different levels with a perceptual loss function that consists of an adversarial and two content losses. We evaluate the proposed approach using a collection of 60fps videos from YouTube-8m. Our results improve the state-of-the-art accuracy and efficiency, and a subjective visual quality comparable to the best performing interpolation method.",
"title": ""
},
{
"docid": "2f83ca2bdd8401334877ae4406a4491c",
"text": "Mobile IP is the current standard for supporting macromobility of mobile hosts. However, in the case of micromobility support, there are several competing proposals. In this paper, we present the design, implementation, and performance evaluation of HAWAII, a domain-based approach for supporting mobility. HAWAII uses specialized path setup schemes which install host-based forwarding entries in specific routers to support intra-domain micromobility. These path setup schemes deliver excellent performance by reducing mobility related disruption to user applications. Also, mobile hosts retain their network address while moving within the domain, simplifying quality-of-service (QoS) support. Furthermore, reliability is achieved through maintaining soft-state forwarding entries for the mobile hosts and leveraging fault detection mechanisms built in existing intra-domain routing protocols. HAWAII defaults to using Mobile IP for macromobility, thus providing a comprehensive solution for mobility support in wide-area wireless networks.",
"title": ""
},
{
"docid": "0edc89fbf770bbab2fb4d882a589c161",
"text": "A calculus is developed in this paper (Part I) and the sequel (Part 11) for obtaining bounds on delay and buffering requirements in a communication network operating in a packet switched mode under a fixed routing strategy. The theory we develop is different from traditional approaches to analyzing delay because the model we use to describe the entry of data into the network is nonprobabilistic: We suppose that the data stream entered intq the network by any given user satisfies “burstiness constraints.” A data stream is said to satisfy a burstiness constraint if the quantity of data from the stream contained in any interval of time is less than a value that depends on the length of the interval. Several network elements are defined that can be used as building blocks to model a wide variety of communication networks. Each type of network element is analyzed by assuming that the traffic entering it satisfies burstiness constraints. Under this assumption bounds are obtained on delay and buffering requirements for the network element, burstiness constraints satisfied by the traffic that exits the element are derived. Index Terms -Queueing networks, burstiness, flow control, packet switching, high speed networks.",
"title": ""
},
{
"docid": "548e1962ac4a2ea36bf90db116c4ff49",
"text": "LSTMs and other RNN variants have shown strong performance on character-level language modeling. These models are typically trained using truncated backpropagation through time, and it is common to assume that their success stems from their ability to remember long-term contexts. In this paper, we show that a deep (64-layer) transformer model (Vaswani et al. 2017) with fixed context outperforms RNN variants by a large margin, achieving state of the art on two popular benchmarks: 1.13 bits per character on text8 and 1.06 on enwik8. To get good results at this depth, we show that it is important to add auxiliary losses, both at intermediate network layers and intermediate sequence positions.",
"title": ""
},
{
"docid": "f391c56dd581d965548062944200e95f",
"text": "We present a traceability recovery method and tool based on latent semantic indexing (LSI) in the context of an artefact management system. The tool highlights the candidate links not identified yet by the software engineer and the links identified but missed by the tool, probably due to inconsistencies in the usage of domain terms in the traced software artefacts. We also present a case study of using the traceability recovery tool on software artefacts belonging to different categories of documents, including requirement, design, and testing documents, as well as code components.",
"title": ""
}
] | scidocsrr |
dec71c0883a732e0779d0029fe742db3 | Performance metrics in supply chain management | [
{
"docid": "0580342f7efb379fc417d2e5e48c4b73",
"text": "The use of System Dynamics Modeling in Supply Chain Management has only recently re-emerged after a lengthy slack period. Current research on System Dynamics Modelling in supply chain management focuses on inventory decision and policy development, time compression, demand amplification, supply chain design and integration, and international supply chain management. The paper first gives an overview of recent research work in these areas, followed by a discussion of research issues that have evolved, and presents a taxonomy of research and development in System Dynamics Modelling in supply chain management.",
"title": ""
}
] | [
{
"docid": "9091df6080e8cd531bd6a883810d7445",
"text": "Despite major scientific, medical and technological advances over the last few decades, a cure for cancer remains elusive. Disease progression is complex, including initiation and avascular growth, onset of hypoxia and acidosis due to accumulation of cells beyond normal physiological conditions, inducement of angiogenesis from the surrounding vasculature, tumour vascularization and further growth, and invasion of surrounding tissue and metastasis. Although the focus historically has been to study these events through experimental and clinical observations, mathematical modelling and simulation that enable analysis at multiple time and spatial scales have also complemented these efforts. Here, we provide an overview of this multiscale modelling focusing on the growth phase of tumours and bypassing the initial stage of tumourigenesis. While we briefly review discrete modelling, our focus is on the continuum approach. We limit the scope further by considering models of tumour progression that do not distinguish tumour cells by their age. We also do not consider immune system interactions nor do we describe models of therapy. We do discuss hybrid-modelling frameworks, where the tumour tissue is modelled using both discrete (cell-scale) and continuum (tumour-scale) elements, thus connecting the micrometre to the centimetre tumour scale. We review recent examples that incorporate experimental data into model parameters. We show that recent mathematical modelling predicts that transport limitations of cell nutrients, oxygen and growth factors may result in cell death that leads to morphological instability, providing a mechanism for invasion via tumour fingering and fragmentation. These conditions induce selection pressure for cell survivability, and may lead to additional genetic mutations. Mathematical modelling further shows that parameters that control the tumour mass shape also control its ability to invade. Thus, tumour morphology may serve as a predictor of invasiveness and treatment prognosis.",
"title": ""
},
{
"docid": "51d950dfb9f71b9c8948198c147b9884",
"text": "Collaborative filtering is the most popular approach to build recommender systems and has been successfully employed in many applications. However, it cannot make recommendations for so-called cold start users that have rated only a very small number of items. In addition, these methods do not know how confident they are in their recommendations. Trust-based recommendation methods assume the additional knowledge of a trust network among users and can better deal with cold start users, since users only need to be simply connected to the trust network. On the other hand, the sparsity of the user item ratings forces the trust-based approach to consider ratings of indirect neighbors that are only weakly trusted, which may decrease its precision. In order to find a good trade-off, we propose a random walk model combining the trust-based and the collaborative filtering approach for recommendation. The random walk model allows us to define and to measure the confidence of a recommendation. We performed an evaluation on the Epinions dataset and compared our model with existing trust-based and collaborative filtering methods.",
"title": ""
},
{
"docid": "2274f3d3dc25bec4b86988615d421f10",
"text": "Sepsis is a dangerous condition that is a leading cause of patient mortality. Treating sepsis is highly challenging, because individual patients respond very differently to medical interventions and there is no universally agreed-upon treatment for sepsis. In this work, we explore the use of continuous state-space model-based reinforcement learning (RL) to discover high-quality treatment policies for sepsis patients. Our quantitative evaluation reveals that by blending the treatment strategy discovered with RL with what clinicians follow, we can obtain improved policies, potentially allowing for better medical treatment for sepsis.",
"title": ""
},
{
"docid": "f4b4c484543cd653d2acbd2e9839d5f4",
"text": "This article offers a succinct overview of the hypothesis that the evolution of cognition could benefit from a close examination of brain changes reflected in the shape of the neurocranium. I provide both neurological and genetic evidence in support of this hypothesis, and conclude that the study of language evolution need not be regarded as a mystery.",
"title": ""
},
{
"docid": "fd1e327327068a1373e35270ef257c59",
"text": "We consider the problem of building high-level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a deep sparse autoencoder on a large dataset of images (the model has 1 billion connections, the dataset has 10 million 200×200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous SGD on a cluster with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts such as cat faces and human bodies. Starting from these learned features, we trained our network to recognize 22,000 object categories from ImageNet and achieve a leap of 70% relative improvement over the previous state-of-the-art.",
"title": ""
},
{
"docid": "f9143c2bb6c8271efa516ca54c9baef7",
"text": "In recent years several measures for the gold standard based evaluation of ontology learning were proposed. They can be distinguished by the layers of an ontology (e.g. lexical term layer and concept hierarchy) they evaluate. Judging those measures against a list of criteria, we show that there exist some measures sufficient for evaluating the lexical term layer. However, existing measures for the evaluation of concept hierarchies fail to meet basic criteria. This paper presents a new taxonomic measure which overcomes the problems of current approaches.",
"title": ""
},
{
"docid": "15ce175cc7aa263ded19c0ef344d9a61",
"text": "This work explores conditional image generation with a new image density model based on the PixelCNN architecture. The model can be conditioned on any vector, including descriptive labels or tags, or latent embeddings created by other networks. When conditioned on class labels from the ImageNet database, the model is able to generate diverse, realistic scenes representing distinct animals, objects, landscapes and structures. When conditioned on an embedding produced by a convolutional network given a single image of an unseen face, it generates a variety of new portraits of the same person with different facial expressions, poses and lighting conditions. We also show that conditional PixelCNN can serve as a powerful decoder in an image autoencoder. Additionally, the gated convolutional layers in the proposed model improve the log-likelihood of PixelCNN to match the state-ofthe-art performance of PixelRNN on ImageNet, with greatly reduced computational cost.",
"title": ""
},
{
"docid": "a86114aeee4c0bc1d6c9a761b50217d4",
"text": "OBJECTIVE\nThe purpose of this study was to investigate the effect of antidepressant treatment on hippocampal volumes in patients with major depression.\n\n\nMETHOD\nFor 38 female outpatients, the total time each had been in a depressive episode was divided into days during which the patient was receiving antidepressant medication and days during which no antidepressant treatment was received. Hippocampal gray matter volumes were determined by high resolution magnetic resonance imaging and unbiased stereological measurement.\n\n\nRESULTS\nLonger durations during which depressive episodes went untreated with antidepressant medication were associated with reductions in hippocampal volume. There was no significant relationship between hippocampal volume loss and time depressed while taking antidepressant medication or with lifetime exposure to antidepressants.\n\n\nCONCLUSIONS\nAntidepressants may have a neuroprotective effect during depression.",
"title": ""
},
{
"docid": "90564374d0c72816f930bc629f97d277",
"text": "Outlier detection is an integral component of statistical modelling and estimation. For highdimensional data, classical methods based on the Mahalanobis distance are usually not applicable. We propose an outlier detection procedure that replaces the classical minimum covariance determinant estimator with a high-breakdown minimum diagonal product estimator. The cut-off value is obtained from the asymptotic distribution of the distance, which enables us to control the Type I error and deliver robust outlier detection. Simulation studies show that the proposed method behaves well for high-dimensional data.",
"title": ""
},
{
"docid": "491ddda3cf5acf013b99cdb477acfc9e",
"text": "As we outsource more of our decisions and activities to machines with various degrees of autonomy, the question of clarifying the moral and legal status of their autonomous behaviour arises. There is also an ongoing discussion on whether artificial agents can ever be liable for their actions or become moral agents. Both in law and ethics, the concept of liability is tightly connected with the concept of ability. But as we work to develop moral machines, we also push the boundaries of existing categories of ethical competency and autonomy. This makes the question of responsibility particularly difficult. Although new classification schemes for ethical behaviour and autonomy have been discussed, these need to be worked out in far more detail. Here we address some issues with existing proposals, highlighting especially the link between ethical competency and autonomy, and the problem of anchoring classifications in an operational understanding of what we mean by a moral",
"title": ""
},
{
"docid": "2575bad473ef55281db460617e0a37c8",
"text": "Automated license plate recognition (ALPR) has been applied to identify vehicles by their license plates and is critical in several important transportation applications. In order to achieve the recognition accuracy levels typically required in the market, it is necessary to obtain properly segmented characters. A standard method, projection-based segmentation, is challenged by substantial variation across the plate in the regions surrounding the characters. In this paper a reinforcement learning (RL) method is adapted to create a segmentation agent that can find appropriate segmentation paths that avoid characters, traversing from the top to the bottom of a cropped license plate image. Then a hybrid approach is proposed, leveraging the speed and simplicity of the projection-based segmentation technique along with the power of the RL method. The results of our experiments show significant improvement over the histogram projection currently used for character segmentation.",
"title": ""
},
{
"docid": "d9aac3e00316f9970d04eb5c46d16b4c",
"text": "Cannabis (Cannabis sativa, or hemp) and its constituents-in particular the cannabinoids-have been the focus of extensive chemical and biological research for almost half a century since the discovery of the chemical structure of its major active constituent, Δ9-tetrahydrocannabinol (Δ9-THC). The plant's behavioral and psychotropic effects are attributed to its content of this class of compounds, the cannabinoids, primarily Δ9-THC, which is produced mainly in the leaves and flower buds of the plant. Besides Δ9-THC, there are also non-psychoactive cannabinoids with several medicinal functions, such as cannabidiol (CBD), cannabichromene (CBC), and cannabigerol (CBG), along with other non-cannabinoid constituents belonging to diverse classes of natural products. Today, more than 560 constituents have been identified in cannabis. The recent discoveries of the medicinal properties of cannabis and the cannabinoids in addition to their potential applications in the treatment of a number of serious illnesses, such as glaucoma, depression, neuralgia, multiple sclerosis, Alzheimer's, and alleviation of symptoms of HIV/AIDS and cancer, have given momentum to the quest for further understanding the chemistry, biology, and medicinal properties of this plant.This contribution presents an overview of the botany, cultivation aspects, and the phytochemistry of cannabis and its chemical constituents. Particular emphasis is placed on the newly-identified/isolated compounds. In addition, techniques for isolation of cannabis constituents and analytical methods used for qualitative and quantitative analysis of cannabis and its products are also reviewed.",
"title": ""
},
{
"docid": "f45231d78fb8a88cd70b4960a6d375f9",
"text": "In this article the design and the construction of an ultrawideband (UWB) 3 dB hybrid coupler are presented. The coupler is realized in broadside stripline technology to cover the operating bandwidth 0.5 - 18 GHz (more than five octaves). Detailed electromagnetic design has been carried to optimize performances according to bandwidth. The comparison between simulations and measurements validated the design approach. The first prototype guaranteed an insertion loss lower than 5 dB and a phase shift equal to 90° +/- 5° in bandwidth",
"title": ""
},
{
"docid": "a2f15d76368aa2b9c3e34eef5b6d925f",
"text": "OBJECTIVES\nTo review the sonographic features of spinal anomalies in first-trimester fetuses presenting for screening for chromosomal abnormalities.\n\n\nMETHODS\nFetuses with a spinal abnormality diagnosed prenatally or postnatally that underwent first-trimester sonographic evaluation at our institution had their clinical information retrieved and their sonograms reviewed.\n\n\nRESULTS\nA total of 21 fetuses complied with the entry criteria including eight with body stalk anomaly, seven with spina bifida, two with Vertebral, Anal, Cardiac, Tracheal, Esophageal, Renal, and Limb (VACTERL) association, and one case each of isolated kyphoscoliosis, tethered cord, iniencephaly, and sacrococcygeal teratoma. One fetus with body stalk anomaly and another with VACTERL association also had a myelomeningocele, making a total of nine cases of spina bifida in our series. Five of the nine (56%) cases with spina bifida, one of the two cases with VACTERL association, and the cases with tethered cord and sacrococcygeal teratoma were undiagnosed in the first trimester. Although increased nuchal translucency was found in seven (33%) cases, chromosomal analysis revealed only one case of aneuploidy in this series.\n\n\nCONCLUSIONS\nFetal spinal abnormalities diagnosed in the first trimester are usually severe and frequently associated with other major defects. The diagnosis of small defects is difficult and a second-trimester scan is still necessary to detect most cases of spina bifida.",
"title": ""
},
{
"docid": "6c784fc34cf7a8e700c67235e05d8cb0",
"text": "Fully automatic methods that extract lists of objects from the Web have been studied extensively. Record extraction, the first step of this object extraction process, identifies a set of Web page segments, each of which represents an individual object (e.g., a product). State-of-the-art methods suffice for simple search, but they often fail to handle more complicated or noisy Web page structures due to a key limitation -- their greedy manner of identifying a list of records through pairwise comparison (i.e., similarity match) of consecutive segments. This paper introduces a new method for record extraction that captures a list of objects in a more robust way based on a holistic analysis of a Web page. The method focuses on how a distinct tag path appears repeatedly in the DOM tree of the Web document. Instead of comparing a pair of individual segments, it compares a pair of tag path occurrence patterns (called visual signals) to estimate how likely these two tag paths represent the same list of objects. The paper introduces a similarity measure that captures how closely the visual signals appear and interleave. Clustering of tag paths is then performed based on this similarity measure, and sets of tag paths that form the structure of data records are extracted. Experiments show that this method achieves higher accuracy than previous methods.",
"title": ""
},
{
"docid": "89d736c68d2befba66a0b7d876e52502",
"text": "The optical properties of human skin, subcutaneous adipose tissue and human mucosa were measured in the wavelength range 400–2000 nm. The measurements were carried out using a commercially available spectrophotometer with an integrating sphere. The inverse adding–doubling method was used to determine the absorption and reduced scattering coefficients from the measurements.",
"title": ""
},
{
"docid": "5c31ed81a9c8d6463ce93890e38ad7b5",
"text": "IBM Watson is a cognitive computing system capable of question answering in natural languages. It is believed that IBM Watson can understand large corpora and answer relevant questions more effectively than any other question-answering system currently available. To unleash the full power of Watson, however, we need to train its instance with a large number of well-prepared question-answer pairs. Obviously, manually generating such pairs in a large quantity is prohibitively time consuming and significantly limits the efficiency of Watson’s training. Recently, a large-scale dataset of over 30 million question-answer pairs was reported. Under the assumption that using such an automatically generated dataset could relieve the burden of manual question-answer generation, we tried to use this dataset to train an instance of Watson and checked the training efficiency and accuracy. According to our experiments, using this auto-generated dataset was effective for training Watson, complementing manually crafted question-answer pairs. To the best of the authors’ knowledge, this work is the first attempt to use a large-scale dataset of automatically generated question-answer pairs for training IBM Watson. We anticipate that the insights and lessons obtained from our experiments will be useful for researchers who want to expedite Watson training leveraged by automatically generated question-answer pairs.",
"title": ""
},
{
"docid": "057a521ce1b852591a44417e788e4541",
"text": "We introduce InfraStructs, material-based tags that embed information inside digitally fabricated objects for imaging in the Terahertz region. Terahertz imaging can safely penetrate many common materials, opening up new possibilities for encoding hidden information as part of the fabrication process. We outline the design, fabrication, imaging, and data processing steps to fabricate information inside physical objects. Prototype tag designs are presented for location encoding, pose estimation, object identification, data storage, and authentication. We provide detailed analysis of the constraints and performance considerations for designing InfraStruct tags. Future application scenarios range from production line inventory, to customized game accessories, to mobile robotics.",
"title": ""
},
{
"docid": "ca4743f1f1be194f005fabffbe0b15da",
"text": "The ubiquitous webcam indicator LED is an important privacy feature which provides a visual cue that the camera is turned on. We describe how to disable the LED on a class of Apple internal iSight webcams used in some versions of MacBook laptops and iMac desktops. This enables video to be captured without any visual indication to the user and can be accomplished entirely in user space by an unprivileged (non-root) application. The same technique that allows us to disable the LED, namely reprogramming the firmware that runs on the iSight, enables a virtual machine escape whereby malware running inside a virtual machine reprograms the camera to act as a USB Human Interface Device (HID) keyboard which executes code in the host operating system. We build two proofs-of-concept: (1) an OS X application, iSeeYou, which demonstrates capturing video with the LED disabled; and (2) a virtual machine escape that launches Terminal.app and runs shell commands. To defend against these and related threats, we build an OS X kernel extension, iSightDefender, which prohibits the modification of the iSight’s firmware from user space.",
"title": ""
},
{
"docid": "7fece61e99d0b461b04bcf0dfa81639d",
"text": "The rapid advancement of robotics technology in recent years has pushed the development of a distinctive field of robotic applications, namely robotic exoskeletons. Because of the aging population, more people are suffering from neurological disorders such as stroke, central nervous system disorder, and spinal cord injury. As manual therapy seems to be physically demanding for both the patient and therapist, robotic exoskeletons have been developed to increase the efficiency of rehabilitation therapy. Robotic exoskeletons are capable of providing more intensive patient training, better quantitative feedback, and improved functional outcomes for patients compared to manual therapy. This review emphasizes treadmill-based and over-ground exoskeletons for rehabilitation. Analyses of their mechanical designs, actuation systems, and integrated control strategies are given priority because the interactions between these components are crucial for the optimal performance of the rehabilitation robot. The review also discusses the limitations of current exoskeletons and technical challenges faced in exoskeleton development. A general perspective of the future development of more effective robot exoskeletons, specifically real-time biological synergy-based exoskeletons, could help promote brain plasticity among neurologically impaired patients and allow them to regain normal walking ability.",
"title": ""
}
] | scidocsrr |
e182ef6081b4711ffab5d0ec4d8fa340 | Knowledge management in software engineering - describing the process | [
{
"docid": "a2047969c4924a1e93b805b4f7d2402c",
"text": "Knowledge is a resource that is valuable to an organization's ability to innovate and compete. It exists within the individual employees, and also in a composite sense within the organization. According to the resourcebased view of the firm (RBV), strategic assets are the critical determinants of an organization's ability to maintain a sustainable competitive advantage. This paper will combine RBV theory with characteristics of knowledge to show that organizational knowledge is a strategic asset. Knowledge management is discussed frequently in the literature as a mechanism for capturing and disseminating the knowledge that exists within the organization. This paper will also explain practical considerations for implementation of knowledge management principles.",
"title": ""
}
] | [
{
"docid": "94c6f94e805a366c6fa6f995f13a92ba",
"text": "Unusual site deep vein thrombosis (USDVT) is an uncommon form of venous thromboembolism (VTE) with heterogeneity in pathophysiology and clinical features. While the need for anticoagulation treatment is generally accepted, there is little data on optimal USDVT treatment. The TRUST study aimed to characterize the epidemiology, treatment and outcomes of USDVT. From 2008 to 2012, 152 patients were prospectively enrolled at 4 Canadian centers. After baseline, patients were followed at 6, 12 and 24months. There were 97 (64%) cases of splanchnic, 33 (22%) cerebral, 14 (9%) jugular, 6 (4%) ovarian and 2 (1%) renal vein thrombosis. Mean age was 52.9years and 113 (74%) cases were symptomatic. Of 72 (47%) patients tested as part of clinical care, 22 (31%) were diagnosed with new thrombophilia. Of 138 patients evaluated in follow-up, 66 (48%) completed at least 6months of anticoagulation. Estrogen exposure or inflammatory conditions preceding USDVT were commonly associated with treatment discontinuation before 6months, while previous VTE was associated with continuing anticoagulation beyond 6months. During follow-up, there were 22 (16%) deaths (20 from cancer), 4 (3%) cases of recurrent VTE and no fatal bleeding events. Despite half of USDVT patients receiving <6months of anticoagulation, the rate of VTE recurrence was low and anticoagulant treatment appears safe. Thrombophilia testing was common and thrombophilia prevalence was high. Further research is needed to determine the optimal investigation and management of USDVT.",
"title": ""
},
{
"docid": "27a3c368176ead25ed653d696648f244",
"text": "The growing proliferation in solar deployment, especially at distribution level, has made the case for power system operators to develop more accurate solar forecasting models. This paper proposes a solar photovoltaic (PV) generation forecasting model based on multi-level solar measurements and utilizing a nonlinear autoregressive with exogenous input (NARX) model to improve the training and achieve better forecasts. The proposed model consists of four stages of data preparation, establishment of fitting model, model training, and forecasting. The model is tested under different weather conditions. Numerical simulations exhibit the acceptable performance of the model when compared to forecasting results obtained from two-level and single-level studies.",
"title": ""
},
{
"docid": "4a811a48f913e1529f70937c771d01da",
"text": "An interesting research problem in our age of Big Data is that of determining provenance. Granular evaluation of provenance of physical goods--e.g. tracking ingredients of a pharmaceutical or demonstrating authenticity of luxury goods--has often not been possible with today's items that are produced and transported in complex, inter-organizational, often internationally-spanning supply chains. Recent adoption of Internet of Things and Blockchain technologies holds promise for better supply chain provenance. We are particularly interested in the blockchain as many favoured use cases of blockchain are for provenance tracking. We are also interested in applying ontologies as there has been some work done on knowledge provenance, traceability, and food provenance using ontologies. In this paper, we make a case for why ontologies can contribute to blockchain design. To support this case, we analyze a traceability ontology and translate some of its representations to smart contracts that execute a provenance trace and enforce traceability constraints on the Ethereum blockchain platform.",
"title": ""
},
{
"docid": "7bef5a19f6d8f71d4aa12194dd02d0c4",
"text": "To build a natural sounding speech synthesis system, it is essential that the text processing component produce an appropriate sequence of phonemic units corresponding to an arbitrary input text. In this paper we discuss our efforts in addressing the issues of Font-to-Akshara mapping, pronunciation rules for Aksharas, text normalization in the context of building text-to-speech systems in Indian languages.",
"title": ""
},
{
"docid": "4b0230c640cc85a0f1f23c0cb60d5325",
"text": "Natural language understanding research has recently shifted towards complex Machine Learning and Deep Learning algorithms. Such models often outperform significantly their simpler counterparts. However, their performance relies on the availability of large amounts of labeled data, which are rarely available. To tackle this problem, we propose a methodology for extending training datasets to arbitrarily big sizes and training complex, data-hungry models using weak supervision. We apply this methodology on biomedical relationship extraction, a task where training datasets are excessively time-consuming and expensive to create, yet has a major impact on downstream applications such as drug discovery. We demonstrate in a small-scale controlled experiment that our method consistently enhances the performance of an LSTM network, with performance improvements comparable to hand-labeled training data. Finally, we discuss the optimal setting for applying weak supervision using this methodology.",
"title": ""
},
{
"docid": "1b20c242815b26533731308cb42ac054",
"text": "Amnesic patients demonstrate by their performance on a serial reaction time task that they learned a repeating spatial sequence despite their lack of awareness of the repetition (Nissen & Bullemer, 1987). In the experiments reported here, we investigated this form of procedural learning in normal subjects. A subgroup of subjects showed substantial procedural learning of the sequence in the absence of explicit declarative knowledge of it. Their ability to generate the sequence was effectively at chance and showed no savings in learning. Additional amounts of training increased both procedural and declarative knowledge of the sequence. Development of knowledge in one system seems not to depend on knowledge in the other. Procedural learning in this situation is neither solely perceptual nor solely motor. The learning shows minimal transfer to a situation employing the same motor sequence.",
"title": ""
},
{
"docid": "c0d8842983a2d7952de1c187a80479ac",
"text": "Two new topologies of three-phase segmented rotor switched reluctance machine (SRM) that enable the use of standard voltage source inverters (VSIs) for their operation are presented. The topologies have shorter end-turn length and axial length compared to SRM topologies that use three-phase inverters; compared to the conventional SRM (CSRM), these new topologies have the advantage of shorter flux paths that results in lower core losses. FEA based optimization has been performed for a given design specification. The new concentrated winding segmented SRMs demonstrate competitive performance with three-phase standard inverters compared to CSRM.",
"title": ""
},
{
"docid": "ac040c0c04351ea6487ea6663688ebd6",
"text": "This paper presents the conceptual design, detailed development and flight testing of AtlantikSolar, a 5.6m-wingspan solar-powered Low-Altitude Long-Endurance (LALE) Unmanned Aerial Vehicle (UAV) designed and built at ETH Zurich. The UAV is required to provide perpetual endurance at a geographic latitude of 45°N in a 4-month window centered around June 21st. An improved conceptual design method is presented and applied to maximize the perpetual flight robustness with respect to local meteorological disturbances such as clouds or winds. Airframe, avionics hardware, state estimation and control method development for autonomous flight operations are described. Flight test results include a 12-hour flight relying solely on batteries to replicate night-flight conditions. In addition, we present flight results from Search-And-Rescue field trials where a camera and processing pod were mounted on the aircraft to create high-fidelity 3D-maps of a simulated disaster area.",
"title": ""
},
{
"docid": "fadbfcc98ad512dd788f6309d0a932af",
"text": "Thanks to the convergence of pervasive mobile communications and fast-growing online social networking, mobile social networking is penetrating into our everyday life. Aiming to develop a systematic understanding of mobile social networks, in this paper we exploit social ties in human social networks to enhance cooperative device-to-device (D2D) communications. Specifically, as handheld devices are carried by human beings, we leverage two key social phenomena, namely social trust and social reciprocity, to promote efficient cooperation among devices. With this insight, we develop a coalitional game-theoretic framework to devise social-tie-based cooperation strategies for D2D communications. We also develop a network-assisted relay selection mechanism to implement the coalitional game solution, and show that the mechanism is immune to group deviations, individually rational, truthful, and computationally efficient. We evaluate the performance of the mechanism by using real social data traces. Simulation results corroborate that the proposed mechanism can achieve significant performance gain over the case without D2D cooperation.",
"title": ""
},
{
"docid": "3854ead43024ebc6ac942369a7381d71",
"text": "During the past two decades, the prevalence of obesity in children has risen greatly worldwide. Obesity in childhood causes a wide range of serious complications, and increases the risk of premature illness and death later in life, raising public-health concerns. Results of research have provided new insights into the physiological basis of bodyweight regulation. However, treatment for childhood obesity remains largely ineffective. In view of its rapid development in genetically stable populations, the childhood obesity epidemic can be primarily attributed to adverse environmental factors for which straightforward, if politically difficult, solutions exist.",
"title": ""
},
{
"docid": "9b1bf9930b378232d03c43c007d1c151",
"text": "Matrix factorization has found incredible success and widespread application as a collaborative filtering based approach to recommendations. Unfortunately, incorporating additional sources of evidence, especially ones that are incomplete and noisy, is quite difficult to achieve in such models, however, is often crucial for obtaining further gains in accuracy. For example, additional information about businesses from reviews, categories, and attributes should be leveraged for predicting user preferences, even though this information is often inaccurate and partially-observed. Instead of creating customized methods that are specific to each type of evidences, in this paper we present a generic approach to factorization of relational data that collectively models all the relations in the database. By learning a set of embeddings that are shared across all the relations, the model is able to incorporate observed information from all the relations, while also predicting all the relations of interest. Our evaluation on multiple Amazon and Yelp datasets demonstrates effective utilization of additional information for held-out preference prediction, but further, we present accurate models even for the cold-starting businesses and products for which we do not observe any ratings or reviews. We also illustrate the capability of the model in imputing missing information and jointly visualizing words, categories, and attribute factors.",
"title": ""
},
{
"docid": "212e9306654141360a7d240a30af5c4a",
"text": "In this paper, we introduce a stereo vision based CNN tracker for a person following robot. The tracker is able to track a person in real-time using an online convolutional neural network. Our approach enables the robot to follow a target under challenging situations such as occlusions, appearance changes, pose changes, crouching, illumination changes or people wearing the same clothes in different environments. The robot follows the target around corners even when it is momentarily unseen by estimating and replicating the local path of the target. We build an extensive dataset for person following robots under challenging situations. We evaluate the proposed system quantitatively by comparing our tracking approach with existing real-time tracking algorithms.",
"title": ""
},
{
"docid": "0ac679740e0e3911af04be9464f76a7d",
"text": "Max-Min Fairness is a flexible resource allocation mechanism used in most datacenter schedulers. However, an increasing number of jobs have hard placement constraints, restricting the machines they can run on due to special hardware or software requirements. It is unclear how to define, and achieve, max-min fairness in the presence of such constraints. We propose Constrained Max-Min Fairness (CMMF), an extension to max-min fairness that supports placement constraints, and show that it is the only policy satisfying an important property that incentivizes users to pool resources. Optimally computing CMMF is challenging, but we show that a remarkably simple online scheduler, called Choosy, approximates the optimal scheduler well. Through experiments, analysis, and simulations, we show that Choosy on average differs 2% from the optimal CMMF allocation, and lets jobs achieve their fair share quickly.",
"title": ""
},
{
"docid": "ec6fb21b7ae27cc4df67f3d6745ffe34",
"text": "In today's world data is growing very rapidly, which we call big data. To deal with these large data sets, we currently use NoSQL databases, as relational databases are not capable of handling such data. These schema-less NoSQL databases allow us to handle unstructured data. In this paper we compare two NoSQL databases, MongoDB and CouchBase Server, in terms of image storage and retrieval. These two databases were selected because both fall under the document-store category. Major applications such as social media, traffic analysis, and criminal databases require image storage. The motivation behind this paper is to compare database performance in terms of the time required to store and retrieve images. We first describe the advantages of NoSQL databases over SQL, then give a brief overview of MongoDB and CouchBase, and finally compare the time required to insert and retrieve images of various sizes using a Java front-end tool.",
"title": ""
},
{
"docid": "1d53b01ee1a721895a17b7d0f3535a28",
"text": "We present a suite of algorithms for self-organization of wireless sensor networks, in which there is a scalably large number of mainly static nodes with highly constrained energy resources. The protocols further support slow mobility by a subset of the nodes, energy-efficient routing, and formation of ad hoc subnetworks for carrying out cooperative signal processing functions among a set of the nodes.",
"title": ""
},
{
"docid": "aeb3e0b089e658b532b3ed6c626898dd",
"text": "Semantics is seen as the key ingredient in the next phase of the Web infrastructure as well as the next generation of information systems applications. In this context, we review some of the reservations expressed about the viability of the Semantic Web. We respond to these by identifying a Semantic Technology that supports the key capabilities also needed to realize the Semantic Web vision, namely representing, acquiring and utilizing knowledge. Given that scalability is a key challenge, we briefly review our observations from developing three classes of real world applications and corresponding technology components: search/browsing, integration, and analytics. We distinguish this proven technology from some parts of the Semantic Web approach and offer subjective remarks which we hope will foster additional debate.",
"title": ""
},
{
"docid": "72a5db33e2ba44880b3801987b399c3d",
"text": "Over the last decade, the ever increasing world-wide demand for early detection of breast cancer at many screening sites and hospitals has resulted in the need for new research avenues. According to the World Health Organization (WHO), early detection of cancer greatly increases the chances of taking the right decision on a successful treatment plan. Computer-Aided Diagnosis (CAD) systems are applied widely in the detection and differential diagnosis of many different kinds of abnormalities. Therefore, improving the accuracy of a CAD system has become one of the major research areas. In this paper, a CAD scheme for detection of breast cancer has been developed using a deep belief network unsupervised path followed by a back-propagation supervised path. The construction is a back-propagation neural network with the Levenberg-Marquardt learning function, while weights are initialized from the deep belief network path (DBN-NN). Our technique was tested on the Wisconsin Breast Cancer Dataset (WBCD). The classifier gives an accuracy of 99.68%, indicating promising results over previously published studies. The proposed system provides an effective classification model for breast cancer. In addition, we examined the architecture at several train-test partitions.",
"title": ""
},
{
"docid": "2c4fed71ee9d658516b017a924ad6589",
"text": "As the concept of friction stir welding is relatively new, there are many areas which need thorough investigation to optimize the process and make it commercially viable. In order to obtain the desired mechanical properties, certain process parameters, such as rotational and translational speeds, tool tilt angle, and tool geometry, are to be controlled. Aluminum alloys of the 5xxx series and their welded joints show good resistance to corrosion in sea water. Here, a literature survey has been carried out for the friction stir welding of 5xxx series aluminum alloys.",
"title": ""
},
{
"docid": "77e5724ff3b8984a1296731848396701",
"text": "Temporal networks, i.e., networks in which the interactions among a set of elementary units change over time, can be modelled in terms of time-varying graphs, which are time-ordered sequences of graphs over a set of nodes. In such graphs, the concepts of node adjacency and reachability crucially depend on the exact temporal ordering of the links. Consequently, all the concepts and metrics proposed and used for the characterisation of static complex networks have to be redefined or appropriately extended to time-varying graphs, in order to take into account the effects of time ordering on causality. In this chapter we discuss how to represent temporal networks and we review the definitions of walks, paths, connectedness and connected components valid for graphs in which the links fluctuate over time. We then focus on temporal node–node distance, and we discuss how to characterise link persistence and the temporal small-world behaviour in this class of networks. Finally, we discuss the extension of classic centrality measures, including closeness, betweenness and spectral centrality, to the case of time-varying graphs, and we review the work on temporal motifs analysis and the definition of modularity for temporal graphs.",
"title": ""
}
] | scidocsrr |
9dcef20242cd852b9f363fd031d641ec | Interactive Instance-based Evaluation of Knowledge Base Question Answering | [
{
"docid": "1fd9db81e41fc3b9a76a52cc9a0618c1",
"text": "Semantic parsing is a rich fusion of the logical and the statistical worlds.",
"title": ""
},
{
"docid": "9b288ed3a6079bee5ed3154b1aab296e",
"text": "We introduce ParlAI (pronounced “parlay”), an open-source software platform for dialog research implemented in Python, available at http://parl.ai. Its goal is to provide a unified framework for sharing, training and testing dialog models; integration of Amazon Mechanical Turk for data collection, human evaluation, and online/reinforcement learning; and a repository of machine learning models for comparing with others’ models, and improving upon existing architectures. Over 20 tasks are supported in the first release, including popular datasets such as SQuAD, bAbI tasks, MCTest, WikiQA, QACNN, QADailyMail, CBT, bAbI Dialog, Ubuntu, OpenSubtitles and VQA. Several models are integrated, including neural models such as memory networks, seq2seq and attentive LSTMs.",
"title": ""
}
] | [
{
"docid": "b7ae9cae900253f270d43c4b34e68c57",
"text": "In this paper, a complete voiceprint recognition system based on MATLAB was realized, including speech processing and feature extraction in the early stage, and model training and recognition in the later stage. For speech processing and feature extraction, the Mel Frequency Cepstrum Coefficient (MFCC) was taken as the feature parameter. For speaker modelling, the DTW model was adopted to reflect the voiceprint characteristics of speech, converting voiceprint recognition into speaker speech data evaluation, and breaking complex speech training and matching down into model parameter training and probability calculation. Simulation experiment results show that this system is effective in recognizing voiceprints.",
"title": ""
},
{
"docid": "66610cf27a67760f6625e2fe4bbc7783",
"text": "UNLABELLED\nYale Image Finder (YIF) is a publicly accessible search engine featuring a new way of retrieving biomedical images and associated papers based on the text carried inside the images. Image queries can also be issued against the image caption, as well as words in the associated paper abstract and title. A typical search scenario using YIF is as follows: a user provides few search keywords and the most relevant images are returned and presented in the form of thumbnails. Users can click on the image of interest to retrieve the high resolution image. In addition, the search engine will provide two types of related images: those that appear in the same paper, and those from other papers with similar image content. Retrieved images link back to their source papers, allowing users to find related papers starting with an image of interest. Currently, YIF has indexed over 140 000 images from over 34 000 open access biomedical journal papers.\n\n\nAVAILABILITY\nhttp://krauthammerlab.med.yale.edu/imagefinder/",
"title": ""
},
{
"docid": "40fcf74d2f15757ac3c9b401c05a4fb9",
"text": "Phones with some of the capabilities of modern computers also have the same kind of drawbacks. These phones are commonly referred to as smartphones. They have both phone and personal digital assistant (PDA) functionality. Typical to these devices is to have a wide selection of different connectivity options from general packet radio service (GPRS) data transfer to multi media messages (MMS) and wireless local area network (WLAN) capabilities. They also have standardized operating systems, which makes smartphones a viable platform for malware writers. Since the design of the operating systems is recent, many common security holes and vulnerabilities have been taken into account during the design. However, these precautions have not fully protected these devices. Even now, when smartphones are not that common, there is a handful of viruses for them. In this paper we will discuss some of the most typical viruses in the mobile environment and propose guidelines and predictions for the future.",
"title": ""
},
{
"docid": "a791f5339b1a49567581cd64a1c678c8",
"text": "Making data more connected is one of the goals of Semantic Technology. Therefore, the relational data model, as one important type of data resource, needs to be mapped and converted to a graph model. In this paper we focus on mapping and converting without semantic loss, by considering the semantic abstraction of the real world, which has been ignored in some previous research. As a graph schema model, it can be implemented in a graph database or as linked data in RDF/OWL format. This approach argues that relationships should receive more attention during mapping and converting because a gap in semantic abstraction is often found during those processes. Our small experiment shows that our idea can map and convert a relational model to a graph model without semantic loss.",
"title": ""
},
{
"docid": "f0958d2c952c7140c998fa13a2bf4374",
"text": "OBJECTIVE\nThe objective of this study is to outline explicit criteria for assessing the contribution of qualitative empirical studies in health and medicine, leading to a hierarchy of evidence specific to qualitative methods.\n\n\nSTUDY DESIGN AND SETTING\nThis paper arose from a series of critical appraisal exercises based on recent qualitative research studies in the health literature. We focused on the central methodological procedures of qualitative method (defining a research framework, sampling and data collection, data analysis, and drawing research conclusions) to devise a hierarchy of qualitative research designs, reflecting the reliability of study conclusions for decisions made in health practice and policy.\n\n\nRESULTS\nWe describe four levels of a qualitative hierarchy of evidence-for-practice. The least likely studies to produce good evidence-for-practice are single case studies, followed by descriptive studies that may provide helpful lists of quotations but do not offer detailed analysis. More weight is given to conceptual studies that analyze all data according to conceptual themes but may be limited by a lack of diversity in the sample. Generalizable studies using conceptual frameworks to derive an appropriately diversified sample with analysis accounting for all data are considered to provide the best evidence-for-practice. Explicit criteria and illustrative examples are described for each level.\n\n\nCONCLUSION\nA hierarchy of evidence-for-practice specific to qualitative methods provides a useful guide for the critical appraisal of papers using these methods and for defining the strength of evidence as a basis for decision making and policy generation.",
"title": ""
},
{
"docid": "d1515b3c475989e3c3584e02c0d5c329",
"text": "Sexting has received increasing scholarly and media attention. In particular, minors’ engagement in this behaviour is a source of concern. As adolescents are highly sensitive about their image among peers and prone to peer influence, the present study implemented the prototype willingness model in order to assess how perceptions of peers engaging in sexting possibly influence adolescents’ willingness to send sexting messages. A survey was conducted among 217 15- to 19-year-olds. A total of 18% of respondents had engaged in sexting in the 2 months preceding the study. Analyses further revealed that the subjective norm was the strongest predictor of sexting intention, followed by behavioural willingness and attitude towards sexting. Additionally, the more favourably young people evaluated the prototype of a person engaging in sexting and the higher they assessed their similarity with this prototype, the more they were willing to send sexting messages. Differences were also found based on gender, relationship status and need for popularity.",
"title": ""
},
{
"docid": "977f7723cde3baa1d98ca99cd9ed8881",
"text": "Identity crime is well known, established, and costly. Identity crime is the term used to refer to all types of crime in which someone wrongfully obtains and uses another person's personal data in a way that involves fraud or deception, typically for economic gain. Forgery and the use of fraudulent identity documents are major enablers of identity fraud, which has also affected e-commerce. It is increasing significantly with the development of modern technology and the global superhighways of communication, resulting in the loss of large amounts of money worldwide each year. Alongside transactions, application domains such as credit applications are also hit by this crime. These are growing concerns not only for governmental bodies but also for business organizations all over the world. This paper gives a brief summary of identity fraud and discusses various data mining techniques used to combat it.",
"title": ""
},
{
"docid": "329420b8b13e8c315d341e382419315a",
"text": "The aim of this research is to design an intelligent system that addresses the problem of real-time localization and navigation of the visually impaired (VI) in an indoor environment using a monocular camera. Systems developed so far for the VI either use many cameras (stereo and monocular) integrated with other sensors or use very complex algorithms that are computationally expensive. In this research work, a computationally less expensive integrated system is proposed that combines imaging geometry, Visual Odometry (VO), and Object Detection (OD) with Distance-Depth (D-D) estimation algorithms for precise navigation and localization, utilizing a single monocular camera as the only sensor. The developed algorithm is tested on both the standard Karlsruhe dataset and recorded indoor-environment datasets. Tests have been carried out in real time using a smartphone camera that captures image data of the environment as the person moves; the data is sent over Wi-Fi for further processing to the MATLAB software model running on an Intel i7 processor. The algorithm provides accurate results for real-time navigation in the environment, with audio feedback about the person's location. The trajectory of the navigation is expressed on an arbitrary scale. Object detection based localization is accurate, and the D-D estimation provides distance and depth measurements with an accuracy of 94–98%.",
"title": ""
},
{
"docid": "39a59eac80c6f4621971399dde2fbb7f",
"text": "Social media sites such as Flickr, YouTube, and Facebook host substantial amounts of user-contributed materials (e.g., photographs, videos, and textual content) for a wide variety of real-world events. These range from widely known events, such as the presidential inauguration, to smaller, community-specific events, such as annual conventions and local gatherings. By identifying these events and their associated user-contributed social media documents, which is the focus of this paper, we can greatly improve local event browsing and search in state-of-the-art search engines. To address our problem of focus, we exploit the rich “context” associated with social media content, including user-provided annotations (e.g., title, tags) and automatically generated information (e.g., content creation time). We form a variety of representations of social media documents using different context dimensions, and combine these dimensions in a principled way into a single clustering solution—where each document cluster ideally corresponds to one event—using a weighted ensemble approach. We evaluate our approach on a large-scale, real-world dataset of event images, and report promising performance with respect to several baseline approaches. Our preliminary experiments suggest that our ensemble approach identifies events, and their associated images, more effectively than the state-of-the-art strategies on which we build.",
"title": ""
},
{
"docid": "d717a5955faf08583b946385cf9f41d3",
"text": "Spasticity is a prevalent and potentially disabling symptom common in individuals with multiple sclerosis. Adequate evaluation and management of spasticity requires a careful assessment of the patient's history to determine functional impact of spasticity and potential exacerbating factors, and physical examination to determine the extent of the condition and culpable muscles. A host of options for spasticity management are available: therapeutic exercise, physical modalities, complementary/alternative medicine interventions, oral medications, chemodenervation, and implantation of an intrathecal baclofen pump. Choice of treatment hinges on a combination of the extent of symptoms, patient preference, and availability of services.",
"title": ""
},
{
"docid": "5b56288bb7b49f18148f28798cfd8129",
"text": "According to World Health Organization (WHO) estimations, one out of five adults worldwide will be obese by 2025. Worldwide obesity has doubled since 1980. In fact, more than 1.9 billion adults (39%) aged 18 years and older were overweight and over 600 million (13%) of these were obese in 2014, and 42 million children under the age of five were overweight or obese in 2014. Obesity is a top public health problem due to its associated morbidity and mortality. This paper reviews the main techniques for measuring the level of obesity and body fat percentage, and explains the complications they entail for the individual's quality of life and longevity, as well as the significant cost to healthcare systems. Researchers and developers are adapting existing technology, such as smartphones and wearable gadgets, to help control obesity, including by promoting a healthy eating culture and an active lifestyle. The paper also presents a comprehensive study of the most used mobile applications and Wireless Body Area Networks focused on controlling obesity and overweight. Finally, this paper proposes an intelligent architecture that takes into account both physiological and cognitive aspects to reduce the degree of obesity and overweight.",
"title": ""
},
{
"docid": "d60f7144d7321567136aabdf8cc1ea04",
"text": "The higher variability introduced by distributed generation leads to fast changes in the aggregate load composition, and thus in the power response during voltage variations. The smart transformer, a power electronics-based distribution transformer with advanced control functionalities, can exploit the load dependence on voltage to provide services to the distribution and transmission grids. In this paper, two possible applications are proposed: 1) smart transformer overload control by means of a voltage control action, and 2) the soft load reduction method, which reduces load consumption while avoiding load disconnection. These services depend on the correct identification of the load dependence on voltage, which the smart transformer evaluates in real time based on load measurements. The effect of distributed generation on net load sensitivity has been derived and demonstrated with a control hardware-in-the-loop evaluation by means of a real-time digital simulator.",
"title": ""
},
{
"docid": "85bc241c03d417099aa155766e6a1421",
"text": "Passwords continue to prevail on the web as the primary method for user authentication despite their well-known security and usability drawbacks. Password managers offer some improvement without requiring server-side changes. In this paper, we evaluate the security of dual-possession authentication, an authentication approach offering encrypted storage of passwords and theft-resistance without the use of a master password. We further introduce Tapas, a concrete implementation of dual-possession authentication leveraging a desktop computer and a smartphone. Tapas requires no server-side changes to websites, no master password, and protects all the stored passwords in the event either the primary or secondary device (e.g., computer or phone) is stolen. To evaluate the viability of Tapas as an alternative to traditional password managers, we perform a 30 participant user study comparing Tapas to two configurations of Firefox's built-in password manager. We found users significantly preferred Tapas. We then improve Tapas by incorporating feedback from this study, and reevaluate it with an additional 10 participants.",
"title": ""
},
{
"docid": "001d2da1fbdaf2c49311f6e68b245076",
"text": "Lack of physical activity is a serious health concern for individuals who are visually impaired as they have fewer opportunities and incentives to engage in physical activities that provide the amounts and kinds of stimulation sufficient to maintain adequate fitness and to support a healthy standard of living. Exergames are video games that use physical activity as input and which have the potential to change sedentary lifestyles and associated health problems such as obesity. We identify that exergames have a number properties that could overcome the barriers to physical activity that individuals with visual impairments face. However, exergames rely upon being able to perceive visual cues that indicate to the player what input to provide. This paper presents VI Tennis, a modified version of a popular motion sensing exergame that explores the use of vibrotactile and audio cues. The effectiveness of providing multimodal (tactile/audio) versus unimodal (audio) cues was evaluated with a user study with 13 children who are blind. Children achieved moderate to vigorous levels of physical activity- the amount required to yield health benefits. No significant difference in active energy expenditure was found between both versions, though children scored significantly better with the tactile/audio version and also enjoyed playing this version more, which emphasizes the potential of tactile/audio feedback for engaging players for longer periods of time.",
"title": ""
},
{
"docid": "940e7dc630b7dcbe097ade7abb2883a4",
"text": "Modern object detection methods typically rely on bounding box proposals as input. While initially popularized in the 2D case, this idea has received increasing attention for 3D bounding boxes. Nevertheless, existing 3D box proposal techniques all assume having access to depth as input, which is unfortunately not always available in practice. In this paper, we therefore introduce an approach to generating 3D box proposals from a single monocular RGB image. To this end, we develop an integrated, fully differentiable framework that inherently predicts a depth map, extracts a 3D volumetric scene representation and generates 3D object proposals. At the core of our approach lies a novel residual, differentiable truncated signed distance function module, which, accounting for the relatively low accuracy of the predicted depth map, extracts a 3D volumetric representation of the scene. Our experiments on the standard NYUv2 dataset demonstrate that our framework lets us generate high-quality 3D box proposals and that it outperforms the two-stage technique consisting of successively performing state-of-the-art depth prediction and depthbased 3D proposal generation.",
"title": ""
},
{
"docid": "645f49ff21d31bb99cce9f05449df0d7",
"text": "The growing popularity of the JSON format has fueled increased interest in loading and processing JSON data within analytical data processing systems. However, in many applications, JSON parsing dominates performance and cost. In this paper, we present a new JSON parser called Mison that is particularly tailored to this class of applications, by pushing down both projection and filter operators of analytical queries into the parser. To achieve these features, we propose to deviate from the traditional approach of building parsers using finite state machines (FSMs). Instead, we follow a two-level approach that enables the parser to jump directly to the correct position of a queried field without having to perform expensive tokenizing steps to find the field. At the upper level, Mison speculatively predicts the logical locations of queried fields based on previously seen patterns in a dataset. At the lower level, Mison builds structural indices on JSON data to map logical locations to physical locations. Unlike all existing FSM-based parsers, building structural indices converts control flow into data flow, thereby largely eliminating inherently unpredictable branches in the program and exploiting the parallelism available in modern processors. We experimentally evaluate Mison using representative real-world JSON datasets and the TPC-H benchmark, and show that Mison produces significant performance benefits over the best existing JSON parsers; in some cases, the performance improvement is over one order of magnitude.",
"title": ""
},
{
"docid": "5bf9aeb37fc1a82420b2ff4136f547d0",
"text": "Visual Question Answering (VQA) is a popular research problem that involves inferring answers to natural language questions about a given visual scene. Recent neural network approaches to VQA use attention to select relevant image features based on the question. In this paper, we propose a novel Dual Attention Network (DAN) that not only attends to image features, but also to question features. The selected linguistic and visual features are combined by a recurrent model to infer the final answer. We experiment with different question representations and do several ablation studies to evaluate the model on the challenging VQA dataset.",
"title": ""
},
{
"docid": "93af342862b02d12463fc452834b6717",
"text": "The posterior cerebral artery (PCA) has been noted in the literature to have anatomical variations, specifically fenestration. Cerebral arteries with fenestrations are uncommon, especially when associated with other vascular pathologies. We report here a case of fenestrations within the P1 segment of the right PCA associated with a right middle cerebral artery (MCA) aneurysm in an elderly male who presented with a new onset of headaches. The patient was treated with vascular clipping of the MCA and has recovered well. Identifying anatomical variations with appropriate imaging is of particular importance in neuro-interventional procedures, as it may have an impact on the procedure itself and consequently on post-interventional outcomes. Categories: Neurology, Neurosurgery",
"title": ""
},
{
"docid": "3361e6c7a448e69a73e8b3e879815386",
"text": "The neck is not only the first anatomical area to show aging but also contributes to the persona of the individual. Understanding the aging process of the neck is therefore essential for neck rejuvenation. Multiple neck rejuvenation techniques have been reported in the literature. In 1974, Skoog [1] described the anatomy of the superficial musculoaponeurotic system (SMAS) and its role in the aging of the neck. Recently, many patients have expressed interest in minimally invasive surgery with a low risk of complications and a short recovery period. The use of threads for neck rejuvenation and the concept of the suture suspension neck lift have become widespread as convenient and effective procedures; nevertheless, complications have also been reported, such as recurrence, inadequate correction, and palpability of the sutures. In this study, we analyzed a new type of thread lift: the elastic lift, which uses elastic thread (Elasticum; Korpo SRL, Genova, Italy). We already use this new technique for the midface lift and can confirm its efficacy and safety in that context. The purpose of this study was to evaluate the outcomes and safety of the elastic lift technique for lifting the neck region.",
"title": ""
},
{
"docid": "33ad7f5618d356b5d28b887f30e3ba84",
"text": "BACKGROUND\nHaving cancer may result in extensive emotional, physical and social suffering. Music interventions have been used to alleviate symptoms and treatment side effects in cancer patients.\n\n\nOBJECTIVES\nTo compare the effects of music therapy or music medicine interventions and standard care with standard care alone, or standard care and other interventions in patients with cancer.\n\n\nSEARCH STRATEGY\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2010, Issue 10), MEDLINE, EMBASE, CINAHL, PsycINFO, LILACS, Science Citation Index, CancerLit, www.musictherapyworld.net, CAIRSS, Proquest Digital Dissertations, ClinicalTrials.gov, Current Controlled Trials, and the National Research Register. All databases were searched from their start date to September 2010. We handsearched music therapy journals and reference lists and contacted experts. There was no language restriction.\n\n\nSELECTION CRITERIA\nWe included all randomized controlled trials (RCTs) and quasi-randomized trials of music interventions for improving psychological and physical outcomes in patients with cancer. Participants undergoing biopsy and aspiration for diagnostic purposes were excluded.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently extracted the data and assessed the risk of bias. Where possible, results were presented in meta analyses using mean differences and standardized mean differences. Post-test scores were used. In cases of significant baseline difference, we used change scores.\n\n\nMAIN RESULTS\nWe included 30 trials with a total of 1891 participants. We included music therapy interventions, offered by trained music therapists, as well as listening to pre-recorded music, offered by medical staff. 
The results suggest that music interventions may have a beneficial effect on anxiety in people with cancer, with a reported average anxiety reduction of 11.20 units (95% confidence interval (CI) -19.59 to -2.82, P = 0.009) on the STAI-S scale and -0.61 standardized units (95% CI -0.97 to -0.26, P = 0.0007) on other anxiety scales. Results also suggested a positive impact on mood (standardised mean difference (SMD) = 0.42, 95% CI 0.03 to 0.81, P = 0.03), but no support was found for depression.Music interventions may lead to small reductions in heart rate, respiratory rate, and blood pressure. A moderate pain-reducing effect was found (SMD = -0.59, 95% CI -0.92 to -0.27, P = 0.0003), but no strong evidence was found for enhancement of fatigue or physical status. The pooled estimate of two trials suggested a beneficial effect of music therapy on patients' quality of life (QoL) (SMD = 1.02, 95% CI 0.58 to 1.47, P = 0.00001).No conclusions could be drawn regarding the effect of music interventions on distress, body image, oxygen saturation level, immunologic functioning, spirituality, and communication outcomes.Seventeen trials used listening to pre-recorded music and 13 trials used music therapy interventions that actively engaged the patients. Not all studies included the same outcomes and due to the small number of studies per outcome, we could not compare the effectiveness of music medicine interventions with that of music therapy interventions.\n\n\nAUTHORS' CONCLUSIONS\nThis systematic review indicates that music interventions may have beneficial effects on anxiety, pain, mood, and QoL in people with cancer. Furthermore, music may have a small effect on heart rate, respiratory rate, and blood pressure. Most trials were at high risk of bias and, therefore, these results need to be interpreted with caution.",
"title": ""
}
] | scidocsrr |
d7f41168e016d53e714ede27eb6a19ba | Characteristics of knowledge, people engaged in knowledge transfer and knowledge stickiness: evidence from Chinese R&D team | [
{
"docid": "adcaa15fd8f1e7887a05d3cb1cd47183",
"text": "The dynamic capabilities framework analyzes the sources and methods of wealth creation and capture by private enterprise firms operating in environments of rapid technological change. The competitive advantage of firms is seen as resting on distinctive processes (ways of coordinating and combining), shaped by the firm's (specific) asset positions (such as the firm's portfolio of difftcult-to-trade knowledge assets and complementary assets), and the evolution path(s) it has aflopted or inherited. The importance of path dependencies is amplified where conditions of increasing retums exist. Whether and how a firm's competitive advantage is eroded depends on the stability of market demand, and the ease of replicability (expanding intemally) and imitatability (replication by competitors). If correct, the framework suggests that private wealth creation in regimes of rapid technological change depends in large measure on honing intemal technological, organizational, and managerial processes inside the firm. In short, identifying new opportunities and organizing effectively and efficiently to embrace them are generally more fundamental to private wealth creation than is strategizing, if by strategizing one means engaging in business conduct that keeps competitors off balance, raises rival's costs, and excludes new entrants. © 1997 by John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "cbf878cd5fbf898bdf88a2fcf5024826",
"text": "Hypotheses involving mediation are common in the behavioral sciences. Mediation exists when a predictor affects a dependent variable indirectly through at least one intervening variable, or mediator. Methods to assess mediation involving multiple simultaneous mediators have received little attention in the methodological literature despite a clear need. We provide an overview of simple and multiple mediation and explore three approaches that can be used to investigate indirect processes, as well as methods for contrasting two or more mediators within a single model. We present an illustrative example, assessing and contrasting potential mediators of the relationship between the helpfulness of socialization agents and job satisfaction. We also provide SAS and SPSS macros, as well as Mplus and LISREL syntax, to facilitate the use of these methods in applications.",
"title": ""
}
] | [
{
"docid": "8788f14a2615f3065f4f0656a4a66592",
"text": "The ability to communicate in natural language has long been considered a defining characteristic of human intelligence. Furthermore, we hold our ability to express ideas in writing as a pinnacle of this uniquely human language facility—it defies formulaic or algorithmic specification. So it comes as no surprise that attempts to devise computer programs that evaluate writing are often met with resounding skepticism. Nevertheless, automated writing-evaluation systems might provide precisely the platforms we need to elucidate many of the features that characterize good and bad writing, and many of the linguistic, cognitive, and other skills that underlie the human capacity for both reading and writing. Using computers to increase our understanding of the textual features and cognitive skills involved in creating and comprehending written text will have clear benefits. It will help us develop more effective instructional materials for improving reading, writing, and other human communication abilities. It will also help us develop more effective technologies, such as search engines and questionanswering systems, for providing universal access to electronic information. A sketch of the brief history of automated writing-evaluation research and its future directions might lend some credence to this argument.",
"title": ""
},
{
"docid": "d6e565c0123049b9e11692b713674ccf",
"text": "Now days many research is going on for text summari zation. Because of increasing information in the internet, these kind of research are gaining more a nd more attention among the researchers. Extractive text summarization generates a brief summary by extracti ng proper set of sentences from a document or multi ple documents by deep learning. The whole concept is to reduce or minimize the important information prese nt in the documents. The procedure is manipulated by Rest rict d Boltzmann Machine (RBM) algorithm for better efficiency by removing redundant sentences. The res tricted Boltzmann machine is a graphical model for binary random variables. It consist of three layers input, hidden and output layer. The input data uni formly distributed in the hidden layer for operation. The experimentation is carried out and the summary is g enerated for three different document set from different kno wledge domain. The f-measure value is the identifie r to the performance of the proposed text summarization meth od. The top responses of the three different knowle dge domain in accordance with the f-measure are 0.85, 1 .42 and 1.97 respectively for the three document se t.",
"title": ""
},
{
"docid": "71ac262257aacc838b2027fe061a2f56",
"text": "In Part I of this paper, a novel motion simulator platform is presented, the DLR Robot Motion Simulator with 7 degrees of freedom (DOF). In this Part II, a path-planning algorithm for mentioned platform will be discussed. By replacing the widely used hexapod kinematics by an antropomorhic, industrial robot arm mounted on a standard linear axis, a comparably larger workspace at lower hardware costs can be achieved. But the serial, redundant kinematics of the industrial robot system also introduces challenges for the path-planning as singularities in the workspace, varying movability of the system and the handling of robot system's kinematical redundancy. By solving an optimization problem with constraints in every sampling step, a feasible trajectory can be generated, fulfilling the task of motion cueing, while respecting the robot's dynamic constraints.",
"title": ""
},
{
"docid": "02d8c55750904b7f4794139bcfa51693",
"text": "BACKGROUND\nMore than one-third of deaths during the first five years of life are attributed to undernutrition, which are mostly preventable through economic development and public health measures. To alleviate this problem, it is necessary to determine the nature, magnitude and determinants of undernutrition. However, there is lack of evidence in agro-pastoralist communities like Bule Hora district. Therefore, this study assessed magnitude and factors associated with undernutrition in children who are 6-59 months of age in agro-pastoral community of Bule Hora District, South Ethiopia.\n\n\nMETHODS\nA community based cross-sectional study design was used to assess the magnitude and factors associated with undernutrition in children between 6-59 months. A structured questionnaire was used to collect data from 796 children paired with their mothers. Anthropometric measurements and determinant factors were collected. SPSS version 16.0 statistical software was used for analysis. Bivariate and multivariate logistic regression analyses were conducted to identify factors associated to nutritional status of the children Statistical association was declared significant if p-value was less than 0.05.\n\n\nRESULTS\nAmong study participants, 47.6%, 29.2% and 13.4% of them were stunted, underweight, and wasted respectively. Presence of diarrhea in the past two weeks, male sex, uneducated fathers and > 4 children ever born to a mother were significantly associated with being underweight. Presence of diarrhea in the past two weeks, male sex and pre-lacteal feeding were significantly associated with stunting. Similarly, presence of diarrhea in the past two weeks, age at complementary feed was started and not using family planning methods were associated to wasting.\n\n\nCONCLUSION\nUndernutrition is very common in under-five children of Bule Hora district. Factors associated to nutritional status of children in agro-pastoralist are similar to the agrarian community. 
Diarrheal morbidity was associated with all forms of Protein energy malnutrition. Family planning utilization decreases the risk of stunting and underweight. Feeding practices (pre-lacteal feeding and complementary feeding practice) were also related to undernutrition. Thus, nutritional intervention program in Bule Hora district in Ethiopia should focus on these factors.",
"title": ""
},
{
"docid": "06e708b307a0518ec681e8a6d272d558",
"text": "Augmented reality (AR) in surgery consists in the fusion of synthetic computer-generated images (3D virtual model) obtained from medical imaging preoperative workup and real-time patient images in order to visualize unapparent anatomical details. The 3D model could be used for a preoperative planning of the procedure. The potential of AR navigation as a tool to improve safety of the surgical dissection is outlined for robotic hepatectomy. Three patients underwent a fully robotic and AR-assisted hepatic segmentectomy. The 3D virtual anatomical model was obtained using a thoracoabdominal CT scan with a customary software (VR-RENDER®, IRCAD). The model was then processed using a VR-RENDER® plug-in application, the Virtual Surgical Planning (VSP®, IRCAD), to delineate surgical resection planes including the elective ligature of vascular structures. Deformations associated with pneumoperitoneum were also simulated. The virtual model was superimposed to the operative field. A computer scientist manually registered virtual and real images using a video mixer (MX 70; Panasonic, Secaucus, NJ) in real time. Two totally robotic AR segmentectomy V and one segmentectomy VI were performed. AR allowed for the precise and safe recognition of all major vascular structures during the procedure. Total time required to obtain AR was 8 min (range 6–10 min). Each registration (alignment of the vascular anatomy) required a few seconds. Hepatic pedicle clamping was never performed. At the end of the procedure, the remnant liver was correctly vascularized. Resection margins were negative in all cases. The postoperative period was uneventful without perioperative transfusion. AR is a valuable navigation tool which may enhance the ability to achieve safe surgical resection during robotic hepatectomy.",
"title": ""
},
{
"docid": "4a6ee237d0ebebce741e40279009a333",
"text": "This paper describes the latest version of the ABC metadata model. This model has been developed within the Harmony international digital library project to provide a common conceptual model to facilitate interoperability between metadata vocabularies from different domains. This updated ABC model is the result of collaboration with the CIMI consortium whereby earlier versions of the ABC model were applied to metadata descriptions of complex objects provided by CIMI museums and libraries. The result is a metadata model with more logically grounded time and entity semantics. Based on this model we have been able to build a metadata repository of RDF descriptions and a search interface which is capable of more sophisticated queries than less-expressive, object-centric metadata models will allow.",
"title": ""
},
{
"docid": "75aa71e270d85df73fa97336d2a6b713",
"text": "Designing powerful tools that support cooking activities has rapidly gained popularity due to the massive amounts of available data, as well as recent advances in machine learning that are capable of analyzing them. In this paper, we propose a cross-modal retrieval model aligning visual and textual data (like pictures of dishes and their recipes) in a shared representation space. We describe an effective learning scheme, capable of tackling large-scale problems, and validate it on the Recipe1M dataset containing nearly 1 million picture-recipe pairs. We show the effectiveness of our approach regarding previous state-of-the-art models and present qualitative results over computational cooking use cases.",
"title": ""
},
{
"docid": "d0b29493c64e787ed88ad8166d691c3d",
"text": "Mobile apps have to satisfy various privacy requirements. Notably, app publishers are often obligated to provide a privacy policy and notify users of their apps’ privacy practices. But how can a user tell whether an app behaves as its policy promises? In this study we introduce a scalable system to help analyze and predict Android apps’ compliance with privacy requirements. We discuss how we customized our system in a collaboration with the California Office of the Attorney General. Beyond its use by regulators and activists our system is also meant to assist app publishers and app store owners in their internal assessments of privacy requirement compliance. Our analysis of 17,991 free Android apps shows the viability of combining machine learning-based privacy policy analysis with static code analysis of apps. Results suggest that 71% of apps tha lack a privacy policy should have one. Also, for 9,050 apps that have a policy, we find many instances of potential inconsistencies between what the app policy seems to state and what the code of the app appears to do. In particular, as many as 41% of these apps could be collecting location information and 17% could be sharing such with third parties without disclosing so in their policies. Overall, each app exhibits a mean of 1.83 potential privacy requirement inconsistencies.",
"title": ""
},
{
"docid": "8c864e944afa69696cfb4f87c4344a07",
"text": "In this study, we examined physician acceptance behavior of the electronic medical record (EMR) exchange. Although several prior studies have focused on factors that affect the adoption or use of EMRs, empirical study that captures the success factors that encourage physicians to adopt the EMR exchange is limited. Therefore, drawing on institutional trust integrated with the decomposed theory of planned behavior (TPB) model, we propose a theoretical model to examine physician intentions of using the EMR exchange. A field survey was conducted in Taiwan to collect data from physicians. Structural equation modeling (SEM) using the partial least squares (PLS) method was employed to test the research model. The results showed that the usage intention of physicians is significantly influenced by 4 factors (i.e., attitude, subjective norm, perceived behavior control, and institutional trust). These 4 factors were assessed by their perceived usefulness and compatibility, facilitating conditions and self-efficacy, situational normality, and structural assurance, respectively. The results also indicated that institutional trust integrated with the decomposed TPB model provides an improved method for predicting physician intentions to use the EMR exchange. Finally, the implications of this study are discussed.",
"title": ""
},
{
"docid": "d5955aa10ee95527bd7a3d13479d4018",
"text": "As urbanisation increases globally and the natural environment becomes increasingly fragmented, the importance of urban green spaces for biodiversity conservation grows. In many countries, private gardens are a major component of urban green space and can provide considerable biodiversity benefits. Gardens and adjacent habitats form interconnected networks and a landscape ecology framework is necessary to understand the relationship between the spatial configuration of garden patches and their constituent biodiversity. A scale-dependent tension is apparent in garden management, whereby the individual garden is much smaller than the unit of management needed to retain viable populations. To overcome this, here we suggest mechanisms for encouraging 'wildlife-friendly' management of collections of gardens across scales from the neighbourhood to the city.",
"title": ""
},
{
"docid": "6478097f207482543c0db12b518be82b",
"text": "What is a good test case? One that reveals potential defects with good cost-effectiveness. We provide a generic model of faults and failures, formalize it, and present its various methodological usages for test case generation.",
"title": ""
},
{
"docid": "0e803e853422328aeef59e426410df48",
"text": "We present WatchWriter, a finger operated keyboard that supports both touch and gesture typing with statistical decoding on a smartwatch. Just like on modern smartphones, users type one letter per tap or one word per gesture stroke on WatchWriter but in a much smaller spatial scale. WatchWriter demonstrates that human motor control adaptability, coupled with modern statistical decoding and error correction technologies developed for smartphones, can enable a surprisingly effective typing performance despite the small watch size. In a user performance experiment entirely run on a smartwatch, 36 participants reached a speed of 22-24 WPM with near zero error rate.",
"title": ""
},
{
"docid": "1e972c454587c5a3b24386f2b6ffc8fa",
"text": "Three classic cases and one exceptional case are reported. The unique case of decapitation took place in a traffic accident, while the others were seen after homicide, vehicle-assisted suicide, and after long-jump hanging. Thorough scene examinations were performed, and photographs from the scene were available in all cases. Through the autopsy of each case, the mechanism for the decapitation in each case was revealed. The severance lines were through the neck and the cervical vertebral column, except for in the motor vehicle accident case, where the base of skull was fractured. This case was also unusual as the mechanism was blunt force. In the homicide case, the mechanism was the use of a knife combined with a saw, while in the two last cases, a ligature made the cut through the neck. The different mechanisms in these decapitations are suggested.",
"title": ""
},
{
"docid": "d4ac52a52e780184359289ecb41e321e",
"text": "Interleaving is an increasingly popular technique for evaluating information retrieval systems based on implicit user feedback. While a number of isolated studies have analyzed how this technique agrees with conventional offline evaluation approaches and other online techniques, a complete picture of its efficiency and effectiveness is still lacking. In this paper we extend and combine the body of empirical evidence regarding interleaving, and provide a comprehensive analysis of interleaving using data from two major commercial search engines and a retrieval system for scientific literature. In particular, we analyze the agreement of interleaving with manual relevance judgments and observational implicit feedback measures, estimate the statistical efficiency of interleaving, and explore the relative performance of different interleaving variants. We also show how to learn improved credit-assignment functions for clicks that further increase the sensitivity of interleaving.",
"title": ""
},
{
"docid": "1547a67fd88ac720f4521a206a26dff3",
"text": "A core business in the fashion industry is the understanding and prediction of customer needs and trends. Search engines and social networks are at the same time a fundamental bridge and a costly middleman between the customer’s purchase intention and the retailer. To better exploit Europe’s distinctive characteristics e.g., multiple languages, fashion and cultural differences, it is pivotal to reduce retailers’ dependence to search engines. This goal can be achieved by harnessing various data channels (manufacturers and distribution networks, online shops, large retailers, social media, market observers, call centers, press/magazines etc.) that retailers can leverage in order to gain more insight about potential buyers, and on the industry trends as a whole. This can enable the creation of novel on-line shopping experiences, the detection of influencers, and the prediction of upcoming fashion trends. In this paper, we provide an overview of the main research challenges and an analysis of the most promising technological solutions that we are investigating in the FashionBrain project.",
"title": ""
},
{
"docid": "5dce9f3c1ec0cb65ec98c9c5ecdaf549",
"text": "As organizational environments become more global, dynamic, and competitive, contradictory demands intensify. To understand and explain such tensions, academics and practitioners are increasingly adopting a paradox lens. We review the paradox literature, categorizing types and highlighting fundamental debates. We then present a dynamic equilibrium model of organizing, which depicts how cyclical responses to paradoxical tensions enable sustainability—peak performance in the present that enables success in the future. This review and the model provide the foundation of a theory of paradox.",
"title": ""
},
{
"docid": "909d9d1b9054586afc4b303e94acae73",
"text": "Humans learn to solve tasks of increasing complexity by building on top of previously acquired knowledge. Typically, there exists a natural progression in the tasks that we learn – most do not require completely independent solutions, but can be broken down into simpler subtasks. We propose to represent a solver for each task as a neural module that calls existing modules (solvers for simpler tasks) in a program-like manner. Lower modules are a black box to the calling module, and communicate only via a query and an output. Thus, a module for a new task learns to query existing modules and composes their outputs in order to produce its own output. Each module also contains a residual component that learns to solve aspects of the new task that lower modules cannot solve. Our model effectively combines previous skill-sets, does not suffer from forgetting, and is fully differentiable. We test our model in learning a set of visual reasoning tasks, and demonstrate state-ofthe-art performance in Visual Question Answering, the highest-level task in our task set. By evaluating the reasoning process using non-expert human judges, we show that our model is more interpretable than an attention-based baseline.",
"title": ""
},
{
"docid": "1d1fdf869a30a8ba9437e3b18bc8c872",
"text": "Automated nuclear detection is a critical step for a number of computer assisted pathology related image analysis algorithms such as for automated grading of breast cancer tissue specimens. The Nottingham Histologic Score system is highly correlated with the shape and appearance of breast cancer nuclei in histopathological images. However, automated nucleus detection is complicated by 1) the large number of nuclei and the size of high resolution digitized pathology images, and 2) the variability in size, shape, appearance, and texture of the individual nuclei. Recently there has been interest in the application of “Deep Learning” strategies for classification and analysis of big image data. Histopathology, given its size and complexity, represents an excellent use case for application of deep learning strategies. In this paper, a Stacked Sparse Autoencoder (SSAE), an instance of a deep learning strategy, is presented for efficient nuclei detection on high-resolution histopathological images of breast cancer. The SSAE learns high-level features from just pixel intensities alone in order to identify distinguishing features of nuclei. A sliding window operation is applied to each image in order to represent image patches via high-level features obtained via the auto-encoder, which are then subsequently fed to a classifier which categorizes each image patch as nuclear or non-nuclear. Across a cohort of 500 histopathological images (2200 × 2200) and approximately 3500 manually segmented individual nuclei serving as the groundtruth, SSAE was shown to have an improved F-measure 84.49% and an average area under Precision-Recall curve (AveP) 78.83%. The SSAE approach also out-performed nine other state of the art nuclear detection strategies.",
"title": ""
},
{
"docid": "951ad18af2b3c9b0ca06147b0c804f65",
"text": "Food photos are widely used in food logs for diet monitoring and in social networks to share social and gastronomic experiences. A large number of these images are taken in restaurants. Dish recognition in general is very challenging, due to different cuisines, cooking styles, and the intrinsic difficulty of modeling food from its visual appearance. However, contextual knowledge can be crucial to improve recognition in such scenario. In particular, geocontext has been widely exploited for outdoor landmark recognition. Similarly, we exploit knowledge about menus and location of restaurants and test images. We first adapt a framework based on discarding unlikely categories located far from the test image. Then, we reformulate the problem using a probabilistic model connecting dishes, restaurants, and locations. We apply that model in three different tasks: dish recognition, restaurant recognition, and location refinement. Experiments on six datasets show that by integrating multiple evidences (visual, location, and external knowledge) our system can boost the performance in all tasks.",
"title": ""
},
{
"docid": "f0ea768c020a99ac3ed144b76893dbd9",
"text": "This paper focuses on tracking dynamic targets using a low cost, commercially available drone. The approach presented utilizes a computationally simple potential field controller expanded to operate not only on relative positions, but also relative velocities. A brief background on potential field methods is given, and the design and implementation of the proposed controller is presented. Experimental results using an external motion capture system for localization demonstrate the ability of the drone to track a dynamic target in real time as well as avoid obstacles in its way.",
"title": ""
}
] | scidocsrr |
aac8e2bf092df3bb768346be81c23efc | Direct Ray Tracing of Displacement Mapped Triangles | [
{
"docid": "b9bb07dd039c0542a7309f2291732f82",
"text": "Recent progress in acquiring shape from range data permits the acquisition of seamless million-polygon meshes from physical models. In this paper, we present an algorithm and system for converting dense irregular polygon meshes of arbitrary topology into tensor product B-spline surface patches with accompanying displacement maps. This choice of representation yields a coarse but efficient model suitable for animation and a fine but more expensive model suitable for rendering. The first step in our process consists of interactively painting patch boundaries over a rendering of the mesh. In many applications, interactive placement of patch boundaries is considered part of the creative process and is not amenable to automation. The next step is gridded resampling of each boundedsection of the mesh. Our resampling algorithm lays a grid of springs across the polygon mesh, then iterates between relaxing this grid and subdividing it. This grid provides a parameterization for the mesh section, which is initially unparameterized. Finally, we fit a tensor product B-spline surface to the grid. We also output a displacement map for each mesh section, which represents the error between our fitted surface and the spring grid. These displacement maps are images; hence this representation facilitates the use of image processing operators for manipulating the geometric detail of an object. They are also compatible with modern photo-realistic rendering systems. Our resampling and fitting steps are fast enough to surface a million polygon mesh in under 10 minutes important for an interactive system. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling —curve, surface and object representations; I.3.7[Computer Graphics]:Three-Dimensional Graphics and Realism—texture; J.6[Computer-Aided Engineering]:ComputerAided Design (CAD); G.1.2[Approximation]:Spline Approximation Additional",
"title": ""
},
{
"docid": "42d1368bf2c5e659f9e9a215e1ebbd4c",
"text": "The quality of computer generated images of three-dimensional scenes depends on the shading technique used to paint the objects on the cathode-ray tube screen. The shading algorithm itself depends in part on the method for modeling the object, which also determines the hidden surface algorithm. The various methods of object modeling, shading, and hidden surface removal are thus strongly interconnected. Several shading techniques corresponding to different methods of object modeling and the related hidden surface algorithms are presented here. Human visual perception and the fundamental laws of optics are considered in the development of a shading rule that provides better quality and increased realism in generated images.",
"title": ""
}
] | [
{
"docid": "21b04c71f6c87b18f544f6b3f6570dd7",
"text": "Fuzzy logic methods have been used successfully in many real-world applications, but the foundations of fuzzy logic remain under attack. Taken together, these two facts constitute a paradox. A second paradox is that almost all of the successful fuzzy logic applications are embedded controllers, while most of the theoretical papers on fuzzy methods deal with knowledge representation and reasoning. I hope to resolve these paradoxes by identifying which aspects of fuzzy logic render it useful in practice, and which aspects are inessential. My conclusions are based on a mathematical result, on a survey of literature on the use of fuzzy logic in heuristic control and in expert systems, and on practical experience in developing expert systems.<<ETX>>",
"title": ""
},
{
"docid": "16d1ade9aa0c9966905441752c9ea90c",
"text": "Many agricultural studies rely on infrared sensors for remote measurement of surface temperatures for crop status monitoring and estimating sensible and latent heat fluxes. Historically, applications for these non-contact thermometers employed the use of hand-held or stationary industrial infrared thermometers (IRTs) wired to data loggers. Wireless sensors in agricultural applications are a practical alternative, but the availability of low cost wireless IRTs is limited. In this study, we designed prototype narrow (10◦) field of view wireless infrared sensor modules and evaluated the performance of the IRT sensor by comparing temperature readings of an object (Tobj) against a blackbody calibrator in a controlled temperature room at ambient temperatures of 15 ◦C, 25 ◦C, 35 ◦C, and 45 ◦C. Additional comparative readings were taken over plant and soil samples alongside a hand-held IRT and over an isothermal target in the outdoors next to a wired IRT. The average root mean square error (RMSE) and mean absolute error (MAE) between the collected IRT object temperature readings and the blackbody target ranged between 0.10 and 0.79 ◦C. The wireless IRT readings also compared well with the hand-held IRT and wired industrial IRT. Additional tests performed to investigate the influence of direct radiation on IRT measurements indicated that housing the sensor in white polyvinyl chloride provided ample shielding for the self-compensating circuitry of the IR detector. The relatively low cost of the wireless IRT modules and repeatable measurements against a blackbody calibrator and commercial IR thermometers demonstrated that these wireless prototypes have the potential to provide accurate surface radiometric temperature readings in outdoor applications. Further studies are needed to thoroughly test radio frequency communication and power consumption characteristics in an outdoor setting. Published by Elsevier B.V.",
"title": ""
},
{
"docid": "c601040737c42abcef996e027fabc8cf",
"text": "This article assumes that brands should be managed as valuable, long-term corporate assets. It is proposed that for a true brand asset mindset to be achieved, the relationship between brand loyalty and brand value needs to be recognised within the management accounting system. It is also suggested that strategic brand management is achieved by having a multi-disciplinary focus, which is facilitated by a common vocabulary. This article seeks to establish the relationships between the constructs and concepts of branding, and to provide a framework and vocabulary that aids effective communication between the functions of accounting and marketing. Performance measures for brand management are also considered, and a model for the management of brand equity is provided. Very simply, brand description (or identity or image) is tailored to the needs and wants of a target market using the marketing mix of product, price, place, and promotion. The success or otherwise of this process determines brand strength or the degree of brand loyalty. A brand's value is determined by the degree of brand loyalty, as this implies a guarantee of future cash flows. Feldwick considered that using the term brand equity creates the illusion that an operational relationship exists between brand description, brand strength and brand value that cannot be demonstrated to operate in practice. This is not surprising, given that brand description and brand strength are, broadly speaking, within the remit of marketers and brand value has been considered largely an accounting issue. However, for brands to be managed strategically as long-term assets, the relationship outlined in Figure 1 needs to be operational within the management accounting system. The efforts of managers of brands could be reviewed and assessed by the measurement of brand strength and brand value, and brand strategy modified accordingly. 
Whilst not a simple process, the measurement of outcomes is useful as part of a range of diagnostic tools for management. This is further explored in the summary discussion. Whilst there remains a diversity of opinion on the definition and basis of brand equity, most approaches consider brand equity to be a strategic issue, albeit often implicitly. The following discussion explores the range of interpretations of brand equity, showing how they relate to Feldwick's (1996) classification. Ambler and Styles (1996) suggest that managers of brands choose between taking profits today or storing them for the future, with brand equity being the “... store of profits to be realised at a later date.” Their definition follows Srivastava and Shocker (1991) with brand equity suggested as: “... the aggregation of all accumulated attitudes and behavior patterns in the extended minds of consumers, distribution channels and influence agents, which will enhance future profits and long term cash flow.” This definition of brand equity distinguishes the brand asset from its valuation, and falls into Feldwick's (1996) brand strength category of brand equity. This approach is intrinsically strategic in nature, with the emphasis away from short-term profits. Davis (1995) also emphasises the strategic importance of brand equity when he defines brand value (one form of brand equity) as “... the potential strategic contributions and benefits that a brand can make to a company.” In this definition, brand value is the resultant form of brand equity in Figure 1, or the outcome of consumer-based brand equity. Keller (1993) also takes the consumer-based brand strength approach to brand equity, suggesting that brand equity represents a condition in which the consumer is familiar with the brand and recalls some favourable, strong and unique brand associations. Hence, there is a differential effect of brand knowledge on consumer response to the marketing of a brand. 
This approach is aligned to the relationship described in Figure 1, where brand strength is a function of brand description. Winters (1991) relates brand equity to added value by suggesting that brand equity involves the value added to a product by consumers' associations and perceptions of a particular brand name. It is unclear in what way added value is being used, but brand equity fits the categories of brand description and brand strength as outlined above. Leuthesser (1988) offers a broad definition of brand equity as: the set of associations and behaviour on the part of a brand's customers, channel members and parent corporation that permits the brand to earn greater volume or greater margins than it could without the brand name. This definition covers Feldwick's classifications of brand description and brand strength implying a similar relationship to that outlined in Figure 1. The key difference to Figure 1 is that the outcome of brand strength is not specified as brand value, but implies market share, and profit as outcomes. Marketers tend to describe, rather than ascribe a figure to, the outcomes of brand strength. Pitta and Katsanis (1995) suggest that brand equity increases the probability of brand choice, leads to brand loyalty and “insulates the brand from a measure of competitive threats.” Aaker (1991) suggests that strong brands will usually provide higher profit margins and better access to distribution channels, as well as providing a broad platform for product line extensions. Brand extension[1] is a commonly cited advantage of high brand equity, with Dacin and Smith (1994) and Keller and Aaker (1992) suggesting that successful brand extensions can also build brand equity. Loken and John (1993) and Aaker (1993) advise caution in that poor brand extensions can erode brand equity. 
Figure 1 The brand equity chain [663] Lisa Wood, Brands and brand equity: definition and management, Management Decision 38/9 [2000] 662-669. Farquhar (1989) suggests a relationship between high brand equity and market power asserting that: The competitive advantage of firms that have brands with high equity includes the opportunity for successful extensions, resilience against competitors' promotional pressures, and creation of barriers to competitive entry. This relationship is summarised in Figure 2. Figure 2 indicates that there can be more than one outcome determined by brand strength apart from brand value. It should be noted that it is argued by Wood (1999) that brand value measurements could be used as an indicator of market power. Achieving a high degree of brand strength may be considered an important objective for managers of brands. If we accept that the relationships highlighted in Figures 1 and 2 are something that we should be aiming for, then it is logical to focus our attention on optimising brand description. This requires a rich understanding of the brand construct itself. Yet, despite an abundance of literature, the definitive brand construct has yet to be produced. Subsequent discussion explores the brand construct itself, and highlights the specific relationship between brands and added value. This relationship is considered to be key to the variety of approaches to brand definition within marketing, and is currently an area of incompatibility between marketing and accounting.",
"title": ""
},
{
"docid": "ee8a549332f184a4be0a77dae0437bbc",
"text": "Extracting question-answer pairs from online forums is a meaningful work due to the huge amount of valuable user generated resource contained in forums. In this paper we consider the problem of extracting Chinese question-answer pairs for the first time. We present a strategy to detect Chinese questions and their answers. We propose a sequential rule based method to find questions in a forum thread, then we adopt nontextual features based on forum structure to improve the performance of answer detecting in the same thread. Experimental results show that our techniques are very effective.",
"title": ""
},
{
"docid": "de0d2808f949723f1c0ee8e87052f889",
"text": "The notion of Cloud computing has not only reshaped the field of distributed systems but also fundamentally changed how businesses utilize computing today. While Cloud computing provides many advanced features, it still has some shortcomings such as the relatively high operating cost for both public and private Clouds. The area of Green computing is also becoming increasingly important in a world with limited energy resources and an ever-rising demand for more computational power. In this paper a new framework is presented that provides efficient green enhancements within a scalable Cloud computing architecture. Using power-aware scheduling techniques, variable resource management, live migration, and a minimal virtual machine design, overall system efficiency will be vastly improved in a data center based Cloud with minimal performance overhead.",
"title": ""
},
{
"docid": "e1366b0128c4d76addd57bb2b02a19b5",
"text": "OBJECTIVE\nThe present study examined the association between child sexual abuse (CSA) and sexual health outcomes in young adult women. Maladaptive coping strategies and optimism were investigated as possible mediators and moderators of this relationship.\n\n\nMETHOD\nData regarding sexual abuse, coping, optimism and various sexual health outcomes were collected using self-report and computerized questionnaires with a sample of 889 young adult women from the province of Quebec aged 20-23 years old.\n\n\nRESULTS\nA total of 31% of adult women reported a history of CSA. Women reporting a severe CSA were more likely to report more adverse sexual health outcomes including suffering from sexual problems and engaging in more high-risk sexual behaviors. CSA survivors involving touching only were at greater risk of reporting more negative sexual self-concept such as experiencing negative feelings during sex than were non-abused participants. Results indicated that emotion-oriented coping mediated outcomes related to negative sexual self-concept while optimism mediated outcomes related to both, negative sexual self-concept and high-risk sexual behaviors. No support was found for any of the proposed moderation models.\n\n\nCONCLUSIONS\nSurvivors of more severe CSA are more likely to engage in high-risk sexual behaviors that are potentially harmful to their health as well as to experience more sexual problems than women without a history of sexual victimization. Personal factors, namely emotion-oriented coping and optimism, mediated some sexual health outcomes in sexually abused women. The results suggest that maladaptive coping strategies and optimism regarding the future may be important targets for interventions optimizing sexual health and sexual well-being in CSA survivors.",
"title": ""
},
{
"docid": "c3b3d0343f0ed86de7a3c704b0164382",
"text": "A broadband design of the microstrip-fed modified quasi-Yagi antenna is presented. The two arms of the driving dipole are connected separately to two microstrip sections tapered from the feeding microstrip line and its truncated ground plane. The end points of the two tapered sections can be suitably adjusted to obtain a 10-dB return loss bandwidth more than 50%. Measured radiation patterns are end-fire and the in-band peak gains range from 3.9 to 7.2 dBi. Details of the antenna design and the experimental results are presented and discussed.",
"title": ""
},
{
"docid": "e849cdf1237792fdf7bcded91c35c398",
"text": "Purpose – System usage and user satisfaction are widely accepted and used as surrogate measures of IS success. Past studies attempted to explore the relationship between system usage and user satisfaction but findings are mixed, inconclusive and misleading. The main objective of this research is to better understand and explain the nature and strength of the relationship between system usage and user satisfaction by resolving the existing inconsistencies in the IS research and to validate this relationship empirically as defined in Delone and McLean’s IS success model. Design/methodology/approach – “Meta-analysis” as a research approach was adopted because of its suitability regarding the nature of the research and its capability of dealing with exploring relationships that may be obscured in other approaches to synthesize research findings. Meta-analysis findings contributed towards better explaining the relationship between system usage and user satisfaction, the main objectives of this research. Findings – This research examines critically the past findings and resolves the existing inconsistencies. The meta-analysis findings explain that there exists a significant positive relationship between “system usage” and “user satisfaction” (i.e. r 1⁄4 0:2555) although not very strong. This research empirically validates this relationship that has already been proposed by Delone and McLean in their IS success model. Provides a guide for future research to explore the mediating variables that might affect the relationship between system usage and user satisfaction. Originality/value – This research better explains the relationship between system usage and user satisfaction by resolving contradictory findings in the past research and contributes to the existing body of knowledge relating to IS success.",
"title": ""
},
{
"docid": "b8808d637dcb8bbb430d68196587b3a4",
"text": "Crowd sourcing is based on a simple but powerful concept: Virtually anyone has the potential to plug in valuable information. The concept revolves around large groups of people or community handling tasks that have traditionally been associated with a specialist or small group of experts. With the advent of the smart devices, many mobile applications are already tapping into crowd sourcing to report community issues and traffic problems, but more can be done. While most of these applications work well for the average user, it neglects the information needs of particular user communities. We present CROWDSAFE, a novel convergence of Internet crowd sourcing and portable smart devices to enable real time, location based crime incident searching and reporting. It is targeted to users who are interested in crime information. The system leverages crowd sourced data to provide novel features such as a Safety Router and value added crime analytics. We demonstrate the system by using crime data in the metropolitan Washington DC area to show the effectiveness of our approach. Also highlighted is its ability to facilitate greater collaboration between citizens and civic authorities. Such collaboration shall foster greater innovation to turn crime data analysis into smarter and safe decisions for the public.",
"title": ""
},
{
"docid": "bdbd3d65c79e4f22d2e85ac4137ee67a",
"text": "With the advances in new-generation information technologies, especially big data and digital twin, smart manufacturing is becoming the focus of global manufacturing transformation and upgrading. Intelligence comes from data. Integrated analysis for the manufacturing big data is beneficial to all aspects of manufacturing. Besides, the digital twin paves a way for the cyber-physical integration of manufacturing, which is an important bottleneck to achieve smart manufacturing. In this paper, the big data and digital twin in manufacturing are reviewed, including their concept as well as their applications in product design, production planning, manufacturing, and predictive maintenance. On this basis, the similarities and differences between big data and digital twin are compared from the general and data perspectives. Since the big data and digital twin can be complementary, how they can be integrated to promote smart manufacturing are discussed.",
"title": ""
},
{
"docid": "141ecc1fe0c33bfd647e4d62956f0212",
"text": "a Emerging Markets Research Centre (EMaRC), School of Management, Swansea University Bay Campus, Fabian Way, Swansea SA1 8EN, Wales, UK b Section of Information & Communication Technology, Faculty of Technology, Policy, and Management, Delft University of Technology, The Netherlands c Nottingham Business School, Nottingham Trent University, UK d School of Management, Swansea University Bay Campus, Fabian Way, Swansea SA1 8EN, Wales, UK e School of Management, Swansea University Bay Campus, Fabian Way, Crymlyn Burrows, Swansea, SA1 8EN, Wales, UK",
"title": ""
},
{
"docid": "9cc04311cc991af56a69267a5a22aa37",
"text": "Adversarial samples are strategically modified samples, which are crafted with the purpose of fooling a classifier at hand. An attacker introduces specially crafted adversarial samples to a deployed classifier, which are being mis-classified by the classifier. However, the samples are perceived to be drawn from entirely different classes and thus it becomes hard to detect the adversarial samples. Most of the prior works have been focused on synthesizing adversarial samples in the image domain. In this paper, we propose a new method of crafting adversarial text samples by modification of the original samples. Modifications of the original text samples are done by deleting or replacing the important or salient words in the text or by introducing new words in the text sample. Our algorithm works best for the datasets which have sub-categories within each of the classes of examples. While crafting adversarial samples, one of the key constraint is to generate meaningful sentences which can at pass off as legitimate from language (English) viewpoint. Experimental results on IMDB movie review dataset for sentiment analysis and Twitter dataset for gender detection show the efficiency of our proposed method.",
"title": ""
},
{
"docid": "805ff3489d9bc145a0a8b91ce58ce3f9",
"text": "The present experiment was designed to test the theory that psychological procedures achieve changes in behavior by altering the level and strength of self-efficacy. In this formulation, perceived self-efficacy. In this formulation, perceived self-efficacy influences level of performance by enhancing intensity and persistence of effort. Adult phobics were administered treatments based upon either performance mastery experiences, vicarious experiences., or they received no treatment. Their efficacy expectations and approach behavior toward threats differing on a similarity dimension were measured before and after treatment. In accord with our prediction, the mastery-based treatment produced higher, stronger, and more generalized expectations of personal efficacy than did the treatment relying solely upon vicarious experiences. Results of a microanalysis further confirm the hypothesized relationship between self-efficacy and behavioral change. Self-efficacy was a uniformly accurate predictor of performance on tasks of varying difficulty with different threats regardless of whether the changes in self-efficacy were produced through enactive mastery or by vicarious experience alone.",
"title": ""
},
{
"docid": "f573c79dde4ce12c234df084dea149b4",
"text": "The presence of geometric details on object surfaces dramatically changes the way light interacts with these surfaces. Although synthesizing realistic pictures requires simulating this interaction as faithfully as possible, explicitly modeling all the small details tends to be impractical. To address these issues, an image-based technique called relief mapping has recently been introduced for adding per-fragment details onto arbitrary polygonal models (Policarpo et al. 2005). The technique has been further extended to render correct silhouettes (Oliveira and Policarpo 2005) and to handle non-height-field surface details (Policarpo and Oliveira 2006). In all its variations, the ray-height-field intersection is performed using a binary search, which refines the result produced by some linear search procedure. While the binary search converges very fast, the linear search (required to avoid missing large structures) is prone to aliasing, by possibly missing some thin structures, as is evident in Figure 18-1a. Several space-leaping techniques have since been proposed to accelerate the ray-height-field intersection and to minimize the occurrence of aliasing (Donnelly 2005, Dummer 2006, Baboud and Décoret 2006). Cone step mapping (CSM) (Dummer 2006) provides a clever solution to accelerate the intersection calculation for the average case and avoids skipping height-field structures by using some precomputed data (a cone map). However, because CSM uses a conservative approach, the rays tend to stop before the actual surface, which introduces different Relaxed Cone Stepping for Relief Mapping",
"title": ""
},
{
"docid": "4c462c0b8fe98bfb9c79f9d1bc497748",
"text": "This brief shows that a conventional semi-custom design-flow based on a positive feedback adiabatic logic (PFAL) cell library allows any VLSI designer to design and verify complex adiabatic systems (e.g., arithmetic units) in a short time and easy way, thus, enjoying the energy reduction benefits of adiabatic logic. A family of semi-custom PFAL carry lookahead adders and parallel multipliers were designed in a 0.6-/spl mu/m CMOS technology and verified. Post-layout simulations show that semi-custom adiabatic arithmetic units can save energy a factor 17 at 10 MHz and about 7 at 100 MHz, as compared to a logically equivalent static CMOS implementation. The energy saving obtained is also better if compared to other custom adiabatic circuit realizations and maintains high values (3/spl divide/6) even when the losses in power-clock generation are considered.",
"title": ""
},
{
"docid": "20f98a15433514dc5aa76110f68a71ba",
"text": "We describe a case of secondary syphilis of the tongue in which the main clinical presentation of the disease was similar to oral hairy leukoplakia. In a man who was HIV seronegative, the first symptom was a dryness of the throat followed by a feeling of foreign body in the tongue. Lesions were painful without cutaneous manifestations of secondary syphilis. IgM-fluorescent treponemal antibody test and typical serologic parameters promptly led to the diagnosis of secondary syphilis. We initiated an appropriate antibiotic therapy using benzathine penicillin, which induced healing of the tongue lesions. The differential diagnosis of this lesion may include oral squamous carcinoma, leukoplakia, candidosis, lichen planus, and, especially, hairy oral leukoplakia. This case report emphasizes the importance of considering secondary syphilis in the differential diagnosis of hairy oral leukoplakia. Depending on the clinical picture, the possibility of syphilis should not be overlooked in the differential diagnosis of many diseases of the oral mucosa.",
"title": ""
},
{
"docid": "479c250bd9284ab1a216a11fa5199f61",
"text": "Two Gram-stain-negative, non-motile, non-spore-forming, rod-shaped bacterial strains, designated 3B-2(T) and 10AO(T), were isolated from a sand sample collected from the west coast of the Korean peninsula by using low-nutrient media, and their taxonomic positions were investigated in a polyphasic study. The strains did not grow on marine agar. They grew optimally at 30 °C and pH 6.5-7.5. Strains 3B-2(T) and 10AO(T) shared 97.5 % 16S rRNA gene sequence similarity and mean level of DNA-DNA relatedness of 12 %. In phylogenetic trees based on 16S rRNA gene sequences, strains 3B-2(T) and 10AO(T), together with several uncultured bacterial clones, formed independent lineages within the evolutionary radiation encompassed by the phylum Bacteroidetes. Strains 3B-2(T) and 10AO(T) contained MK-7 as the predominant menaquinone and iso-C(15 : 0) and C(16 : 1)ω5c as the major fatty acids. The DNA G+C contents of strains 3B-2(T) and 10AO(T) were 42.8 and 44.6 mol%, respectively. Strains 3B-2(T) and 10AO(T) exhibited very low levels of 16S rRNA gene sequence similarity (<85.0 %) to the type strains of recognized bacterial species. These data were sufficient to support the proposal that the novel strains should be differentiated from previously known genera of the phylum Bacteroidetes. On the basis of the data presented, we suggest that strains 3B-2(T) and 10AO(T) represent two distinct novel species of a new genus, for which the names Ohtaekwangia koreensis gen. nov., sp. nov. (the type species; type strain 3B-2(T) = KCTC 23018(T) = CCUG 58939(T)) and Ohtaekwangia kribbensis sp. nov. (type strain 10AO(T) = KCTC 23019(T) = CCUG 58938(T)) are proposed.",
"title": ""
},
{
"docid": "a72837815d412113856077a6dc7a868d",
"text": "fast align is a simple, fast, and efficient approach for word alignment based on the IBM model 2. fast align performs well for language pairs with relatively similar word orders; however, it does not perform well for language pairs with drastically different word orders. We propose a segmenting-reversing reordering process to solve this problem by alternately applying fast align and reordering source sentences during training. Experimental results with JapaneseEnglish translation demonstrate that the proposed approach improves the performance of fast align significantly without the loss of efficiency. Experiments using other languages are also reported.",
"title": ""
},
{
"docid": "239e37736832f6f0de050ed1749ba648",
"text": "An approach for capturing and modeling individual entertainment (“fun”) preferences is applied to users of the innovative Playware playground, an interactive physical playground inspired by computer games, in this study. The goal is to construct, using representative statistics computed from children’s physiological signals, an estimator of the degree to which games provided by the playground engage the players. For this purpose children’s heart rate (HR) signals, and their expressed preferences of how much “fun” particular game variants are, are obtained from experiments using games implemented on the Playware playground. A comprehensive statistical analysis shows that children’s reported entertainment preferences correlate well with specific features of the HR signal. Neuro-evolution techniques combined with feature set selection methods permit the construction of user models that predict reported entertainment preferences given HR features. These models are expressed as artificial neural networks and are demonstrated and evaluated on two Playware games and two control tasks requiring physical activity. The best network is able to correctly match expressed preferences in 64% of cases on previously unseen data (p−value 6 · 10−5). The generality of the methodology, its limitations, its usability as a real-time feedback mechanism for entertainment augmentation and as a validation tool are discussed.",
"title": ""
},
{
"docid": "8ce46c28f967ef5ab76548630983748a",
"text": "Current neuroimaging software offer users an incredible opportunity to analyze their data in different ways, with different underlying assumptions. Several sophisticated software packages (e.g., AFNI, BrainVoyager, FSL, FreeSurfer, Nipy, R, SPM) are used to process and analyze large and often diverse (highly multi-dimensional) data. However, this heterogeneous collection of specialized applications creates several issues that hinder replicable, efficient, and optimal use of neuroimaging analysis approaches: (1) No uniform access to neuroimaging analysis software and usage information; (2) No framework for comparative algorithm development and dissemination; (3) Personnel turnover in laboratories often limits methodological continuity and training new personnel takes time; (4) Neuroimaging software packages do not address computational efficiency; and (5) Methods sections in journal articles are inadequate for reproducing results. To address these issues, we present Nipype (Neuroimaging in Python: Pipelines and Interfaces; http://nipy.org/nipype), an open-source, community-developed, software package, and scriptable library. Nipype solves the issues by providing Interfaces to existing neuroimaging software with uniform usage semantics and by facilitating interaction between these packages using Workflows. Nipype provides an environment that encourages interactive exploration of algorithms, eases the design of Workflows within and between packages, allows rapid comparative development of algorithms and reduces the learning curve necessary to use different packages. Nipype supports both local and remote execution on multi-core machines and clusters, without additional scripting. Nipype is Berkeley Software Distribution licensed, allowing anyone unrestricted usage. 
An open, community-driven development philosophy allows the software to quickly adapt and address the varied needs of the evolving neuroimaging community, especially in the context of increasing demand for reproducible research.",
"title": ""
}
] | scidocsrr |
ef31e3bb3c357c2731f139175f9f9126 | An active compliance controller for quadruped trotting | [
{
"docid": "a258c6b5abf18cb3880e4bc7a436c887",
"text": "We propose a reactive controller framework for robust quadrupedal locomotion, designed to cope with terrain irregularities, trajectory tracking errors and poor state estimation. The framework comprises two main modules: One related to the generation of elliptic trajectories for the feet and the other for control of the stability of the whole robot. We propose a task space CPG-based trajectory generation that can be modulated according to terrain irregularities and the posture of the robot trunk. To improve the robot's stability, we implemented a null space based attitude control for the trunk and a push recovery algorithm based on the concept of capture points. Simulations and experimental results on the hydraulically actuated quadruped robot HyQ will be presented to demonstrate the effectiveness of our framework.",
"title": ""
},
{
"docid": "1495ed50a24703566b2bda35d7ec4931",
"text": "This paper examines the passive dynamics of quadrupedal bounding. First, an unexpected difference between local and global behavior of the forward speed versus touchdown angle in the selfstabilized Spring Loaded Inverted Pendulum (SLIP) model is exposed and discussed. Next, the stability properties of a simplified sagittal plane model of our Scout II quadrupedal robot are investigated. Despite its simplicity, this model captures the targeted steady state behavior of Scout II without dependence on the fine details of the robot structure. Two variations of the bounding gait, which are observed experimentally in Scout II, are considered. Surprisingly, numerical return map studies reveal that passive generation of a large variety of cyclic bounding motion is possible. Most strikingly, local stability analysis shows that the dynamics of the open loop passive system alone can confer stability to the motion! These results can be used in developing a general control methodology for legged robots, resulting from the synthesis of feedforward and feedback models that take advantage of the mechanical sysPortions of this paper have previously appeared in conference publications Poulakakis, Papadopoulos, and Buehler (2003) and Poulakakis, Smith, and Buehler (2005b). The first and third authors were with the Centre for Intelligent Machines at McGill University when this work was performed. Address all correspondence related to this paper to the first author. The International Journal of Robotics Research Vol. 25, No. 7, July 2006, pp. 669-687 DOI: 10.1177/0278364906066768 ©2006 SAGE Publications Figures appear in color online: http://ijr.sagepub.com tem, and might explain the success of simple, open loop bounding controllers on our experimental robot. KEY WORDS—passive dynamics, bounding gait, dynamic running, quadrupedal robot",
"title": ""
},
{
"docid": "956ffd90cc922e77632b8f9f79f42a98",
"text": "Energy efficient actuators with adjustable stiffness: a review on AwAS, AwAS-II and CompACT VSA changing stiffness based on lever mechanism Amir jafari Nikos Tsagarakis Darwin G Caldwell Article information: To cite this document: Amir jafari Nikos Tsagarakis Darwin G Caldwell , (2015),\"Energy efficient actuators with adjustable stiffness: a review on AwAS, AwAS-II and CompACT VSA changing stiffness based on lever mechanism\", Industrial Robot: An International Journal, Vol. 42 Iss 3 pp. Permanent link to this document: http://dx.doi.org/10.1108/IR-12-2014-0433",
"title": ""
}
] | [
{
"docid": "3bc9e621a0cfa7b8791ae3fb94eff738",
"text": "This paper deals with environment perception for automobile applications. Environment perception comprises measuring the surrounding field with onboard sensors such as cameras, radar, lidars, etc., and signal processing to extract relevant information for the planned safety or assistance function. Relevant information is primarily supplied using two well-known methods, namely, object based and grid based. In the introduction, we discuss the advantages and disadvantages of the two methods and subsequently present an approach that combines the two methods to achieve better results. The first part outlines how measurements from stereo sensors can be mapped onto an occupancy grid using an appropriate inverse sensor model. We employ the Dempster-Shafer theory to describe the occupancy grid, which has certain advantages over Bayes' theorem. Furthermore, we generate clusters of grid cells that potentially belong to separate obstacles in the field. These clusters serve as input for an object-tracking framework implemented with an interacting multiple-model estimator. Thereby, moving objects in the field can be identified, and this, in turn, helps update the occupancy grid more effectively. The first experimental results are illustrated, and the next possible research intentions are also discussed.",
"title": ""
},
{
"docid": "78c89f8aec24989737575c10b6bbad90",
"text": "News topics, which are constructed from news stories using the techniques of Topic Detection and Tracking (TDT), bring convenience to users who intend to see what is going on through the Internet. However, it is almost impossible to view all the generated topics, because of the large amount. So it will be helpful if all topics are ranked and the top ones, which are both timely and important, can be viewed with high priority. Generally, topic ranking is determined by two primary factors. One is how frequently and recently a topic is reported by the media; the other is how much attention users pay to it. Both media focus and user attention varies as time goes on, so the effect of time on topic ranking has already been included. However, inconsistency exists between both factors. In this paper, an automatic online news topic ranking algorithm is proposed based on inconsistency analysis between media focus and user attention. News stories are organized into topics, which are ranked in terms of both media focus and user attention. Experiments performed on practical Web datasets show that the topic ranking result reflects the influence of time, the media and users. The main contributions of this paper are as follows. First, we present the quantitative measure of the inconsistency between media focus and user attention, which provides a basis for topic ranking and an experimental evidence to show that there is a gap between what the media provide and what users view. Second, to the best of our knowledge, it is the first attempt to synthesize the two factors into one algorithm for automatic online topic ranking.",
"title": ""
},
{
"docid": "7b44c4ec18d01f46fdd513780ba97963",
"text": "This paper presents a robust approach for road marking detection and recognition from images captured by an embedded camera mounted on a car. Our method is designed to cope with illumination changes, shadows, and harsh meteorological conditions. Furthermore, the algorithm can effectively group complex multi-symbol shapes into an individual road marking. For this purpose, the proposed technique relies on MSER features to obtain candidate regions which are further merged using density-based clustering. Finally, these regions of interest are recognized using machine learning approaches. Worth noting, the algorithm is versatile since it does not utilize any prior information about lane position or road space. The proposed method compares favorably to other existing works through a large number of experiments on an extensive road marking dataset.",
"title": ""
},
{
"docid": "7e422bc9e691d552543c245e7c154cbf",
"text": "Personality assessment and, specifically, the assessment of personality disorders have traditionally been indifferent to computational models. Computational personality is a new field that involves the automatic classification of individuals' personality traits that can be compared against gold-standard labels. In this context, we introduce a new vectorial semantics approach to personality assessment, which involves the construction of vectors representing personality dimensions and disorders, and the automatic measurements of the similarity between these vectors and texts written by human subjects. We evaluated our approach by using a corpus of 2468 essays written by students who were also assessed through the five-factor personality model. To validate our approach, we measured the similarity between the essays and the personality vectors to produce personality disorder scores. These scores and their correspondence with the subjects' classification of the five personality factors reproduce patterns well-documented in the psychological literature. In addition, we show that, based on the personality vectors, we can predict each of the five personality factors with high accuracy.",
"title": ""
},
{
"docid": "f6099a1e6641d0a93c764efef120dd53",
"text": "For the past two decades, the security community has been fighting malicious programs for Windows-based operating systems. However, the recent surge in adoption of embedded devices and the IoT revolution are rapidly changing the malware landscape. Embedded devices are profoundly different than traditional personal computers. In fact, while personal computers run predominantly on x86-flavored architectures, embedded systems rely on a variety of different architectures. In turn, this aspect causes a large number of these systems to run some variants of the Linux operating system, pushing malicious actors to give birth to \"\"Linux malware.\"\" To the best of our knowledge, there is currently no comprehensive study attempting to characterize, analyze, and understand Linux malware. The majority of resources on the topic are available as sparse reports often published as blog posts, while the few systematic studies focused on the analysis of specific families of malware (e.g., the Mirai botnet) mainly by looking at their network-level behavior, thus leaving the main challenges of analyzing Linux malware unaddressed. This work constitutes the first step towards filling this gap. After a systematic exploration of the challenges involved in the process, we present the design and implementation details of the first malware analysis pipeline specifically tailored for Linux malware. We then present the results of the first large-scale measurement study conducted on 10,548 malware samples (collected over a time frame of one year) documenting detailed statistics and insights that can help directing future work in the area.",
"title": ""
},
{
"docid": "abc48ae19e2ea1e1bb296ff0ccd492a2",
"text": "This paper reports the results achieved by Carnegie Mellon University on the Topic Detection and Tracking Project’s secondyear evaluation for the segmentation, detection, and tracking tasks. Additional post-evaluation improvements are also",
"title": ""
},
{
"docid": "62cf2ae97e48e6b57139f305d616ec1b",
"text": "Many analytics applications generate mixed workloads, i.e., workloads comprised of analytical tasks with different processing characteristics including data pre-processing, SQL, and iterative machine learning algorithms. Examples of such mixed workloads can be found in web data analysis, social media analysis, and graph analytics, where they are executed repetitively on large input datasets (e.g., Find the average user time spent on the top 10 most popular web pages on the UK domain web graph.). Scale-out processing engines satisfy the needs of these applications by distributing the data and the processing task efficiently among multiple workers that are first reserved and then used to execute the task in parallel on a cluster of machines. Finding the resource allocation that can complete the workload execution within a given time constraint, and optimizing cluster resource allocations among multiple analytical workloads motivates the need for estimating the runtime of the workload before its actual execution. Predicting runtime of analytical workloads is a challenging problem as runtime depends on a large number of factors that are hard to model a priori execution. These factors can be summarized as workload characteristics (data statistics and processing costs) , the execution configuration (deployment, resource allocation, and software settings), and the cost model that captures the interplay among all of the above parameters. While conventional cost models proposed in the context of query optimization can assess the relative order among alternative SQL query plans, they are not aimed to estimate absolute runtime. Additionally, conventional models are ill-equipped to estimate the runtime of iterative analytics that are executed repetitively until convergence and that of user defined data pre-processing operators which are not “owned” by the underlying data management system. 
This thesis demonstrates that runtime for data analytics can be predicted accurately by breaking the analytical tasks into multiple processing phases, collecting key input features during a reference execution on a sample of the dataset, and then using the features to build per-phase cost models. We develop prediction models for three categories of data analytics produced by social media applications: iterative machine learning, data pre-processing, and reporting SQL. The prediction framework for iterative analytics, PREDIcT, addresses the challenging problem of estimating the number of iterations, and per-iteration runtime for a class of iterative machine learning algorithms that are run repetitively until convergence. The hybrid prediction models we develop for data pre-processing tasks and for reporting SQL combine the benefits of analytical modeling with that of machine learning-based models. Through a",
"title": ""
},
{
"docid": "bfe76736623dfc3271be4856f5dc2eef",
"text": "Fact-related information contained in fictional narratives may induce substantial changes in readers’ real-world beliefs. Current models of persuasion through fiction assume that these effects occur because readers are psychologically transported into the fictional world of the narrative. Contrary to general dual-process models of persuasion, models of persuasion through fiction also imply that persuasive effects of fictional narratives are persistent and even increase over time (absolute sleeper effect). In an experiment designed to test this prediction, 81 participants read either a fictional story that contained true as well as false assertions about realworld topics or a control story. There were large short-term persuasive effects of false information, and these effects were even larger for a group with a two-week assessment delay. Belief certainty was weakened immediately after reading but returned to baseline level after two weeks, indicating that beliefs acquired by reading fictional narratives are integrated into realworld knowledge.",
"title": ""
},
{
"docid": "03c74ae78bfe862499c4cb1e18a58ae7",
"text": "Age-associated disease and disability are placing a growing burden on society. However, ageing does not affect people uniformly. Hence, markers of the underlying biological ageing process are needed to help identify people at increased risk of age-associated physical and cognitive impairments and ultimately, death. Here, we present such a biomarker, ‘brain-predicted age’, derived using structural neuroimaging. Brain-predicted age was calculated using machine-learning analysis, trained on neuroimaging data from a large healthy reference sample (N=2001), then tested in the Lothian Birth Cohort 1936 (N=669), to determine relationships with age-associated functional measures and mortality. Having a brain-predicted age indicative of an older-appearing brain was associated with: weaker grip strength, poorer lung function, slower walking speed, lower fluid intelligence, higher allostatic load and increased mortality risk. Furthermore, while combining brain-predicted age with grey matter and cerebrospinal fluid volumes (themselves strong predictors) not did improve mortality risk prediction, the combination of brain-predicted age and DNA-methylation-predicted age did. This indicates that neuroimaging and epigenetics measures of ageing can provide complementary data regarding health outcomes. Our study introduces a clinically-relevant neuroimaging ageing biomarker and demonstrates that combining distinct measurements of biological ageing further helps to determine risk of age-related deterioration and death.",
"title": ""
},
{
"docid": "29ce9730d55b55b84e195983a8506e5c",
"text": "In situ Raman spectroscopy is an extremely valuable technique for investigating fundamental reactions that occur inside lithium rechargeable batteries. However, specialized in situ Raman spectroelectrochemical cells must be constructed to perform these experiments. These cells are often quite different from the cells used in normal electrochemical investigations. More importantly, the number of cells is usually limited by construction costs; thus, routine usage of in situ Raman spectroscopy is hampered for most laboratories. This paper describes a modification to industrially available coin cells that facilitates routine in situ Raman spectroelectrochemical measurements of lithium batteries. To test this strategy, in situ Raman spectroelectrochemical measurements are performed on Li//V2O5 cells. Various phases of Li(x)V2O5 could be identified in the modified coin cells with Raman spectroscopy, and the electrochemical cycling performance between in situ and unmodified cells is nearly identical.",
"title": ""
},
{
"docid": "e244cbd076ea62b4d720378c2adf4438",
"text": "This paper introduces flash organizations: crowds structured like organizations to achieve complex and open-ended goals. Microtask workflows, the dominant crowdsourcing structures today, only enable goals that are so simple and modular that their path can be entirely pre-defined. We present a system that organizes crowd workers into computationally-represented structures inspired by those used in organizations - roles, teams, and hierarchies - which support emergent and adaptive coordination toward open-ended goals. Our system introduces two technical contributions: 1) encoding the crowd's division of labor into de-individualized roles, much as movie crews or disaster response teams use roles to support coordination between on-demand workers who have not worked together before; and 2) reconfiguring these structures through a model inspired by version control, enabling continuous adaptation of the work and the division of labor. We report a deployment in which flash organizations successfully carried out open-ended and complex goals previously out of reach for crowdsourcing, including product design, software development, and game production. This research demonstrates digitally networked organizations that flexibly assemble and reassemble themselves from a globally distributed online workforce to accomplish complex work.",
"title": ""
},
{
"docid": "8baddf0d82411d18a77be03759101c82",
"text": "Deep convolutional neural networks (DCNNs) have been successfully used in many computer vision tasks. Previous works on DCNN acceleration usually use a fixed computation pattern for diverse DCNN models, leading to imbalance between power efficiency and performance. We solve this problem by designing a DCNN acceleration architecture called deep neural architecture (DNA), with reconfigurable computation patterns for different models. The computation pattern comprises a data reuse pattern and a convolution mapping method. For massive and different layer sizes, DNA reconfigures its data paths to support a hybrid data reuse pattern, which reduces total energy consumption by 5.9~8.4 times over conventional methods. For various convolution parameters, DNA reconfigures its computing resources to support a highly scalable convolution mapping method, which obtains 93% computing resource utilization on modern DCNNs. Finally, a layer-based scheduling framework is proposed to balance DNA’s power efficiency and performance for different DCNNs. DNA is implemented in the area of 16 mm2 at 65 nm. On the benchmarks, it achieves 194.4 GOPS at 200 MHz and consumes only 479 mW. The system-level power efficiency is 152.9 GOPS/W (considering DRAM access power), which outperforms the state-of-the-art designs by one to two orders.",
"title": ""
},
{
"docid": "4def0dc478dfb5ddb5a0ec59ec7433f5",
"text": "A system that enables continuous slip compensation for a Mars rover has been designed, implemented, and field-tested. This system is composed of several components that allow the rover to accurately and continuously follow a designated path, compensate for slippage, and reach intended goals in high-slip environments. These components include: visual odometry, vehicle kinematics, a Kalman filter pose estimator, and a slip compensation/path follower. Visual odometry tracks distinctive scene features in stereo imagery to estimate rover motion between successively acquired stereo image pairs. The vehicle kinematics for a rocker-bogie suspension system estimates motion by measuring wheel rates, and rocker, bogie, and steering angles. The Kalman filter merges data from an inertial measurement unit (IMU) and visual odometry. This merged estimate is then compared to the kinematic estimate to determine how much slippage has occurred, taking into account estimate uncertainties. If slippage has occurred then a slip vector is calculated by differencing the current Kalman filter estimate from the kinematic estimate. This slip vector is then used to determine the necessary wheel velocities and steering angles to compensate for slip and follow the desired path.",
"title": ""
},
{
"docid": "29f8b647d8f8de484f2b8f164b9e5add",
"text": "is the latest release of a versatile and very well optimized package for molecular simulation. Much effort has been devoted to achieving extremely high performance on both workstations and parallel computers. The design includes an extraction of vi-rial and periodic boundary conditions from the loops over pairwise interactions, and special software routines to enable rapid calculation of x –1/2. Inner loops are generated automatically in C or Fortran at compile time, with optimizations adapted to each architecture. Assembly loops using SSE and 3DNow! Multimedia instructions are provided for x86 processors, resulting in exceptional performance on inexpensive PC workstations. The interface is simple and easy to use (no scripting language), based on standard command line arguments with self-explanatory functionality and integrated documentation. All binary files are independent of hardware endian and can be read by versions of GROMACS compiled using different floating-point precision. A large collection of flexible tools for trajectory analysis is included, with output in the form of finished Xmgr/Grace graphs. A basic trajectory viewer is included, and several external visualization tools can read the GROMACS trajectory format. Starting with version 3.0, GROMACS is available under the GNU General Public License from",
"title": ""
},
{
"docid": "528796e22fc248de78a91cc089467c04",
"text": "Automatic recognition of emotional states from human speech is a current research topic with a wide range. In this paper an attempt has been made to recognize and classify the speech emotion from three language databases, namely, Berlin, Japan and Thai emotion databases. Speech features consisting of Fundamental Frequency (F0), Energy, Zero Crossing Rate (ZCR), Linear Predictive Coding (LPC) and Mel Frequency Cepstral Coefficient (MFCC) from short-time wavelet signals are comprehensively investigated. In this regard, Support Vector Machines (SVM) is utilized as the classification model. Empirical experimentation shows that the combined features of F0, Energy and MFCC provide the highest accuracy on all databases provided using the linear kernel. It gives 89.80%, 93.57% and 98.00% classification accuracy for Berlin, Japan and Thai emotions databases, respectively.",
"title": ""
},
{
"docid": "88cb8c2f7f4fd5cdc95cc8e48faa3cb7",
"text": "Prediction or prognostication is at the core of modern evidence-based medicine. Prediction of overall mortality and cardiovascular disease can be improved by a systematic evaluation of measurements from large-scale epidemiological studies or by using nested sampling designs to discover new markers from omics technologies. In study I, we investigated if prediction measures such as calibration, discrimination and reclassification could be calculated within traditional sampling designs and which of these designs were the most efficient. We found that is possible to calculate prediction measures by using a proper weighting system and that a stratified casecohort design is a reasonable choice both in terms of efficiency and simplicity. In study II, we investigated the clinical utility of several genetic scores for incident coronary heart disease. We found that genetic information could be of clinical value in improving the allocation of patients to correct risk strata and that the assessment of a genetic risk score among intermediate risk subjects could help to prevent about one coronary heart disease event every 318 people screened. In study III, we explored the association between circulating metabolites and incident coronary heart disease. We found four new metabolites associated with coronary heart disease independently of established cardiovascular risk factors and with evidence of clinical utility. By using genetic information we determined a potential causal effect on coronary heart disease of one of these novel metabolites. In study IV, we compared a large number of demographics, health and lifestyle measurements for association with all-cause and cause-specific mortality. By ranking measurements in terms of their predictive abilities we could provide new insights about their relative importance, as well as reveal some unexpected associations. 
Moreover we developed and validated a prediction score for five-year mortality with good discrimination ability and calibrated it for the entire UK population. In conclusion, we applied a translational approach spanning from the discovery of novel biomarkers to their evaluation in terms of clinical utility. We combined this effort with methodological improvements aimed to expand prediction measures in settings that were not previously explored. We identified promising novel metabolomics markers for cardiovascular disease and supported the potential clinical utility of a genetic score in primary prevention. Our results might fuel future studies aimed to implement these findings in clinical practice.",
"title": ""
},
{
"docid": "5ee410ddc75170aa38c39281a8d86827",
"text": "Research in automotive safety leads to the conclusion that modern vehicle should utilize active and passive sensors for the recognition of the environment surrounding them. Thus, the development of tracking systems utilizing efficient state estimators is very important. In this case, problems such as moving platform carrying the sensor and maneuvering targets could introduce large errors in the state estimation and in some cases can lead to the divergence of the filter. In order to avoid sub-optimal performance, the unscented Kalman filter is chosen, while a new curvilinear model is applied which takes into account both the turn rate of the detected object and its tangential acceleration, leading to a more accurate modeling of its movement. The performance of the unscented filter using the proposed model in the case of automotive applications is proven to be superior compared to the performance of the extended and linear Kalman filter.",
"title": ""
},
{
"docid": "f47fcbd6412384b85ef458fd3e6b27f3",
"text": "In this paper, we consider positioning with observed-time-difference-of-arrival (OTDOA) for a device deployed in long-term-evolution (LTE) based narrow-band Internet-of-things (NB-IoT) systems. We propose an iterative expectation- maximization based successive interference cancellation (EM-SIC) algorithm to jointly consider estimations of residual frequency- offset (FO), fading-channel taps and time-of- arrival (ToA) of the first arrival-path for each of the detected cells. In order to design a low complexity ToA detector and also due to the limits of low-cost analog circuits, we assume an NB-IoT device working at a low-sampling rate such as 1.92 MHz or lower. The proposed EM-SIC algorithm comprises two stages to detect ToA, based on which OTDOA can be calculated. In a first stage, after running the EM-SIC block a predefined number of iterations, a coarse ToA is estimated for each of the detected cells. Then in a second stage, to improve the ToA resolution, a low-pass filter is utilized to interpolate the correlations of time-domain PRS signal evaluated at a low sampling-rate to a high sampling-rate such as 30.72 MHz. To keep low-complexity, only the correlations inside a small search window centered at the coarse ToA estimates are upsampled. Then, the refined ToAs are estimated based on upsampled correlations. If at least three cells are detected, with OTDOA and the locations of detected cell sites, the position of the NB-IoT device can be estimated. We show through numerical simulations that, the proposed EM-SIC based ToA detector is robust against impairments introduced by inter-cell interference, fading-channel and residual FO. Thus significant signal-to-noise (SNR) gains are obtained over traditional ToA detectors that do not consider these impairments when positioning a device.",
"title": ""
},
{
"docid": "36d7f776d7297f67a136825e9628effc",
"text": "Random walks are at the heart of many existing network embedding methods. However, such algorithms have many limitations that arise from the use of random walks, e.g., the features resulting from these methods are unable to transfer to new nodes and graphs as they are tied to vertex identity. In this work, we introduce the Role2Vec framework which uses the flexible notion of attributed random walks, and serves as a basis for generalizing existing methods such as DeepWalk, node2vec, and many others that leverage random walks. Our proposed framework enables these methods to be more widely applicable for both transductive and inductive learning as well as for use on graphs with attributes (if available). This is achieved by learning functions that generalize to new nodes and graphs. We show that our proposed framework is effective with an average AUC improvement of 16.55% while requiring on average 853x less space than existing methods on a variety of graphs.",
"title": ""
}
] | scidocsrr |
1bb113abb6663a85e1fe4ff40f104804 | Single Switched Capacitor Battery Balancing System Enhancements | [
{
"docid": "b6bbd83da68fbf1d964503fb611a2be5",
"text": "Battery systems are affected by many factors, the most important one is the cells unbalancing. Without the balancing system, the individual cell voltages will differ over time, battery pack capacity will decrease quickly. That will result in the fail of the total battery system. Thus cell balancing acts an important role on the battery life preserving. Different cell balancing methodologies have been proposed for battery pack. This paper presents a review and comparisons between the different proposed balancing topologies for battery string based on MATLAB/Simulink® simulation. The comparison carried out according to circuit design, balancing simulation, practical implementations, application, balancing speed, complexity, cost, size, balancing system efficiency, voltage/current stress … etc.",
"title": ""
},
{
"docid": "90c3543eca7a689188725e610e106ce9",
"text": "Lithium-based battery technology offers performance advantages over traditional battery technologies at the cost of increased monitoring and controls overhead. Multiple-cell Lead-Acid battery packs can be equalized by a controlled overcharge, eliminating the need to periodically adjust individual cells to match the rest of the pack. Lithium-based based batteries cannot be equalized by an overcharge, so alternative methods are required. This paper discusses several cell-balancing methodologies. Active cell balancing methods remove charge from one or more high cells and deliver the charge to one or more low cells. Dissipative techniques find the high cells in the pack, and remove excess energy through a resistive element until their charges match the low cells. This paper presents the theory of charge balancing techniques and the advantages and disadvantages of the presented methods. INTRODUCTION Lithium Ion and Lithium Polymer battery chemistries cannot be overcharged without damaging active materials [1-5]. The electrolyte breakdown voltage is precariously close to the fully charged terminal voltage, typically in the range of 4.1 to 4.3 volts/cell. Therefore, careful monitoring and controls must be implemented to avoid any single cell from experiencing an overvoltage due to excessive charging. Single lithium-based cells require monitoring so that cell voltage does not exceed predefined limits of the chemistry. Series connected lithium cells pose a more complex problem: each cell in the string must be monitored and controlled. Even though the pack voltage may appear to be within acceptable limits, one cell of the series string may be experiencing damaging voltage due to cell-to-cell imbalances. Traditionally, cell-to-cell imbalances in lead-acid batteries have been solved by controlled overcharging [6,7]. Leadacid batteries can be brought into overcharge conditions without permanent cell damage, as the excess energy is released by gassing. 
This gassing mechanism is the natural method for balancing a series string of lead acid battery cells. Other chemistries, such as NiMH, exhibit similar natural cell-to-cell balancing mechanisms [8]. Because a Lithium battery cannot be overcharged, there is no natural mechanism for cell equalization. Therefore, an alternative method must be employed. This paper discusses three categories of cell balancing methodologies: charging methods, active methods, and passive methods. Cell balancing is necessary for highly transient lithium battery applications, especially those applications where charging occurs frequently, such as regenerative braking in electric vehicle (EV) or hybrid electric vehicle (HEV) applications. Regenerative braking can cause problems for Lithium Ion batteries because the instantaneous regenerative braking current inrush can cause battery voltage to increase suddenly, possibly over the electrolyte breakdown threshold voltage. Deviations in cell behaviors generally occur because of two phenomena: changes in internal impedance or cell capacity reduction due to aging. In either case, if one cell in a battery pack experiences deviant cell behavior, that cell becomes a likely candidate to overvoltage during high power charging events. Cells with reduced capacity or high internal impedance tend to have large voltage swings when charging and discharging. For HEV applications, it is necessary to cell balance lithium chemistry because of this overvoltage potential. For EV applications, cell balancing is desirable to obtain maximum usable capacity from the battery pack. During charging, an out-of-balance cell may prematurely approach the end-of-charge voltage (typically 4.1 to 4.3 volts/cell) and trigger the charger to turn off. Cell balancing is useful to control the higher voltage cells until the rest of the cells can catch up. In this way, the charger is not turned off until the cells simultaneously reach the end-of-charge voltage. 
END-OF-CHARGE CELL BALANCING METHODS Typically, cell-balancing methods employed during and at end-of-charging are useful only for electric vehicle purposes. This is because electric vehicle batteries are generally fully charged between each use cycle. Hybrid electric vehicle batteries may or may not be maintained fully charged, resulting in unpredictable end-of-charge conditions to enact the balancing mechanism. Hybrid vehicle batteries also require both high power charge (regenerative braking) and discharge (launch assist or boost) capabilities. For this reason, their batteries are usually maintained at a SOC that can discharge the required power but still have enough headroom to accept the necessary regenerative power. To fully charge the HEV battery for cell balancing would diminish charge acceptance capability (regenerative braking). CHARGE SHUNTING The charge-shunting cell balancing method selectively shunts the charging current around each cell as they become fully charged (Figure 1). This method is most efficiently employed on systems with known charge rates. The shunt resistor R is sized to shunt exactly the charging current I when the fully charged cell voltage V is reached. If the charging current decreases, resistor R will discharge the shunted cell. To avoid extremely large power dissipations due to R, this method is best used with stepped-current chargers with a small end-of-charge current.",
"title": ""
},
{
"docid": "b05df5ff16750040a499f3c62fed2e3f",
"text": "The automobile industry is progressing toward hybrid, plug-in hybrid, and fully electric vehicles in their future car models. The energy storage unit is one of the most important blocks in the power train of future electric-drive vehicles. Batteries and/or ultracapacitors are the most prominent storage systems utilized so far. Hence, their reliability during the lifetime of the vehicle is of great importance. Charge equalization of series-connected batteries or ultracapacitors is essential due to the capacity imbalances stemming from manufacturing, ensuing driving environment, and operational usage. Double-tiered capacitive charge shuttling technique is introduced and applied to a battery system in order to balance the battery-cell voltages. Parameters in the system are varied, and their effects on the performance of the system are determined. Results are compared to a single-tiered approach. MATLAB simulation shows a substantial improvement in charge transport using the new topology. Experimental results verifying simulation are presented.",
"title": ""
}
] | [
{
"docid": "be4defd26cf7c7a29a85da2e15132be9",
"text": "The quantity of rooftop solar photovoltaic (PV) installations has grown rapidly in the US in recent years. There is a strong interest among decision makers in obtaining high quality information about rooftop PV, such as the locations, power capacity, and energy production of existing rooftop PV installations. Solar PV installations are typically connected directly to local power distribution grids, and therefore it is important for the reliable integration of solar energy to have information at high geospatial resolutions: by county, zip code, or even by neighborhood. Unfortunately, traditional means of obtaining this information, such as surveys and utility interconnection filings, are limited in availability and geospatial resolution. In this work a new approach is investigated where a computer vision algorithm is used to detect rooftop PV installations in high resolution color satellite imagery and aerial photography. It may then be possible to use the identified PV images to estimate power capacity and energy production for each array of panels, yielding a fast, scalable, and inexpensive method to obtain rooftop PV estimates for regions of any size. The aim of this work is to investigate the feasibility of the first step of the proposed approach: detecting rooftop PV in satellite imagery. Towards this goal, a collection of satellite rooftop images is used to develop and evaluate a detection algorithm. The results show excellent detection performance on the testing dataset and that, with further development, the proposed approach may be an effective solution for fast and scalable rooftop PV information collection.",
"title": ""
},
{
"docid": "e947cf1b4670c10f2453b9012078c3b5",
"text": "BACKGROUND\nDyadic suicide pacts are cases in which two individuals (and very rarely more) agree to die together. These account for fewer than 1% of all completed suicides.\n\n\nOBJECTIVE\nThe authors describe two men in a long-term domestic partnership who entered into a suicide pact and, despite utilizing a high-lethality method (simultaneous arm amputation with a power saw), survived.\n\n\nMETHOD\nThe authors investigated the psychiatric, psychological, and social causes of suicide pacts by delving into the history of these two participants, who displayed a very high degree of suicidal intent. Psychiatric interviews and a family conference call, along with the strong support of one patient's family, were elicited.\n\n\nRESULTS\nThe patients, both HIV-positive, showed high levels of depression and hopelessness, as well as social isolation and financial hardship. With the support of his family, one patient was discharged to their care, while the other partner was hospitalized pending reunion with his partner.\n\n\nDISCUSSION\nThis case illustrates many of the key, defining features of suicide pacts that are carried out and also highlights the nature of the dependency relationship.",
"title": ""
},
{
"docid": "4073da56cc874ea71f5e8f9c1c376cf8",
"text": "AIM\nThis article reports the results of a study evaluating a preferred music listening intervention for reducing anxiety in older adults with dementia in nursing homes.\n\n\nBACKGROUND\nAnxiety can have a significant negative impact on older adults' functional status, quality of life and health care resources. However, anxiety is often under-diagnosed and inappropriately treated in those with dementia. Little is known about the use of a preferred music listening intervention for managing anxiety in those with dementia.\n\n\nDESIGN\nA quasi-experimental pretest and posttest design was used.\n\n\nMETHODS\nThis study aimed to evaluate the effectiveness of a preferred music listening intervention on anxiety in older adults with dementia in nursing home. Twenty-nine participants in the experimental group received a 30-minute music listening intervention based on personal preferences delivered by trained nursing staff in mid-afternoon, twice a week for six weeks. Meanwhile, 23 participants in the control group only received usual standard care with no music. Anxiety was measured by Rating Anxiety in Dementia at baseline and week six. Analysis of covariance (ancova) was used to determine the effectiveness of a preferred music listening intervention on anxiety at six weeks while controlling for pretest anxiety, age and marital status.\n\n\nRESULTS\nancova results indicated that older adults who received the preferred music listening had a significantly lower anxiety score at six weeks compared with those who received the usual standard care with no music (F = 12.15, p = 0.001).\n\n\nCONCLUSIONS\nPreferred music listening had a positive impact by reducing the level of anxiety in older adults with dementia.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nNursing staff can learn how to implement preferred music intervention to provide appropriate care tailored to the individual needs of older adults with dementia. 
Preferred music listening is an inexpensive and viable intervention to promote mental health of those with dementia.",
"title": ""
},
{
"docid": "4ddbdf0217d13c8b349137f1e59910d6",
"text": "In the early days, content-based image retrieval (CBIR) was studied with global features. Since 2003, image retrieval based on local descriptors (de facto SIFT) has been extensively studied for over a decade due to the advantage of SIFT in dealing with image transformations. Recently, image representations based on the convolutional neural network (CNN) have attracted increasing interest in the community and demonstrated impressive performance. Given this time of rapid evolution, this article provides a comprehensive survey of instance retrieval over the last decade. Two broad categories, SIFT-based and CNN-based methods, are presented. For the former, according to the codebook size, we organize the literature into using large/medium-sized/small codebooks. For the latter, we discuss three lines of methods, i.e., using pre-trained or fine-tuned CNN models, and hybrid methods. The first two perform a single-pass of an image to the network, while the last category employs a patch-based feature extraction scheme. This survey presents milestones in modern instance retrieval, reviews a broad selection of previous works in different categories, and provides insights on the connection between SIFT and CNN-based methods. After analyzing and comparing retrieval performance of different categories on several datasets, we discuss promising directions towards generic and specialized instance retrieval.",
"title": ""
},
{
"docid": "94bd0b242079d2b82c141e9f117154f7",
"text": "BACKGROUND\nNewborns with critical health conditions are monitored in neonatal intensive care units (NICU). In NICU, one of the most important problems that they face is the risk of brain injury. There is a need for continuous monitoring of newborn's brain function to prevent any potential brain injury. This type of monitoring should not interfere with intensive care of the newborn. Therefore, it should be non-invasive and portable.\n\n\nMETHODS\nIn this paper, a low-cost, battery operated, dual wavelength, continuous wave near infrared spectroscopy system for continuous bedside hemodynamic monitoring of neonatal brain is presented. The system has been designed to optimize SNR by optimizing the wavelength-multiplexing parameters with special emphasis on safety issues concerning burn injuries. SNR improvement by utilizing the entire dynamic range has been satisfied with modifications in analog circuitry.\n\n\nRESULTS AND CONCLUSION\nAs a result, a shot-limited SNR of 67 dB has been achieved for 10 Hz temporal resolution. The system can operate more than 30 hours without recharging when an off-the-shelf 1850 mAh-7.2 V battery is used. Laboratory tests with optical phantoms and preliminary data recorded in NICU demonstrate the potential of the system as a reliable clinical tool to be employed in the bedside regional monitoring of newborn brain metabolism under intensive care.",
"title": ""
},
{
"docid": "7364ae253ce5ace1df277f1d7f620861",
"text": "Recent advances in signal processing and the revolution by the mobile technologies have spurred several innovations in all the areas and albeit more so in home based tele-medicine. We used variational mode decomposition (VMD) based denoising on large-scale phonocardiogram (PCG) data sets and achieved better accuracy. We have also implemented a reliable, external hardware and mobile based phonocardiography system that uses VMD signal processing technique to denoise the PCG signal that visually displays the waveform and inform the end-user and send the data to cloud based analytics system.",
"title": ""
},
{
"docid": "f7424faa6dd97ebe93d1acfd5f0c9da9",
"text": "This work examines the implications of uncoupled intersections with local realworld topology and sensor setup on traffic light control approaches. Control approaches are evaluated with respect to: Traffic flow, fuel consumption and noise emission at intersections. The real-world road network of Friedrichshafen is depicted, preprocessed and the present traffic light controlled intersections are modeled with respect to state space and action space. Different strategies, containing fixed-time, gap-based and time-based control approaches as well as our deep reinforcement learning based control approach, are implemented and assessed. Our novel DRL approach allows for modeling the TLC action space, with respect to phase selection as well as selection of transition timings. It was found that real-world topologies, and thus irregularly arranged intersections have an influence on the performance of traffic light control approaches. This is even to be observed within the same intersection types (n-arm, m-phases). Moreover we could show, that these influences can be efficiently dealt with by our deep reinforcement learning based control approach.",
"title": ""
},
{
"docid": "b70a70896a3d904c25adb126b584a858",
"text": "A case of a fatal cardiac episode resulting from an unusual autoerotic practice involving the use of a vacuum cleaner, is presented. Scene investigation and autopsy findings are discussed.",
"title": ""
},
{
"docid": "4b878ffe2fd7b1f87e2f06321e5f03fa",
"text": "Physical unclonable function (PUF) leverages the immensely complex and irreproducible nature of physical structures to achieve device authentication and secret information storage. To enhance the security and robustness of conventional PUFs, reconfigurable physical unclonable functions (RPUFs) with dynamically refreshable challenge-response pairs (CRPs) have emerged recently. In this paper, we propose two novel physically reconfigurable PUF (P-RPUF) schemes that exploit the process parameter variability and programming sensitivity of phase change memory (PCM) for CRP reconfiguration and evaluation. The first proposed PCM-based P-RPUF scheme extracts its CRPs from the measurable differences of the PCM cell resistances programmed by randomly varying pulses. An imprecisely controlled regulator is used to protect the privacy of the CRP in case the configuration state of the RPUF is divulged. The second proposed PCM-based RPUF scheme produces the random response by counting the number of programming pulses required to make the cell resistance converge to a predetermined target value. The merging of CRP reconfiguration and evaluation overcomes the inherent vulnerability of P-RPUF devices to malicious prediction attacks by limiting the number of accessible CRPs between two consecutive reconfigurations to only one. Both schemes were experimentally evaluated on 180-nm PCM chips. The obtained results demonstrated their quality for refreshable key generation when appropriate fuzzy extractor algorithms are incorporated.",
"title": ""
},
{
"docid": "aa5d8162801abcc81ac542f7f2a423e5",
"text": "Prediction of popularity has profound impact for social media, since it offers opportunities to reveal individual preference and public attention from evolutionary social systems. Previous research, although achieves promising results, neglects one distinctive characteristic of social data, i.e., sequentiality. For example, the popularity of online content is generated over time with sequential post streams of social media. To investigate the sequential prediction of popularity, we propose a novel prediction framework called Deep Temporal Context Networks (DTCN) by incorporating both temporal context and temporal attention into account. Our DTCN contains three main components, from embedding, learning to predicting. With a joint embedding network, we obtain a unified deep representation of multi-modal user-post data in a common embedding space. Then, based on the embedded data sequence over time, temporal context learning attempts to recurrently learn two adaptive temporal contexts for sequential popularity. Finally, a novel temporal attention is designed to predict new popularity (the popularity of a new userpost pair) with temporal coherence across multiple time-scales. Experiments on our released image dataset with about 600K Flickr photos demonstrate that DTCN outperforms state-of-the-art deep prediction algorithms, with an average of 21.51% relative performance improvement in the popularity prediction (Spearman Ranking Correlation).",
"title": ""
},
{
"docid": "5d1e77b6b09ebac609f2e518b316bd49",
"text": "Principles of muscle coordination in gait have been based largely on analyses of body motion, ground reaction force and EMG measurements. However, data from dynamical simulations provide a cause-effect framework for analyzing these measurements; for example, Part I (Gait Posture, in press) of this two-part review described how force generation in a muscle affects the acceleration and energy flow among the segments. This Part II reviews the mechanical and coordination concepts arising from analyses of simulations of walking. Simple models have elucidated the basic multisegmented ballistic and passive mechanics of walking. Dynamical models driven by net joint moments have provided clues about coordination in healthy and pathological gait. Simulations driven by muscle excitations have highlighted the partial stability afforded by muscles with their viscoelastic-like properties and the predictability of walking performance when minimization of metabolic energy per unit distance is assumed. When combined with neural control models for exciting motoneuronal pools, simulations have shown how the integrative properties of the neuro-musculo-skeletal systems maintain a stable gait. Other analyses of walking simulations have revealed how individual muscles contribute to trunk support and progression. Finally, we discuss how biomechanical models and simulations may enhance our understanding of the mechanics and muscle function of walking in individuals with gait impairments.",
"title": ""
},
{
"docid": "c9c03474e9add95ebb0b89cacdb6c712",
"text": "We study the use of randomized value functions to guide deep exploration in reinforcement learning. This offers an elegant means for synthesizing statistically and computationally efficient exploration with common practical approaches to value function learning. We present several reinforcement learning algorithms that leverage randomized value functions and demonstrate their efficacy through computational studies. We also prove a regret bound that establishes statistical efficiency with a tabular representation.",
"title": ""
},
{
"docid": "59c16bb2ec81dfb0e27ff47ccae0a169",
"text": "A geometric dissection is a set of pieces which can be assembled in different ways to form distinct shapes. Dissections are used as recreational puzzles because it is striking when a single set of pieces can construct highly different forms. Existing techniques for creating dissections find pieces that reconstruct two input shapes exactly. Unfortunately, these methods only support simple, abstract shapes because an excessive number of pieces may be needed to reconstruct more complex, naturalistic shapes. We introduce a dissection design technique that supports such shapes by requiring that the pieces reconstruct the shapes only approximately. We find that, in most cases, a small number of pieces suffices to tightly approximate the input shapes. We frame the search for a viable dissection as a combinatorial optimization problem, where the goal is to search for the best approximation to the input shapes using a given number of pieces. We find a lower bound on the tightness of the approximation for a partial dissection solution, which allows us to prune the search space and makes the problem tractable. We demonstrate our approach on several challenging examples, showing that it can create dissections between shapes of significantly greater complexity than those supported by previous techniques.",
"title": ""
},
{
"docid": "0e893315d6e9257f5a1e6e85291c89ef",
"text": "In unsupervised semantic role labeling, identifying the role of an argument is usually informed by its dependency relation with the predicate. In this work, we propose a neural model to learn argument embeddings from the context by explicitly incorporating dependency relations as multiplicative factors, which bias argument embeddings according to their dependency roles. Our model outperforms existing state-of-the-art embeddings in unsupervised semantic role induction on the CoNLL 2008 dataset and the SimLex999 word similarity task. Qualitative results demonstrate our model can effectively bias argument embeddings based on their dependency role.",
"title": ""
},
{
"docid": "95ca78f61a46f6e34edce6210d5e0939",
"text": "Wireless sensor networks (WSNs) have recently gained a lot of attention by scientific community. Small and inexpensive devices with low energy consumption and limited computing resources are increasingly being adopted in different application scenarios including environmental monitoring, target tracking and biomedical health monitoring. In many such applications, node localization is inherently one of the system parameters. Localization process is necessary to report the origin of events, routing and to answer questions on the network coverage ,assist group querying of sensors. In general, localization schemes are classified into two broad categories: range-based and range-free. However, it is difficult to classify hybrid solutions as range-based or range-free. In this paper we make this classification easy, where range-based schemes and range-free schemes are divided into two types: fully schemes and hybrid schemes. Moreover, we compare the most relevant localization algorithms and discuss the future research directions for wireless sensor networks localization schemes.",
"title": ""
},
{
"docid": "c3e8960170cb72f711263e7503a56684",
"text": "BACKGROUND\nThe deltoid ligament has both superficial and deep layers and consists of up to six ligamentous bands. The prevalence of the individual bands is variable, and no consensus as to which bands are constant or variable exists. Although other studies have looked at the variance in the deltoid anatomy, none have quantified the distance to relevant osseous landmarks.\n\n\nMETHODS\nThe deltoid ligaments from fourteen non-paired, fresh-frozen cadaveric specimens were isolated and the ligamentous bands were identified. The lengths, footprint areas, orientations, and distances from relevant osseous landmarks were measured with a three-dimensional coordinate measurement device.\n\n\nRESULTS\nIn all specimens, the tibionavicular, tibiospring, and deep posterior tibiotalar ligaments were identified. Three additional bands were variable in our specimen cohort: the tibiocalcaneal, superficial posterior tibiotalar, and deep anterior tibiotalar ligaments. The deep posterior tibiotalar ligament was the largest band of the deltoid ligament. The origins from the distal center of the intercollicular groove were 16.1 mm (95% confidence interval, 14.7 to 17.5 mm) for the tibionavicular ligament, 13.1 mm (95% confidence interval, 11.1 to 15.1 mm) for the tibiospring ligament, and 7.6 mm (95% confidence interval, 6.7 to 8.5 mm) for the deep posterior tibiotalar ligament. Relevant to other pertinent osseous landmarks, the tibionavicular ligament inserted at 9.7 mm (95% confidence interval, 8.4 to 11.0 mm) from the tuberosity of the navicular, the tibiospring inserted at 35% (95% confidence interval, 33.4% to 36.6%) of the spring ligament's posteroanterior distance, and the deep posterior tibiotalar ligament inserted at 17.8 mm (95% confidence interval, 16.3 to 19.3 mm) from the posteromedial talar tubercle.\n\n\nCONCLUSIONS\nThe tibionavicular, tibiospring, and deep posterior tibiotalar ligament bands were constant components of the deltoid ligament. 
The deep posterior tibiotalar ligament was the largest band of the deltoid ligament.\n\n\nCLINICAL RELEVANCE\nThe anatomical data regarding the deltoid ligament bands in this study will help to guide anatomical placement of repairs and reconstructions for deltoid ligament injury or instability.",
"title": ""
},
{
"docid": "7251ff8a3ff1adbf13ddd62ab9a9c9c3",
"text": "The performance of a brushless motor which has a surface-mounted magnet rotor and a trapezoidal back-emf waveform when it is operated in BLDC and BLAC modes is evaluated, in both constant torque and flux-weakening regions, assuming the same torque, the same peak current, and the same rms current. It is shown that although the motor has an essentially trapezoidal back-emf waveform, the output power and torque when operated in the BLAC mode in the flux-weakening region are significantly higher than that can be achieved when operated in the BLDC mode due to the influence of the winding inductance and back-emf harmonics",
"title": ""
},
{
"docid": "3f47acf3bd67849be29670a3236294c7",
"text": "The aims of this study were as follows: (a) to examine the possible presence of an identifiable group of stable victims of cyberbullying; (b) to analyze whether the stability of cybervictimization is associated with the perpetration of cyberbullying and bully–victim status (i.e., being only a bully, only a victim, or being both a bully and a victim); and (c) to test whether stable victims report a greater number of psychosocial problems compared to non-stable victims and uninvolved peers. A sample of 680 Spanish adolescents (410 girls) completed self-report measures on cyberbullying perpetration and victimization, depressive symptoms, and problematic alcohol use at two time points that were separated by one year. The results of cluster analyses suggested the existence of four distinct victimization profiles: ‘‘Stable-Victims,’’ who reported victimization at both Time 1 and Time 2 (5.8% of the sample), ‘‘Time 1-Victims,’’ and ‘‘Time 2-Victims,’’ who presented victimization only at one time (14.5% and 17.6%, respectively), and ‘‘Non-Victims,’’ who presented minimal victimization at both times (61.9% of the sample). Stable victims were more likely to fall into the ‘‘bully–victim’’ category and presented more cyberbullying perpetration than the rest of the groups. Overall, the Stable Victims group displayed higher scores of depressive symptoms and problematic alcohol use over time than the other groups, whereas the Non-Victims displayed the lowest of these scores. These findings have major implications for prevention and intervention efforts aimed at reducing cyberbullying and its consequences. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "038064c2998a5da8664be1ba493a0326",
"text": "The bandit problem is revisited and considered under the PAC model. Our main contribution in this part is to show that given n arms, it suffices to pull the arms O( n 2 log 1 δ ) times to find an -optimal arm with probability of at least 1 − δ. This is in contrast to the naive bound of O( n 2 log n δ ). We derive another algorithm whose complexity depends on the specific setting of the rewards, rather than the worst case setting. We also provide a matching lower bound. We show how given an algorithm for the PAC model Multi-Armed Bandit problem, one can derive a batch learning algorithm for Markov Decision Processes. This is done essentially by simulating Value Iteration, and in each iteration invoking the multi-armed bandit algorithm. Using our PAC algorithm for the multi-armed bandit problem we improve the dependence on the number of actions.",
"title": ""
},
{
"docid": "ca9f48691e93b6282df2277f4cf8885e",
"text": "This paper presents a novel technique, anatomy, for publishing sensitive data. Anatomy releases all the quasi-identifier and sensitive values directly in two separate tables. Combined with a grouping mechanism, this approach protects privacy, and captures a large amount of correlation in the microdata. We develop a linear-time algorithm for computing anatomized tables that obey the l-diversity privacy requirement, and minimize the error of reconstructing the microdata. Extensive experiments confirm that our technique allows significantly more effective data analysis than the conventional publication method based on generalization. Specifically, anatomy permits aggregate reasoning with average error below 10%, which is lower than the error obtained from a generalized table by orders of magnitude.",
"title": ""
}
] | scidocsrr |
146547ed597a23462ff5fccb23c76181 | A vision-guided autonomous quadrotor in an air-ground multi-robot system | [
{
"docid": "5cdcb7073bd0f8e1b0affe5ffb4adfc7",
"text": "This paper presents a nonlinear controller for hovering flight and touchdown control for a vertical take-off and landing (VTOL) unmanned aerial vehicle (UAV) using inertial optical flow. The VTOL vehicle is assumed to be a rigid body, equipped with a minimum sensor suite (camera and IMU), manoeuvring over a textured flat target plane. Two different tasks are considered in this paper: the first concerns the stability of hovering flight and the second one concerns regulation of automatic landing using the divergent optical flow as feedback information. Experimental results on a quad-rotor UAV demonstrate the performance of the proposed control strategy.",
"title": ""
},
{
"docid": "cff9a7f38ca6699b235c774232a56f54",
"text": "This paper presents a Miniature Aerial Vehicle (MAV) capable of handsoff autonomous operation within indoor environments. Our prototype is a Quadrotor weighing approximately 600g, with a diameter of 550mm, which carries the necessary electronics for stability control, altitude control, collision avoidance and anti-drift control. This MAV is equipped with three rate gyroscopes, three accelerometers, one ultrasonic sensor, four infrared sensors, a high-speed motor controller and a flight computer. Autonomous flight tests have been carried out in a 7x6-m room.",
"title": ""
}
] | [
{
"docid": "569a7cfcf7dd4cc5132dc7ffa107bfcf",
"text": "We present the results of a study of definite descriptions use in written texts aimed at assessing the feasibility of annotating corpora with information about definite description interpretation. We ran two experiments, in which subjects were asked to classify the uses of definite descriptions in a corpus of 33 newspaper articles, containing a total of 1412 definite descriptions. We measured the agreement among annotators about the classes assigned to definite descriptions, as well as the agreement about the antecedent assigned to those definites that the annotators classified as being related to an antecedent in the text. Themost interesting result of this study from a corpus annotation perspective was the rather low agreement (K=0.63) that we obtained using versions of Hawkins’ and Prince’s classification schemes; better results (K=0.76) were obtained using the simplified scheme proposed by Fraurud that includes only two classes, first-mention and subsequent-mention. The agreement about antecedents was also not complete. These findings raise questions concerning the strategy of evaluating systems for definite description interpretation by comparing their results with a standardized annotation. From a linguistic point of view, the most interesting observations were the great number of discourse-newdefinites in our corpus (in one of our experiments, about 50% of the definites in the collection were classified as discourse-new, 30% as anaphoric, and 18% as associative/bridging) and the presence of definites which did not seem to require a complete disambiguation. This paper will appear in Computational Linguistics.",
"title": ""
},
{
"docid": "89cba76ab33c66a3687481ea56e1e556",
"text": "With the sustained growth of software complexity, finding security vulnerabilities in operating systems has become an important necessity. Nowadays, operating systems are shipped with thousands of binary executables. Unfortunately, methodologies and tools for OS-scale program testing within a limited time budget are still missing.\n In this paper we present an approach that uses lightweight static and dynamic features to predict if a test case is likely to contain a software vulnerability using machine learning techniques. To show the effectiveness of our approach, we set up a large experiment to detect easily exploitable memory corruptions using 1039 Debian programs obtained from its bug tracker, collected 138,308 unique execution traces and statically explored 76,083 different subsequences of function calls. We managed to predict with reasonable accuracy which programs contained dangerous memory corruptions.\n We also developed and implemented VDiscover, a tool that uses state-of-the-art Machine Learning techniques to predict vulnerabilities in test cases. Such a tool will be released as open source to encourage research on vulnerability discovery at a large scale, together with VDiscovery, a public dataset that collects raw analyzed data.",
"title": ""
},
{
"docid": "06f99b18bae3f15e77db8ff2d8c159cc",
"text": "The exact nature of the relationship among species range sizes, speciation, and extinction events is not well understood. The factors that promote larger ranges, such as broad niche widths and high dispersal abilities, could increase the likelihood of encountering new habitats but also prevent local adaptation due to high gene flow. Similarly, low dispersal abilities or narrower niche widths could cause populations to be isolated, but such populations may lack advantageous mutations due to low population sizes. Here we present a large-scale, spatially explicit, individual-based model addressing the relationships between species ranges, speciation, and extinction. We followed the evolutionary dynamics of hundreds of thousands of diploid individuals for 200,000 generations. Individuals adapted to multiple resources and formed ecological species in a multidimensional trait space. These species varied in niche widths, and we observed the coexistence of generalists and specialists on a few resources. Our model shows that species ranges correlate with dispersal abilities but do not change with the strength of fitness trade-offs; however, high dispersal abilities and low resource utilization costs, which favored broad niche widths, have a strong negative effect on speciation rates. An unexpected result of our model is the strong effect of underlying resource distributions on speciation: in highly fragmented landscapes, speciation rates are reduced.",
"title": ""
},
{
"docid": "5637bed8be75d7e79a2c2adb95d4c28e",
"text": "BACKGROUND\nLimited evidence exists to show that adding a third agent to platinum-doublet chemotherapy improves efficacy in the first-line advanced non-small-cell lung cancer (NSCLC) setting. The anti-PD-1 antibody pembrolizumab has shown efficacy as monotherapy in patients with advanced NSCLC and has a non-overlapping toxicity profile with chemotherapy. We assessed whether the addition of pembrolizumab to platinum-doublet chemotherapy improves efficacy in patients with advanced non-squamous NSCLC.\n\n\nMETHODS\nIn this randomised, open-label, phase 2 cohort of a multicohort study (KEYNOTE-021), patients were enrolled at 26 medical centres in the USA and Taiwan. Patients with chemotherapy-naive, stage IIIB or IV, non-squamous NSCLC without targetable EGFR or ALK genetic aberrations were randomly assigned (1:1) in blocks of four stratified by PD-L1 tumour proportion score (<1% vs ≥1%) using an interactive voice-response system to 4 cycles of pembrolizumab 200 mg plus carboplatin area under curve 5 mg/mL per min and pemetrexed 500 mg/m2 every 3 weeks followed by pembrolizumab for 24 months and indefinite pemetrexed maintenance therapy or to 4 cycles of carboplatin and pemetrexed alone followed by indefinite pemetrexed maintenance therapy. The primary endpoint was the proportion of patients who achieved an objective response, defined as the percentage of patients with radiologically confirmed complete or partial response according to Response Evaluation Criteria in Solid Tumors version 1.1 assessed by masked, independent central review, in the intention-to-treat population, defined as all patients who were allocated to study treatment. Significance threshold was p<0·025 (one sided). Safety was assessed in the as-treated population, defined as all patients who received at least one dose of the assigned study treatment. 
This trial, which is closed for enrolment but continuing for follow-up, is registered with ClinicalTrials.gov, number NCT02039674.\n\n\nFINDINGS\nBetween Nov 25, 2014, and Jan 25, 2016, 123 patients were enrolled; 60 were randomly assigned to the pembrolizumab plus chemotherapy group and 63 to the chemotherapy alone group. 33 (55%; 95% CI 42-68) of 60 patients in the pembrolizumab plus chemotherapy group achieved an objective response compared with 18 (29%; 18-41) of 63 patients in the chemotherapy alone group (estimated treatment difference 26% [95% CI 9-42%]; p=0·0016). The incidence of grade 3 or worse treatment-related adverse events was similar between groups (23 [39%] of 59 patients in the pembrolizumab plus chemotherapy group and 16 [26%] of 62 in the chemotherapy alone group). The most common grade 3 or worse treatment-related adverse events in the pembrolizumab plus chemotherapy group were anaemia (seven [12%] of 59) and decreased neutrophil count (three [5%]); an additional six events each occurred in two (3%) for acute kidney injury, decreased lymphocyte count, fatigue, neutropenia, and sepsis, and thrombocytopenia. In the chemotherapy alone group, the most common grade 3 or worse events were anaemia (nine [15%] of 62) and decreased neutrophil count, pancytopenia, and thrombocytopenia (two [3%] each). One (2%) of 59 patients in the pembrolizumab plus chemotherapy group experienced treatment-related death because of sepsis compared with two (3%) of 62 patients in the chemotherapy group: one because of sepsis and one because of pancytopenia.\n\n\nINTERPRETATION\nCombination of pembrolizumab, carboplatin, and pemetrexed could be an effective and tolerable first-line treatment option for patients with advanced non-squamous NSCLC. This finding is being further explored in an ongoing international, randomised, double-blind, phase 3 study.\n\n\nFUNDING\nMerck & Co.",
"title": ""
},
{
"docid": "cb693221e954efcc593b46553d7bea6f",
"text": "The increased accessibility of digitally sourced data, and the advanced technology to analyse it, drives many industries toward digital change. Many global businesses are talking about the potential of big data, and they believe that analysing big data sets can help businesses derive competitive insight and shape organisations’ marketing strategy decisions. The potential impact of digital technology varies widely by industry. Sectors such as financial services, insurance and mobile telecommunications, which offer virtual rather than physical products, are highly susceptible to digital transformation. However, the interaction between digital technology and organisations is complex, and big data presents many barriers to effective digital change. Changes brought by technology challenge both researchers and practitioners. Various global business and digital trends have highlighted the emergent need for collaboration between academia and market practitioners. There are “theories-in-use” which are academically rigorous, but there is still a gap between theory and its implementation in practice. In this paper we identify theoretical dilemmas of the digital revolution and the importance of challenges within practice. Preliminary results show that those industries that tried to narrow the gap and put the necessary mechanisms in place to make use of big data for marketing are ahead of the market. INTRODUCTION Advances in digital technology have made a significant impact on marketing theory and practice. Technology expands the opportunity to capture better-quality customer data and increases the focus on customer relationships, customer insight and Customer Relationship Management (CRM). The availability of big data has made traditional marketing tools work in a more powerful and innovative way. 
In the current digital age of marketing, some predicted effects of digital change have come to pass, but there is still no definite answer as to what works and what doesn’t when implementing such changes in an organisational context. The choice of this specific topic is motivated by the need for a better understanding of the impact of digital technology on the marketing field. This paper discusses the potential positive impact of big data on digital marketing. It also presents evidence of positive views in academia and highlights the gap between academia and practice. The main focus is on understanding the gap and providing recommendations for filling it in. The aim of this paper is to identify theoretical dilemmas of the digital revolution and the importance of challenges within practice. Preliminary results presented here show that those industries that tried to narrow the gap and put the necessary mechanisms in place to make use of big data for marketing are ahead of the market. In our discussion we shall identify these industries and present evaluations of which industry sectors need to understand the impact that big data may have on their practices and businesses. Digital Marketing and Big Data In the early 1990s, when views about digital change first emerged, Parsons et al. (1998) believed that to achieve success in digital marketing, consumer marketers should create a new model with five essential elements for the new media environment. The figure below shows five success factors and the issues that marketers should address around them. Figure 1. Digital marketing framework and levers, Parsons et al. (1998) International Conference on Communication, Media, Technology and Design, 24-26 April 2014, Istanbul, Turkey. Today, in the digital age of marketing, some predicted effects of these changes have come to pass, but there are still no definite answers on what works and what doesn’t in terms of implementing them in an organisational context (S. Dibb, 2012). 
There are different explanations, arguments and views in the literature about the impact of digital technology on marketing strategy. First, it is important to define what is meant by digital marketing and what challenges it brings, and then to understand how it is adopted. Simply put, Digital Marketing (2012) can be defined as “a sub-branch of traditional Marketing using modern digital channels for the placement of products such as downloadable music, and primarily for communicating with stakeholders e.g. customers and investors about brand, products and business progress”. According to Smith (2007), digital marketing refers to “the use of digital technologies to create an integrated, targeted and measurable communication which helps to acquire and retain customers while building deeper relationships with them”. There are a number of accepted theoretical frameworks; however, as Parsons et al. (1998) suggested, the potentialities offered by digital marketing need to be considered carefully by senior managers, who must decide where and how to build them into each organisation. The most recent developments in this area have been triggered by the growing amount of digital data, now known as Big Data. The TechAmerica Foundation (2004) defines Big Data as a “term that describes large volumes of high velocity, complex and variable data that require advanced techniques and technologies to enable the capture, storage, distribution, management and analysis of information”. D. Krajicek (2013) argues that the big challenge of Big Data is the ability to focus on what is meaningful, not on what is possible: with so much information at their fingertips, marketers and their research partners can and often do fall into the “more is better” fallacy. Knowing something and knowing it quickly is not enough. Therefore, for Big Data to be valuable, it needs to be sorted by professionals who have the skills to understand the dynamics of the market and can identify what is relevant and meaningful (G. Day, 2011). 
Data should be used to achieve competitive advantage by creating effective relationships with the target segments. According to K. Kendall (2014), with the right capabilities you can take a whole range of new data sources, such as web browsing, social data and geotracking data, and develop a much more complete profile of your customers; with this information you can then segment better. Successful Big Data initiatives should start with a specific and clearly defined business requirement; leaders of these initiatives then need to assess the technical requirements, identify gaps in their capabilities, and plan the investment to close those gaps (Big Data Analytics, 2014). The impact and current challenges Bileviciene (2012) suggests that well-conducted market research is the basis for successful marketing and that a well-conducted study is the basis of successful market segmentation. Generally, marketing management is broken down into a series of steps, which include market research, segmentation of markets and positioning the company’s offering in such a way as to appeal to the targeted segments (OU Business School, 2007). Market segmentation refers to the process of defining and subdividing a large homogenous market into clearly identifiable segments having similar needs, wants, or demand characteristics. Its objective is to design a marketing mix that precisely matches the expectations of customers in the targeted segment (Business Dictionary, 2013). The goal of segmentation is to break the target market down into different consumer groups. According to Kotler and Armstrong (2011), customers were traditionally classified based on four types of segmentation variables: geographic, demographic, psychographic and behavioural. There are many focuses, beliefs and arguments in the field of market segmentation. 
Many researchers believe that the traditional variables of demographic and geographic segments are outdated and that the theory regarding segmentation has become too narrow (Quinn and Dibb, 2010). According to Lin (2002), these variables should be part of a new, expanded view of market segmentation theory that focuses more on customers’ personalities and values. Dibb and Simkin (2009) argue that the priorities of market segmentation research are exploring the applicability of new segmentation bases across different products and contexts, developing more flexible data analysis techniques, and creating new research designs and data collection approaches; however, practical questions about implementation and integration have received less attention. According to S. Dibb (2012), from an academic perspective segmentation still has a strategic and tactical role, as shown in the figure below. But in practice, as Dibb argues, “some things have not changed”: segmentation’s strategic role still matters; implementation is as much of a pain as always; and even the smartest segments need embedding. Figure 2: role of segmentation, S. Dibb (2012). Dilemmas with the implementation of digital change arise for various reasons. Some academics believed that greater access to data would reduce the need for more traditional segmentation, but research done in the field shows that traditional segmentation works as well as CRM (W. Boulding et al., 2005). Even though the marketing literature offers insights for improving the effectiveness of digital change in the marketing field, there is limited work on how an organisation adapts its customer information processes once the technology is introduced into the organisation. 
J. Peltier et al. (2012) suggest that there is an urgent need for data management studies that capture insights from other disciplines, including organisational behaviour, change management and technology implementation. Reibstein et al. (2009) also highlight the emergent need for collaboration between academia and market practitioners. They point out that there is a “digital skill gap” within the marketing field. The authors argue that there are “theories-in-use” which are academically rigorous, but there is still a gap between theory and its implementation in practice. Changes brought by technology and availability of di",
"title": ""
},
{
"docid": "a5a7e3fe9d6eaf8fc25e7fd91b74219e",
"text": "We present in this paper a new approach that uses supervised machine learning techniques to improve the performances of optimization algorithms in the context of mixed-integer programming (MIP). We focus on the branch-and-bound (B&B) algorithm, which is the traditional algorithm used to solve MIP problems. In B&B, variable branching is the key component that most conditions the efficiency of the optimization. Good branching strategies exist but are computationally expensive and usually hinder the optimization rather than improving it. Our approach consists in imitating the decisions taken by a supposedly good branching strategy, strong branching in our case, with a fast approximation. To this end, we develop a set of features describing the state of the ongoing optimization and show how supervised machine learning can be used to approximate the desired branching strategy. The approximated function is created by a supervised machine learning algorithm from a set of observed branching decisions taken by the target strategy. The experiments performed on randomly generated and standard benchmark (MIPLIB) problems show promising results.",
"title": ""
},
{
"docid": "c5851a9fe60c0127a351668ba5b0f21d",
"text": "We examined salivary C-reactive protein (CRP) levels in the context of tobacco smoke exposure (TSE) in healthy youth. We hypothesized that there would be a dose-response relationship between TSE status and salivary CRP levels. This work is a pilot study (N = 45) for a larger investigation in which we aim to validate salivary CRP against serum CRP, the gold standard measurement of low-grade inflammation. Participants were healthy youth with no self-reported periodontal disease, no objectively measured obesity/adiposity, and no clinical depression, based on the Beck Depression Inventory (BDI-II). We assessed tobacco smoking and confirmed smoking status (non-smoking, passive smoking, and active smoking) with salivary cotinine measurement. We measured salivary CRP by the ELISA method. We controlled for several potential confounders. We found evidence for the existence of a dose-response relationship between the TSE status and salivary CRP levels. Our preliminary findings indicate that salivary CRP seems to have a similar relation to TSE as its widely used serum (systemic inflammatory) biomarker counterpart.",
"title": ""
},
{
"docid": "a4933829bafd2d1e7c3ae3a9ab50c165",
"text": "Head drop is a symptom commonly seen in patients with amyotrophic lateral sclerosis. These patients usually experience neck pain and have difficulty in swallowing and breathing. Static neck braces are used in current treatment. These braces, however, immobilize the head in a single configuration, which causes muscle atrophy. This letter presents the design of a dynamic neck brace for the first time in the literature, which can both measure and potentially assist in the head motion of the human user. This letter introduces the brace design method and validates its capability to perform measurements. The brace is designed based on kinematics data collected from a healthy individual via a motion capture system. A pilot study was conducted to evaluate the wearability of the brace and the accuracy of measurements with the brace. This study recruited ten participants who performed a series of head motions. The results of this human study indicate that the brace is wearable by individuals who vary in size, the brace allows nearly $70\\%$ of the overall range of head rotations, and the sensors on the brace give accurate motion of the head with an error of under $5^{\\circ }$ when compared to a motion capture system. We believe that this neck brace can be a valid and accurate measurement tool for human head motion. This brace will be a big improvement in the available technologies to measure head motion as these are currently done in the clinic using hand-held protractors in two orthogonal planes.",
"title": ""
},
{
"docid": "7ccac1f6b753518495c44a48f4ec324a",
"text": "We propose a method to recover the shape of a 3D room from a full-view indoor panorama. Our algorithm can automatically infer a 3D shape from a collection of partially oriented superpixel facets and line segments. The core part of the algorithm is a constraint graph, which includes lines and superpixels as vertices, and encodes their geometric relations as edges. A novel approach is proposed to perform 3D reconstruction based on the constraint graph by solving all the geometric constraints as constrained linear least-squares. The selected constraints used for reconstruction are identified using an occlusion detection method with a Markov random field. Experiments show that our method can recover room shapes that can not be addressed by previous approaches. Our method is also efficient, that is, the inference time for each panorama is less than 1 minute.",
"title": ""
},
{
"docid": "126b52ab2e2585eabf3345ef7fb39c51",
"text": "We propose a method to build in real-time animated 3D head models using a consumer-grade RGB-D camera. Our framework is the first one to provide simultaneously comprehensive facial motion tracking and a detailed 3D model of the user's head. Anyone's head can be instantly reconstructed and his facial motion captured without requiring any training or pre-scanning. The user starts facing the camera with a neutral expression in the first frame, but is otherwise free to move, talk and change his facial expression at will. The facial motion is tracked using a blendshape representation while the fine geometric details are captured using a Bump image mapped over the template mesh. We propose an efficient algorithm to grow and refine the 3D model of the head on-the-fly and in real-time. We demonstrate robust and high-fidelity simultaneous facial motion tracking and 3D head modeling results on a wide range of subjects with various head poses and facial expressions. Our proposed method offers interesting possibilities for animation production and 3D video telecommunications.",
"title": ""
},
{
"docid": "d912931af094b91634e2c194e5372c1e",
"text": "Threats from social engineering can cause organisations severe damage if they are not considered and managed. In order to understand how to manage those threats, it is important to examine the reasons why organisational employees fall victim to social engineering. In this paper, the objective is to understand security behaviours in practice by investigating factors that may cause an individual to comply with a request posed by a perpetrator. In order to attain this objective, we collect data through a scenario-based survey and conduct phishing experiments in three organisations. The results from the experiment reveal that the degree of target information in an attack increases the likelihood that an organisational employee falls victim to an actual attack. Further, an individual’s trust and risk behaviour significantly affects the actual behaviour during the phishing experiment. Computer experience at work, helpfulness and gender (females tend to be less susceptible to a generic attack than men) have a significant correlation with the behaviour reported by respondents in the scenario-based survey. No correlation between performance in the scenario-based survey and the experiment was found. We argue that this result does not imply that one or the other method should be ruled out, as both have advantages and disadvantages which should be considered in the context of collecting data in the critical domain of information security. Discussions of the findings, implications and recommendations for future research are further provided.",
"title": ""
},
{
"docid": "f69d31b04233f59dd92127cee5321910",
"text": "The subject of this talk is Morse landscapes of natural functionals on infinitedimensional moduli spaces appearing in Riemannian geometry. First, we explain how recursion theory can be used to demonstrate that for many natural functionals on spaces of Riemannian structures, spaces of submanifolds, etc., their Morse landscapes are always more complicated than what follows from purely topological reasons. These Morse landscapes exhibit non-trivial “deep” local minima, cycles in sublevel sets that become nullhomologous only in sublevel sets corresponding to a much higher value of functional, etc. Our second topic is Morse landscapes of the length functional on loop spaces. Here the main conclusion (obtained jointly with Regina Rotman) is that these Morse landscapes can be much more complicated than what follows from topological considerations only if the length functional has “many” “deep” local minima, and the values of the length at these local minima are not “very large”. Mathematics Subject Classification (2000). Primary 53C23, 58E11, 53C20; Secondary 03D80, 68Q30, 53C40, 58E05.",
"title": ""
},
{
"docid": "ab231cbc45541b5bdbd0da82571b44ca",
"text": "ABSTRACT Evidence of Sedona magnetic anomaly and brainwave EEG synchronization can be demonstrated with portable equipment on site in the field, during sudden magnetic events. Previously, we have demonstrated magnetic anomaly charts recorded in both known and unrecognized Sedona vortex activity locations. We have also shown a correlation or amplification of vortex phenomena with Schumann Resonance. Adding the third measurable parameter of brain wave activity, we demonstrate resonance and amplification among them. We suggest tiny magnetic crystals, biogenic magnetite, make human beings highly sensitive to ELF field fluctuations. Biological Magnetite could act as a transducer of both low frequency magnetic fields and RF fields.",
"title": ""
},
{
"docid": "ae8f5c568b2fdbb2dbef39ac277ddb24",
"text": "Knowledge graph construction consists of two tasks: extracting information from external resources (knowledge population) and inferring missing information through a statistical analysis on the extracted information (knowledge completion). In many cases, insufficient external resources in the knowledge population hinder the subsequent statistical inference. The gap between these two processes can be reduced by an incremental population approach. We propose a new probabilistic knowledge graph factorisation method that benefits from the path structure of existing knowledge (e.g. syllogism) and enables a common modelling approach to be used for both incremental population and knowledge completion tasks. More specifically, the probabilistic formulation allows us to develop an incremental population algorithm that trades off exploitation-exploration. Experiments on three benchmark datasets show that the balanced exploitation-exploration helps the incremental population, and the additional path structure helps to predict missing information in knowledge completion.",
"title": ""
},
{
"docid": "f383934a6b4b5971158e001b41f1f2ac",
"text": "A survey of mental health problems of university students was carried out with 1850 participants in the age range 19-26 years. The indigenous Student Problem Checklist (SPCL) developed by Mahmood & Saleem (2011) is a 45-item rating scale designed to determine the prevalence rate of mental health problems among university students. The scale covers four dimensions of mental health problems as reported by university students: Sense of Being Dysfunctional, Loss of Confidence, Lack of Self Regulation and Anxiety Proneness. For interpretation of the overall SPCL score, the authors suggest that scores falling above one SD should be considered indicative of severe problems, whereas scores above 2 SD represent very severe problems. Our findings show that 31% of the participants fall in the “severe” category, whereas 16% fall in the “very severe” category. As far as the individual dimensions are concerned, 17% of respondents fall in the very severe category for Sense of Being Dysfunctional, followed by Loss of Confidence (16%), Lack of Self Regulation (14%) and Anxiety Proneness (12%). These findings are in line with similar studies on the mental health of students. The role of variables such as sample characteristics, the measure used, and cultural and contextual factors in determining rates is discussed, along with the implications for student counseling services in prevention and intervention.",
"title": ""
},
{
"docid": "8439dbba880179895ab98a521b4c254f",
"text": "Given the increase in demand for sustainable livelihoods for coastal villagers in developing countries and for the commercial eucheumoid Kappaphycus alvarezii (Doty) Doty, for the carrageenan industry, there is a trend towards introducing K. alvarezii to more countries in the tropical world for the purpose of cultivation. However, there is also increasing concern over the impact exotic species have on endemic ecosystems and biodiversity. Quarantine and introduction procedures were tested in northern Madagascar and are proposed for all future introductions of commercial eucheumoids (K. alvarezii, K. striatum and Eucheuma denticulatum). In addition, the impact and extent of introduction of K. alvarezii was measured on an isolated lagoon in the southern Lau group of Fiji. It is suggested that, in areas with high human population density, the overwhelming benefits to coastal ecosystems by commercial eucheumoid cultivation far outweigh potential negative impacts. However, quarantine and introduction procedures should be followed. In addition, introduction should only take place if a thorough survey has been conducted and indicates the site is appropriate. Subsequently, the project requires that a well designed and funded cultivation development programme, with a management plan and an assured market, is in place in order to make certain cultivation, and subsequently the introduced algae, will not be abandoned at a later date. KAPPAPHYCUS ALVAREZI",
"title": ""
},
{
"docid": "3eee111e4521528031019f83786efab7",
"text": "Social media platforms such as Twitter and Facebook enable the creation of virtual customer environments (VCEs) where online communities of interest form around specific firms, brands, or products. While these platforms can be used as another means to deliver familiar e-commerce applications, when firms fail to fully engage their customers, they also fail to fully exploit the capabilities of social media platforms. To gain business value, organizations need to incorporate community building as part of the implementation of social media.",
"title": ""
},
{
"docid": "6573b7d885685615d99f2ef21a7fce99",
"text": "Keyword search on graph structured data has attracted a lot of attention in recent years. Graphs are a natural “lowest common denominator” representation which can combine relational, XML and HTML data. Responses to keyword queries are usually modeled as trees that connect nodes matching the keywords. In this paper we address the problem of keyword search on graphs that may be significantly larger than memory. We propose a graph representation technique that combines a condensed version of the graph (the “supernode graph”) which is always memory resident, along with whatever parts of the detailed graph are in a cache, to form a multi-granular graph representation. We propose two alternative approaches which extend existing search algorithms to exploit multigranular graphs; both approaches attempt to minimize IO by directing search towards areas of the graph that are likely to give good results. We compare our algorithms with a virtual memory approach on several real data sets. Our experimental results show significant benefits in terms of reduction in IO due to our algorithms.",
"title": ""
},
{
"docid": "a636f977eb29b870cefe040f3089de44",
"text": "We consider the network implications of virtual reality (VR) and augmented reality (AR). While there are intrinsic challenges for AR/VR applications to deliver on their promise, their impact on the underlying infrastructure will be undeniable. We look at augmented and virtual reality and consider a few use cases where they could be deployed. These use cases define a set of requirements for the underlying network. We take a brief look at potential network architectures. We then make the case for Information-centric networks as a potential architecture to assist the deployment of AR/VR and draw a list of challenges and future research directions for next generation networks to better support AR/VR.",
"title": ""
},
{
"docid": "af5a8f2811ff334d742f802c6c1b7833",
"text": "Kalman filter extensions are commonly used algorithms for nonlinear state estimation in time series. The structure of the state and measurement models in the estimation problem can be exploited to reduce the computational demand of the algorithms. We review algorithms that use different forms of structure and show how they can be combined. We show also that the exploitation of the structure of the problem can lead to improved accuracy of the estimates while reducing the computational load.",
"title": ""
}
] | scidocsrr |
75e3d1b1d0e92ecb6aadbb2c86d0b0c8 | A Muddle of Models of Motivation for Using Peer-to-Peer Economy Systems | [
{
"docid": "f1c00253a57236ead67b013e7ce94a5e",
"text": "A meta-analysis of 128 studies examined the effects of extrinsic rewards on intrinsic motivation. As predicted, engagement-contingent, completion-contingent, and performance-contingent rewards significantly undermined free-choice intrinsic motivation (d = -0.40, -0.36, and -0.28, respectively), as did all rewards, all tangible rewards, and all expected rewards. Engagement-contingent and completion-contingent rewards also significantly undermined self-reported interest (d = -0.15, and -0.17), as did all tangible rewards and all expected rewards. Positive feedback enhanced both free-choice behavior (d = 0.33) and self-reported interest (d = 0.31). Tangible rewards tended to be more detrimental for children than college students, and verbal rewards tended to be less enhancing for children than college students. The authors review 4 previous meta-analyses of this literature and detail how this study's methods, analyses, and results differed from the previous ones.",
"title": ""
}
] | [
{
"docid": "efed0cde53938617f0b083d8db03fbab",
"text": "To investigate whether a persuasive social impact game may serve as a way to increase affective learning and attitude towards the homeless, this study examined the effects of persuasive mechanics in a video game designed to put the player in the shoes of an almost-homeless person. Data were collected from 5139 students in 200 middle/high school classes across four states. Classes were assigned to treatment groups based on matching. Two treatment conditions and a control group were employed in the study. All three groups' affective learning and attitude scores decreased from the immediate posttest, but the game group was significantly different from the control group in a positive direction. Students who played the persuasive social impact game sustained a significantly higher score on the Affective Learning Scale (ALS) and the Attitude Towards Homelessness Inventory (ATHI) after three weeks. Overall, findings suggest that when students play a video game that is designed using persuasive mechanics, an affective and attitude change can be measured empirically.",
"title": ""
},
{
"docid": "89dc55f20b4cfcb63d55b8b9ead8611b",
"text": "2018 How Does Batch Normalization Help Optimization? S. Santurkar*, D. Tsipras*, A. Ilyas*, & A. Mądry NIPS 2018 (Oral presentation) 2018 Adversarially Robust Generalization Requires More Data L. Schmidt, S. Santurkar, D. Tsipras, K. Talwar, & A. Mądry NIPS 2018 (Spotlight presentation) 2018 A Classification–Based Study of Covariate Shift in GAN Distributions S. Santurkar, L. Schmidt, & A. Mądry ICML 2018 2018 Generative Compression S. Santurkar, D. Budden, & N. Shavit PCS 2018 2017 Deep Tensor Convolution on Multicores D. Budden, A. Matveev, S. Santurkar, S. R. Chaudhuri, & N. Shavit ICML 2017",
"title": ""
},
{
"docid": "a37d77b5d4e3636d63396ae3fa1d0ef7",
"text": "The goal in automatic programming is to get a computer to perform a task by telling it what needs to be done, rather than by explicitly programming it. This paper considers the task of automatically generating a computer program to enable an autonomous mobile robot to perform the task of following the wall of an irregular shaped room. A human programmer has written such a program in the style of the subsumption architecture. The solution produced by genetic programming emerges as a result of Darwinian natural selection and genetic crossover (sexual recombination) in a population of computer programs. This evolutionary process is driven by a fitness measure which communicates the nature of the task to the computer.",
"title": ""
},
{
"docid": "9d45c1deaf429be2a5c33cd44b04290e",
"text": "In this paper, a new omni-directional driving system with one spherical wheel is proposed. This system overcomes the structural limitations of existing driving systems in vertical, horizontal, and diagonal movement. The driving system is composed of two stepping motors, a spherical wheel covered by a ball bearing, a weight balancer for the elimination of eccentricity, and ball plungers for balance. All parts of the structure are located at the same distance from the center because the center of gravity of the system must be placed at its center. A custom ball bearing was designed for stable rotation and smooth direction change of the spherical wheel; its principle is the reversal of the ball mouse. Steel was used as the ball material in the bearing to prevent slip with the ground. One of the stepping motors is used for driving the spherical wheel, which is stable because of the support of the ball bearing; the other enables movement in a desired direction while rotating about the central axis. The ATmega128 chip is used to control the two stepping motors. To verify the proposed system, driving experiments were executed in a variety of environments. Finally, the performance and validity of the omni-directional driving system were confirmed.",
"title": ""
},
{
"docid": "c72e8982a13f43d8e3debda561f3cf41",
"text": "This paper presents AOP++, a generic aspect-oriented programming framework in C++. It successfully incorporates AOP with object-oriented programming as well as generic programming naturally in the framework of standard C++. It innovatively makes use of C++ templates to express pointcut expressions and match join points at compile time. It innovatively creates a full-fledged aspect weaver by using template metaprogramming techniques to perform aspect weaving. It is notable that AOP++ itself is written completely in standard C++, and requires no language extensions. With the help of AOP++, C++ programmers can facilitate AOP with only a little effort.",
"title": ""
},
{
"docid": "ff9e0e5c2bb42955d3d29db7809414a1",
"text": "We present a novel methodology for the automated detection of breast lesions from dynamic contrast-enhanced magnetic resonance volumes (DCE-MRI). Our method, based on deep reinforcement learning, significantly reduces the inference time for lesion detection compared to an exhaustive search, while retaining state-of-the-art accuracy. This speed-up is achieved via an attention mechanism that progressively focuses the search for a lesion (or lesions) on the appropriate region(s) of the input volume. The attention mechanism is implemented by training an artificial agent to learn a search policy, which is then exploited during inference. Specifically, we extend the deep Q-network approach, previously demonstrated on simpler problems such as anatomical landmark detection, in order to detect lesions that have a significant variation in shape, appearance, location and size. We demonstrate our results on a dataset containing 117 DCE-MRI volumes, validating run-time and accuracy of lesion detection.",
"title": ""
},
{
"docid": "c28b1ce1bcd5e56eb807bed4e9c167af",
"text": "In recent years, new molecules have appeared in the illicit market, claimed to contain \"non-illegal\" compounds, although exhibiting important psychoactive effects; this heterogeneous and rapidly evolving class of compounds is commonly known as \"New Psychoactive Substances\" or, less properly, \"Smart Drugs\" and is easily distributed through e-commerce or in the so-called \"Smart Shops\". They include, among others, synthetic cannabinoids, cathinones and tryptamine analogs of psilocin. Whereas cases of intoxication and death have been reported, the phenomenon appears to be largely underestimated and is a matter of concern for Public Health. One of the major points of concern depends on the substantial ineffectiveness of the current methods of toxicological screening of biological samples to identify the new compounds entering the market. These limitations emphasize an urgent need to increase the screening capabilities of the toxicology laboratories, and to develop rapid, versatile yet specific assays able to identify new molecules. The most recent advances in mass spectrometry technology, introducing instruments capable of detecting hundreds of compounds at nanomolar concentrations, are expected to give a fundamental contribution to broaden the diagnostic spectrum of the toxicological screening to include not only all these continuously changing molecules but also their metabolites. In the present paper a critical overview of the opportunities, strengths and limitations of some of the newest analytical approaches is provided, with a particular attention to liquid phase separation techniques coupled to high accuracy, high resolution mass spectrometry.",
"title": ""
},
{
"docid": "550e84d58db67e1d89ac437654f4ccb6",
"text": "Skin detection from images, typically used as a preprocessing step, has a wide range of applications such as dermatology diagnostics and human-computer interaction designs. It is a challenging problem due to many factors such as variation in pigment melanin, uneven illumination, and differences in ethnicity geographics. Besides, age and gender introduce additional difficulties to the detection process. It is hard to determine whether a single pixel is skin or nonskin without considering the context. An efficient traditional hand-engineered skin color detection algorithm requires extensive work by domain experts. Recently, deep learning algorithms, especially convolutional neural networks (CNNs), have achieved great success in pixel-wise labeling tasks. However, CNN-based architectures are not sufficient for modeling the relationship between pixels and their neighbors. In this letter, we integrate recurrent neural network (RNN) layers into fully convolutional neural networks (FCNs), and develop an end-to-end network for human skin detection. In particular, FCN layers capture generic local features, while RNN layers model the semantic contextual dependencies in images. Experimental results on the COMPAQ and ECU skin datasets validate the effectiveness of the proposed approach, where RNN layers enhance the discriminative power of skin detection in complex background situations.",
"title": ""
},
{
"docid": "05509f6b8411ea809db856f8c69b3fe1",
"text": "To explain social learning without invoking the cognitively complex concept of imitation, many learning mechanisms have been proposed. Borrowing an idea used routinely in cognitive psychology, we argue that most of these alternatives can be subsumed under a single process, priming, in which input increases the activation of stored internal representations. Imitation itself has generally been seen as a \"special faculty.\" This has diverted much research towards the all-or-none question of whether an animal can imitate, with disappointingly inconclusive results. In the great apes, however, voluntary, learned behaviour is organized hierarchically. This means that imitation can occur at various levels, of which we single out two clearly distinct ones: the \"action level,\" a rather detailed and linear specification of sequential acts, and the \"program level,\" a broader description of subroutine structure and the hierarchical layout of a behavioural \"program.\" Program level imitation is a high-level, constructive mechanism, adapted for the efficient learning of complex skills and thus not evident in the simple manipulations used to test for imitation in the laboratory. As examples, we describe the food-preparation techniques of wild mountain gorillas and the imitative behaviour of orangutans undergoing \"rehabilitation\" to the wild. Representing and manipulating relations between objects seems to be one basic building block in their hierarchical programs. There is evidence that great apes suffer from a stricter capacity limit than humans in the hierarchical depth of planning. We re-interpret some chimpanzee behaviour previously described as \"emulation\" and suggest that all great apes may be able to imitate at the program level. Action level imitation is seldom observed in great ape skill learning, and may have a largely social role, even in humans.",
"title": ""
},
{
"docid": "63b210cc5e1214c51b642e9a4a2a1fb0",
"text": "This paper proposes a simplified method to compute the systolic and diastolic blood pressures from measured oscillometric blood-pressure waveforms. Therefore, the oscillometric waveform is analyzed in the frequency domain, which reveals that the measured blood-pressure signals are heavily disturbed by nonlinear contributions. The proposed approach will linearize the measured oscillometric waveform in order to obtain a more accurate and transparent estimation of the systolic and diastolic pressure based on a robust preprocessing technique. This new approach will be compared with the Korotkoff method and a commercially available noninvasive blood-pressure meter. This allows verification of whether the linearized approach contains as much information as the Korotkoff method in order to calculate a correct systolic and diastolic blood pressure.",
"title": ""
},
{
"docid": "e72382020e2b15be32047da611ad078f",
"text": "This article describes the results of a case study that applies Neural Network-based Optical Character Recognition (OCR) to scanned images of books printed between 1487 and 1870 by training the OCR engine OCRopus (Breuel et al. 2013) on the RIDGES herbal text corpus (Odebrecht et al. 2017, in press). Training specific OCR models was possible because the necessary ground truth is available as error-corrected diplomatic transcriptions. The OCR results have been evaluated for accuracy against the ground truth of unseen test sets. Character and word accuracies (percentage of correctly recognized items) for the resulting machine-readable texts of individual documents range from 94% to more than 99% (character level) and from 76% to 97% (word level). This includes the earliest printed books, which were thought to be inaccessible by OCR methods until recently. Furthermore, OCR models trained on one part of the corpus consisting of books with different printing dates and different typesets (mixed models) have been tested for their predictive power on the books from the other part containing yet other fonts, mostly yielding character accuracies well above 90%. It therefore seems possible to construct generalized models trained on a range of fonts that can be applied to a wide variety of historical printings still giving good results. A moderate postcorrection effort of some pages will then enable the training of individual models with even better accuracies. Using this method, diachronic corpora including early printings can be constructed much faster and cheaper than by manual transcription. The OCR methods reported here open up the possibility of transforming our printed textual cultural heritage into electronic text by largely automatic means, which is a prerequisite for the mass conversion of scanned books.",
"title": ""
},
{
"docid": "38b93f50d4fc5a1029ebedb5a544987a",
"text": "We present a novel graph-based framework for timeline summarization, the task of creating different summaries for different timestamps but for the same topic. Our work extends timeline summarization to a multimodal setting and creates timelines that are both textual and visual. Our approach exploits the fact that news documents are often accompanied by pictures and the two share some common content. Our model optimizes local summary creation and global timeline generation jointly following an iterative approach based on mutual reinforcement and co-ranking. In our algorithm, individual summaries are generated by taking into account the mutual dependencies between sentences and images, and are iteratively refined by considering how they contribute to the global timeline and its coherence. Experiments on real-world datasets show that the timelines produced by our model outperform several competitive baselines both in terms of ROUGE and when assessed by human evaluators.",
"title": ""
},
{
"docid": "ba50550de9920eb3c40da0550663dd32",
"text": "Bile acids are important signaling molecules that regulate cholesterol, glucose, and energy homoeostasis and have thus been implicated in the development of metabolic disorders. Their bioavailability is strongly modulated by the gut microbiota, which contributes to generation of complex individual-specific bile acid profiles. Hence, it is important to have accurate methods at hand for precise measurement of these important metabolites. Here, a rapid and sensitive liquid chromatography-tandem mass spectrometry (LC-MS/MS) method for simultaneous identification and quantitation of primary and secondary bile acids as well as their taurine and glycine conjugates was developed and validated. Applicability of the method was demonstrated for mammalian tissues, biofluids, and cell culture media. The analytical approach mainly consists of a simple and rapid liquid-liquid extraction procedure in presence of deuterium-labeled internal standards. Baseline separation of all isobaric bile acid species was achieved and a linear correlation over a broad concentration range was observed. The method showed acceptable accuracy and precision on intra-day (1.42-11.07 %) and inter-day (2.11-12.71 %) analyses and achieved good recovery rates for representative analytes (83.7-107.1 %). As a proof of concept, the analytical method was applied to mouse tissues and biofluids, but especially to samples from in vitro fermentations with gut bacteria of the family Coriobacteriaceae. The developed method revealed that the species Eggerthella lenta and Collinsella aerofaciens possess bile salt hydrolase activity, and for the first time that the species Enterorhabdus mucosicola is able to deconjugate and dehydrogenate primary bile acids in vitro.",
"title": ""
},
{
"docid": "565f815ef0c1dd5107f053ad39dade20",
"text": "Intensity inhomogeneity often occurs in real-world images, which presents a considerable challenge in image segmentation. The most widely used image segmentation algorithms are region-based and typically rely on the homogeneity of the image intensities in the regions of interest, which often fail to provide accurate segmentation results due to the intensity inhomogeneity. This paper proposes a novel region-based method for image segmentation, which is able to deal with intensity inhomogeneities in the segmentation. First, based on the model of images with intensity inhomogeneities, we derive a local intensity clustering property of the image intensities, and define a local clustering criterion function for the image intensities in a neighborhood of each point. This local clustering criterion function is then integrated with respect to the neighborhood center to give a global criterion of image segmentation. In a level set formulation, this criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, by minimizing this energy, our method is able to simultaneously segment the image and estimate the bias field, and the estimated bias field can be used for intensity inhomogeneity correction (or bias correction). Our method has been validated on synthetic images and real images of various modalities, with desirable performance in the presence of intensity inhomogeneities. Experiments show that our method is more robust to initialization, faster and more accurate than the well-known piecewise smooth model. As an application, our method has been used for segmentation and bias correction of magnetic resonance (MR) images with promising results.",
"title": ""
},
{
"docid": "96e10f0858818ce150dba83882557aee",
"text": "Embedding and visualizing large-scale high-dimensional data in a two-dimensional space is an important problem since such visualization can reveal deep insights out of complex data. Most of the existing embedding approaches, however, run on an excessively high precision, ignoring the fact that at the end, embedding outputs are converted into coarse-grained discrete pixel coordinates in a screen space. Motivated by such an observation and directly considering pixel coordinates in an embedding optimization process, we accelerate Barnes-Hut tree-based t-distributed stochastic neighbor embedding (BH-SNE), known as a state-of-the-art 2D embedding method, and propose a novel method called PixelSNE, a highly-efficient, screen resolution-driven 2D embedding method with a linear computational complexity in terms of the number of data items. Our experimental results show the significantly fast running time of PixelSNE by a large margin against BH-SNE, while maintaining the minimal degradation in the embedding quality. Finally, the source code of our method is publicly available at https://github.com/awesome-davian/sasne.",
"title": ""
},
{
"docid": "9e84d41477de0aaf6224ccf89e77fa4c",
"text": "A switching control strategy to extend the zero-voltage-switching (ZVS) operating range of a Dual Active Bridge (DAB) AC/DC converter to the entire input-voltage interval and the full power range is proposed. The converter topology consists of a DAB DC/DC converter, receiving a rectified AC line voltage via a synchronous rectifier. The DAB comprises a primary side half bridge and secondary side full bridge, linked by a high-frequency isolation transformer and inductor. Using conventional control strategies, the soft-switching boundary conditions are exceeded at the higher voltage conversion ratios of the AC input interval. A novel pulse-width-modulation strategy to fully eliminate these boundaries and its analysis are presented in this paper, allowing increased performance (in terms of efficiency and stresses). Additionally, by using a half bridge / full bridge configuration, the number of active components is reduced. A prototype converter was constructed and experimental results are given to validate the theoretical analyses and practical feasibility of the proposed strategy.",
"title": ""
},
{
"docid": "756d1fbb1729767429d1e445626b2351",
"text": "Sir, An unusual abnormal fat distribution of the lower part of the body is characterized by massive and symmetric deposits in the groins, trochanters, buttocks, and hips, which contrast sharply with the normal upper part of the body. The massive lipomatosis of the lower part of the body can be classified into three types: type 1, the familial symmetrical lipomatosis that affects the groins, trochanters, hips, buttocks, and thighs; type 2, the bilateral peritrochanteric familial lipomatosis; and type 3, the unilateral peritrochanteric lipomatosis. This deformity affects only women aged between 18 and 50 in the Mediterranean region [1]. Further, isolated abnormal bilateral peritrochanteric lipomatosis has rarely been reported in literature. We report two patients, a mother and her daughter, with isolated bilateral peritrochanteric lipomatosis, who had normal fat distribution of the upper half of the body which was in contrast with the abnormal lower half. The mother, a 42-year-old patient, presented with bilateral abnormal fat distribution of the lower part of the body. Peritrochanteric fat deposits had appeared at the age of 13 and increased with time. The physical examination revealed bilateral isolated, well-demarcated peritrochanteric lipomatosis and normal fat distribution of the upper half of the body (Fig. 1a). The patient was 167 cm tall and weighed 72 kg (body mass index [BMI]=25.8 kg/m²). Laboratory and endocrinologic tests, including the serum concentrations of lipoprotein, lipoprotein lipase activity, cholesterol, triglycerides, uric acid, fasting glucose, serum estradiol and testosterone levels, and thyroid function parameters, were within normal limits. Histological study of lipoaspirate showed subcutaneous fatty tissue. The daughter, a 22-year-old patient, also presented with bilateral abnormal fat distribution of the lower part of the body. The patient's signs had appeared at the age of 12, also increasing with time. The physical examination revealed bilateral isolated, well-demarcated peritrochanteric lipomatosis, although it was more evident on the left side (Fig. 2a). The patient was 169 cm tall and weighed 67 kg (BMI=23.5 kg/m²). Laboratory and endocrinological tests were within normal limits. Histological study of lipoaspirate showed subcutaneous fatty tissue. Both patients underwent general anesthesia and all procedures were initiated with infusion of tumescent solution (1 L normal saline solution, 30 mg lidocaine, and 1 mL of 1:1,000 epinephrine) [2]. A suction-assisted liposuction method was employed using 4- and 6-mm cannulae. Suction started deep into the superficial fascia and ended with superficial liposuction [3]. Incisions were closed with 6-0 polypropylene and dressings were applied. A second limited liposuction was planned to treat the irregularities in the first case. Results were satisfactory in both cases (Figs. 1b and 2b). Isolated abnormal bilateral peritrochanteric lipomatosis has rarely been reported in literature. In 2006, Goshtasby et al. presented a case of isolated bilateral peritrochanteric lipomatosis of the soft tissue overlying the trochanters [4]. The unusual distribution of fat in the lower body should be differentiated from the familial multiple nodular symmetrical lipomatosis, where the lipomas are nodular, circumscribed, subcutaneous in location, and more common on the extremities and trunk rather than around the neck, shoulder, or the upper torso [5]. Stavropoulos and his colleagues have suggested that the term symmetric lipomatosis referred to two separate disorders, benign multiple symmetric lipomatosis and female",
"title": ""
},
{
"docid": "cc2579bb621338908cacc7808cb1f851",
"text": "This paper presents a comprehensive analysis and comparison of air-cored axial-flux permanent-magnet machines with different types of coil configurations. Although coil factor is particularly more sensitive to coil-band width and coil pitch in air-cored machines than conventional slotted machines, remarkably no comprehensive analytical equations exist. Here, new formulas are derived to compare the coil factor of two common concentrated-coil stator winding types. Then, respective coil factors for the winding types are used to determine the torque characteristics and, from that, the optimized coil configurations. Three-dimensional finite-element analysis (FEA) models are built to verify the analytical models. Furthermore, overlapping and wave windings are investigated and compared with the concentrated-coil types. Finally, a prototype machine is designed and built for experimental validations. The results show that the concentrated-coil type with constant coil pitch is superior to all other coil types under study.",
"title": ""
},
{
"docid": "ffd200984bf3a8e80a5ff55dc4ad10f6",
"text": "We propose a high-capacity polymer-based optical and electrical LSI package integrated with multimode Si photonic transmitters and receivers. We describe the fabrication and characteristics of the polymer-based hybrid LSI package substrate with a polymer optical waveguide, a mirror, and optical card edge connectors. We fabricated optical mirrors with several angles ranging from 40° to 45° for the Si photonic grating coupler by using a dicing blade at an angle. The dicing mirror changed the emission angle for the grating coupler. We also realized a large lateral misalignment tolerance (±11.5 μm) between the polymer waveguide and MMF for 1 dB of excess loss at 24 channels. We obtained 1-dB coupling loss using an optical card edge connector at 1.3 μm because of the large tolerance. We realized 25-Gb/s error-free transmission per channel at 1.3 μm. We also describe here the error penalty and jitter due to modal noise generated by coupling mismatch.",
"title": ""
}
] | scidocsrr |
2935b6b8d7aefe2b8dee8cc094619e7a | Belief & Evidence in Empirical Software Engineering | [
{
"docid": "dc66c80a5031c203c41c7b2908c941a3",
"text": "There has been a great deal of interest in defect prediction: using prediction models trained on historical data to help focus quality-control resources in ongoing development. Since most new projects don't have historical data, there is interest in cross-project prediction: using data from one project to predict defects in another. Sadly, results in this area have largely been disheartening. Most experiments in cross-project defect prediction report poor performance, using the standard measures of precision, recall and F-score. We argue that these IR-based measures, while broadly applicable, are not as well suited for the quality-control settings in which defect prediction models are used. Specifically, these measures are taken at specific threshold settings (typically thresholds of the predicted probability of defectiveness returned by a logistic regression model). However, in practice, software quality control processes choose from a range of time-and-cost vs quality tradeoffs: how many files shall we test? how many shall we inspect? Thus, we argue that measures based on a variety of tradeoffs, viz., 5%, 10% or 20% of files tested/inspected would be more suitable. We study cross-project defect prediction from this perspective. We find that cross-project prediction performance is no worse than within-project performance, and substantially better than random prediction!",
"title": ""
},
{
"docid": "0834473b45a9b009da458a8d5009cfa0",
"text": "Popular open-source software projects receive and review contributions from a diverse array of developers, many of whom have little to no prior involvement with the project. A recent survey reported that reviewers consider conformance to the project's code style to be one of the top priorities when evaluating code contributions on Github. We propose to quantitatively evaluate the existence and effects of this phenomenon. To this aim we use language models, which were shown to accurately capture stylistic aspects of code. We find that rejected changesets do contain code significantly less similar to the project than accepted ones; furthermore, the less similar changesets are more likely to be subject to thorough review. Armed with these results we further investigate whether new contributors learn to conform to the project style and find that experience is positively correlated with conformance to the project's code style.",
"title": ""
}
] | [
{
"docid": "4a05d2c333463da04cf03b2f387cb8b8",
"text": "The increasing utilization of business process models both in business analysis and information systems development raises several issues regarding quality measures. In this context, this paper discusses understandability as a particular quality aspect and its connection with personal, model, and content related factors. We use an online survey to explore the ability of the model reader to draw correct conclusions from a set of process models. For the first group of the participants we used models with abstract activity labels (e.g. A, B, C) while the second group received the same models with illustrative labels such as “check credit limit”. The results suggest that all three categories indeed have an impact on the understandability.",
"title": ""
},
{
"docid": "99f1bbd3eeda4aee35a96d684de81511",
"text": "Perimeter protection aims at identifying intrusions across the temporary base established by army in critical regions. Convex-hull algorithm is used to determine the boundary nodes among a set of nodes in the network. To study the effectiveness of such algorithm, we opted three variations, such as distributed approach, centralized, and mobile approach, suitable for wireless sensor networks for boundary detection. The convex-hull approaches are simulated with different node density, and the performance is measured in terms of energy consumption, boundary detection time, and accuracy. Results from the simulations highlight that the convex-hull approach is effective under densely deployed nodes in an environment. The different approaches of convex-hull algorithm are found to be suitable under different sensor network application scenarios.",
"title": ""
},
{
"docid": "4ea5dd9377b2ed6dba15ee05060f1c53",
"text": "The mechanism of death in patients struggling against restraints remains a topic of debate. This article presents a series of five patients with restraint-associated cardiac arrest and profound metabolic acidosis. The lowest recorded pH was 6.25; this patient and three others died despite aggressive resuscitation. The survivor's pH was 6.46; this patient subsequently made a good recovery. Struggling against restraints may produce a lactic acidosis. Stimulant drugs such as cocaine may promote further metabolic acidosis and impair normal behavioral regulatory responses. Restrictive positioning of combative patients may impede appropriate respiratory compensation for this acidemia. Public safety personnel and emergency providers must be aware of the life threat to combative patients and be careful with restraint techniques. Further investigation of sedative agents and buffering therapy for this select patient group is suggested.",
"title": ""
},
{
"docid": "36286c36dfd7451ecd297e2ebe445a35",
"text": "Research on the \"dark side\" of organizational behavior has determined that employee sabotage is most often a reaction by disgruntled employees to perceived mistreatment. To date, however, most studies on employee retaliation have focused on intra-organizational sources of (in)justice. Results from this field study of customer service representatives (N = 358) showed that interpersonal injustice from customers relates positively to customer-directed sabotage over and above intra-organizational sources of fairness. Moreover, the association between unjust treatment and sabotage was moderated by 2 dimensions of moral identity (symbolization and internalization) in the form of a 3-way interaction. The relationship between injustice and sabotage was more pronounced for employees high (vs. low) in symbolization, but this moderation effect was weaker among employees who were high (vs. low) in internalization. Last, employee sabotage was negatively related to job performance ratings.",
"title": ""
},
{
"docid": "588b20ca8f7fc3a41002b281b67f75c4",
"text": "Retargeting is an innovative online marketing technique in the modern age. Although this advertising form offers great opportunities of bringing back customers who have left an online store without a complete purchase, retargeting is risky because the necessary data collection leads to strong privacy concerns which in turn, trigger consumer reactance and decreasing trust. Digital nudges – small design modifications in digital choice environments which guide peoples’ behaviour – present a promising concept to bypass these negative consequences of retargeting. In order to prove the positive effects of digital nudges, we aim to conduct an online experiment with a subsequent survey by testing the impacts of social nudges and information nudges in retargeting banners. Our expected contribution to theory includes an extension of existing research of nudging in context of retargeting by investigating the effects of different nudges in retargeting banners on consumers’ behaviour. In addition, we aim to provide practical contributions by the provision of design guidelines for practitioners to build more trustworthy IT artefacts and enhance retargeting strategy of marketing practitioners.",
"title": ""
},
{
"docid": "c7d54d4932792f9f1f4e08361716050f",
"text": "In this paper, we address several puzzles concerning speech acts,particularly indirect speech acts. We show how a formal semantictheory of discourse interpretation can be used to define speech actsand to avoid murky issues concerning the metaphysics of action. Weprovide a formally precise definition of indirect speech acts, includingthe subclass of so-called conventionalized indirect speech acts. Thisanalysis draws heavily on parallels between phenomena at the speechact level and the lexical level. First, we argue that, just as co-predicationshows that some words can behave linguistically as if they're `simultaneously'of incompatible semantic types, certain speech acts behave this way too.Secondly, as Horn and Bayer (1984) and others have suggested, both thelexicon and speech acts are subject to a principle of blocking or ``preemptionby synonymy'': Conventionalized indirect speech acts can block their`paraphrases' from being interpreted as indirect speech acts, even ifthis interpretation is calculable from Gricean-style principles. Weprovide a formal model of this blocking, and compare it withexisting accounts of lexical blocking.",
"title": ""
},
{
"docid": "80d8a8c09e9918981d1a93e5bccf45ba",
"text": "In this paper, we study a multi-residential electricity load scheduling problem with multi-class appliances in smart grid. Compared with the previous works in which only limited types of appliances are considered or only single residence grids are considered, we model the grid system more practically with jointly considering multi-residence and multi-class appliance. We formulate an optimization problem to maximize the sum of the overall satisfaction levels of residences which is defined as the sum of utilities of the residential customers minus the total cost for energy consumption. Then, we provide an electricity load scheduling algorithm by using a PL-Generalized Benders Algorithm which operates in a distributed manner while protecting the private information of the residences. By applying the algorithm, we can obtain the near-optimal load scheduling for each residence, which is shown to be very close to the optimal scheduling, and also obtain the lower and upper bounds on the optimal sum of the overall satisfaction levels of all residences, which are shown to be very tight.",
"title": ""
},
{
"docid": "3c6a72b7af179dba12558475d0c1ab1a",
"text": "Current GUI builders provide a design environment for user interfaces that target either a single type or fixed set of devices, and provide little support for scenarios in which the user interface, or parts of it, are distributed over multiple devices. Distributed user interfaces have received increasing attention over the past years. There are different, often model-based, approaches that focus on technical issues. This paper presents XDStudio--a new GUI builder designed to support interactive development of cross-device web interfaces. XDStudio implements two complementary authoring modes with a focus on the design process of distributed user interfaces. First, simulated authoring allows designing for a multi-device environment on a single device by simulating other target devices. Second, on-device authoring allows the design process itself to be distributed over multiple devices, as design and development take place on the target devices themselves. To support interactive development for multi-device environments, where not all devices may be present at design and run-time, XDStudio supports switching between the two authoring modes, as well as between design and use modes, as required. This paper focuses on the design of XDStudio, and evaluates its support for two distribution scenarios.",
"title": ""
},
{
"docid": "801f78236dcd75d0ea577e1f26744e13",
"text": "We present a study on the importance of psycho-acoustic transformations for effective audio feature calculation. From the results, both crucial and problematic parts of the algorithm for Rhythm Patterns feature extraction are identified. We furthermore introduce two new feature representations in this context: Statistical Spectrum Descriptors and Rhythm Histogram features. Evaluation on both the individual and combined feature sets is accomplished through a music genre classification task, involving 3 reference audio collections. Results are compared to published measures on the same data sets. Experiments confirmed that in all settings the inclusion of psycho-acoustic transformations provides significant improvement of classification accuracy.",
"title": ""
},
{
"docid": "38b1a88b57d2834129a59ac235d6b414",
"text": "Historically, social scientists have sought out explanations of human and social phenomena that provide interpretable causal mechanisms, while often ignoring their predictive accuracy. We argue that the increasingly computational nature of social science is beginning to reverse this traditional bias against prediction; however, it has also highlighted three important issues that require resolution. First, current practices for evaluating predictions must be better standardized. Second, theoretical limits to predictive accuracy in complex social systems must be better characterized, thereby setting expectations for what can be predicted or explained. Third, predictive accuracy and interpretability must be recognized as complements, not substitutes, when evaluating explanations. Resolving these three issues will lead to better, more replicable, and more useful social science.",
"title": ""
},
{
"docid": "b898d7a2da7a10ef756317bc7f44f37c",
"text": "Cellulosomes are multienzyme complexes that are produced by anaerobic cellulolytic bacteria for the degradation of lignocellulosic biomass. They comprise a complex of scaffoldin, which is the structural subunit, and various enzymatic subunits. The intersubunit interactions in these multienzyme complexes are mediated by cohesin and dockerin modules. Cellulosome-producing bacteria have been isolated from a large variety of environments, which reflects their prevalence and the importance of this microbial enzymatic strategy. In a given species, cellulosomes exhibit intrinsic heterogeneity, and between species there is a broad diversity in the composition and configuration of cellulosomes. With the development of modern technologies, such as genomics and proteomics, the full protein content of cellulosomes and their expression levels can now be assessed and the regulatory mechanisms identified. Owing to their highly efficient organization and hydrolytic activity, cellulosomes hold immense potential for application in the degradation of biomass and are the focus of much effort to engineer an ideal microorganism for the conversion of lignocellulose to valuable products, such as biofuels.",
"title": ""
},
{
"docid": "27700d9b7ee0cc84b0f82c1c51c67c23",
"text": "In automated driving systems (ADS) and advanced driver-assistance systems (ADAS), an efficient road segmentation module is required to present the drivable region and to build an occupancy grid for path planning components. The existing road algorithms build gigantic convolutional neural networks (CNNs) that are computationally expensive and time consuming. In this paper, we explore the usage of recurrent neural network (RNN) in image processing and propose an efficient network layer named spatial sequence. This layer is then applied to our new road segmentation network RoadNet-v2, which combines convolutional layers and spatial sequence layers. In the end, the network is trained and tested in KITTI road benchmark and Cityscapes dataset. We claim the proposed network achieves comparable accuracy to the existing road segmentation algorithms but much faster processing speed, 10 ms per frame.",
"title": ""
},
{
"docid": "64e37bb3cada08bd2b56b5fa806c4d07",
"text": "Background: Statistical mechanics results (Dauphin et al. (2014); Choromanska et al. (2015)) suggest that local minima with high error are exponentially rare in high dimensions. However, to prove low error guarantees for Multilayer Neural Networks (MNNs), previous works so far required either a heavily modified MNN model or training method, strong assumptions on the labels (e.g., “near” linear separability), or an unrealistically wide hidden layer with Ω (N) units. Results: We examine a MNN with one hidden layer of piecewise linear units, a single output, and a quadratic loss. We prove that, with high probability in the limit of N → ∞ datapoints, the volume of differentiable regions of the empiric loss containing sub-optimal differentiable local minima is exponentially vanishing in comparison with the same volume of global minima, given standard normal input of dimension d0 = Ω̃ (√ N ) , and a more realistic number of d1 = Ω̃ (N/d0) hidden units. We demonstrate our results numerically: for example, 0% binary classification training error on CIFAR with only N/d0 ≈ 16 hidden neurons.",
"title": ""
},
{
"docid": "b6a0fcd9ee49b3dbfccdfa88fd0f07a0",
"text": "Generating images from natural language is one of the primary applications of recent conditional generative models. Besides testing our ability to model conditional, highly dimensional distributions, text to image synthesis has many exciting and practical applications such as photo editing or computer-aided content creation. Recent progress has been made using Generative Adversarial Networks (GANs). This material starts with a gentle introduction to these topics and discusses the existent state of the art models. Moreover, I propose Wasserstein GAN-CLS, a new model for conditional image generation based on the Wasserstein distance which offers guarantees of stability. Then, I show how the novel loss function of Wasserstein GAN-CLS can be used in a Conditional Progressive Growing GAN. In combination with the proposed loss, the model boosts by 7.07% the best Inception Score (on the Caltech birds dataset) of the models which use only the sentence-level visual semantics. The only model which performs better than the Conditional Wasserstein Progressive growing GAN is the recently proposed AttnGAN which uses word-level visual semantics as well.",
"title": ""
},
{
"docid": "c2aa986c09f81c6ab54b0ac117d03afb",
"text": "Many companies have developed strategies that include investing heavily in information technology (IT) in order to enhance their performance. Yet, this investment pays off for some companies but not others. This study proposes that organization learning plays a significant role in determining the outcomes of IT. Drawing from resource theory and IT literature, the authors develop the concept of IT competency. Using structural equations modeling with data collected from managers in 271 manufacturing firms, they show that organizational learning plays a significant role in mediating the effects of IT competency on firm performance. Copyright 2003 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "ce636f568fc8c07b5a44190ae171c043",
"text": "Students, researchers and professional analysts lack effective tools to make personal and collective sense of problems while working in distributed teams. Central to this work is the process of sharing—and contesting—interpretations via different forms of argument. How does the “Web 2.0” paradigm challenge us to deliver useful, usable tools for online argumentation? This paper reviews the current state of the art in Web Argumentation, describes key features of the Web 2.0 orientation, and identifies some of the tensions that must be negotiated in bringing these worlds together. It then describes how these design principles are interpreted in Cohere, a web tool for social bookmarking, idea-linking, and argument visualization.",
"title": ""
},
{
"docid": "d29cca7c16b0e5b43c85e1a8701d735f",
"text": "The sparse matrix solver by LU factorization is a serious bottleneck in Simulation Program with Integrated Circuit Emphasis (SPICE)-based circuit simulators. The state-of-the-art Graphics Processing Units (GPU) have numerous cores sharing the same memory, provide attractive memory bandwidth and compute capability, and support massive thread-level parallelism, so GPUs can potentially accelerate the sparse solver in circuit simulators. In this paper, an efficient GPU-based sparse solver for circuit problems is proposed. We develop a hybrid parallel LU factorization approach combining task-level and data-level parallelism on GPUs. Work partitioning, number of active thread groups, and memory access patterns are optimized based on the GPU architecture. Experiments show that the proposed LU factorization approach on NVIDIA GTX580 attains an average speedup of 7.02× (geometric mean) compared with sequential PARDISO, and 1.55× compared with 16-threaded PARDISO. We also investigate bottlenecks of the proposed approach by a parametric performance model. The performance of the sparse LU factorization on GPUs is constrained by the global memory bandwidth, so the performance can be further improved by future GPUs with larger memory bandwidth.",
"title": ""
},
{
"docid": "60eeb0468dff5a3eeb9c9d133a81759f",
"text": "To evaluate cone and cone-driven retinal function in patients with Smith-Lemli-Opitz syndrome (SLOS), a condition characterized by low cholesterol. Rod and rod-driven function in patients with SLOS are known to be abnormal. Electroretinographic (ERG) responses to full-field stimuli presented on a steady, rod suppressing background were recorded in 13 patients who had received long-term cholesterol supplementation. Cone photoresponse sensitivity (S CONE) and saturated amplitude (R CONE) parameters were estimated using a model of the activation of phototransduction, and post-receptor b-wave and 30 Hz flicker responses were analyzed. The responses of the patients were compared to those of control subjects (N = 13). Although average values of both S CONE and R CONE were lower than in controls, the differences were not statistically significant. Post-receptor b-wave amplitude and implicit time and flicker responses were normal. The normal cone function contrasts with the significant abnormalities in rod function that were found previously in these same patients. Possibly, cholesterol supplementation has a greater protective effect on cones than on rods as has been demonstrated in the rat model of SLOS.",
"title": ""
},
{
"docid": "93885ca422d34d34c271585ed4ee4a7e",
"text": "Ambient assisted living (AAL) technologies can help the elderly maintain their independence while keeping them safer. Sensors monitor their activities to detect situations in which they might need help. Most research in this area has targeted indoor environments, but outdoor activities are just as important; many risky situations might occur outdoors. SafeNeighborhood (SN) is an AAL system that combines data from multiple sources with collective intelligence to tune sensor data. It merges mobile, ambient, and AI technologies with old-fashioned neighborhood ties to create safe outdoor spaces. The initial results indicate SN’s potential use and point toward new opportunities for care of the elderly.",
"title": ""
},
{
"docid": "4da065092faed2284dc5fe073832fb96",
"text": "An approach to the problem of autonomous mobile robot obstacle avoidance using reinforcement learning neural network is proposed in this paper. Q-learning is one kind of reinforcement learning method that is similar to dynamic programming and the neural network has a powerful ability to store the values. We integrate these two methods with the aim to ensure autonomous robot behavior in complicated unpredictable environment. The simulation results show that the simulated robot using the reinforcement learning neural network can enhance its learning ability obviously and can finish the given task in a complex environment.",
"title": ""
}
] | scidocsrr |
662497218440e16157a3f40ceeddf58a | Answering Science Exam Questions Using Query Rewriting with Background Knowledge | [
{
"docid": "e27d560bd974985dec1df3791fdf2f13",
"text": "Modeling natural language inference is a very challenging task. With the availability of large annotated data, it has recently become feasible to train complex models such as neural-network-based inference models, which have shown to achieve the state-of-the-art performance. Although there exist relatively large annotated data, can machines learn all knowledge needed to perform natural language inference (NLI) from these data? If not, how can neural-network-based NLI models benefit from external knowledge and how to build NLI models to leverage it? In this paper, we enrich the state-of-the-art neural natural language inference models with external knowledge. We demonstrate that the proposed models improve neural NLI models to achieve the state-of-the-art performance on the SNLI and MultiNLI datasets.",
"title": ""
},
{
"docid": "540099388527a2e8dd5b43162b697fea",
"text": "This paper describes NCRF++, a toolkit for neural sequence labeling. NCRF++ is designed for quick implementation of different neural sequence labeling models with a CRF inference layer. It provides users with an inference for building the custom model structure through configuration file with flexible neural feature design and utilization. Built on PyTorch1, the core operations are calculated in batch, making the toolkit efficient with the acceleration of GPU. It also includes the implementations of most state-of-the-art neural sequence labeling models such as LSTMCRF, facilitating reproducing and refinement on those methods.",
"title": ""
},
{
"docid": "b4ab51818d868b2f9796540c71a7bd17",
"text": "We propose a simple neural architecture for natural language inference. Our approach uses attention to decompose the problem into subproblems that can be solved separately, thus making it trivially parallelizable. On the Stanford Natural Language Inference (SNLI) dataset, we obtain state-of-the-art results with almost an order of magnitude fewer parameters than previous work and without relying on any word-order information. Adding intra-sentence attention that takes a minimum amount of order into account yields further improvements.",
"title": ""
},
{
"docid": "fe3a3ffab9a98cf8f4f71c666383780c",
"text": "We present a new dataset and model for textual entailment, derived from treating multiple-choice question-answering as an entailment problem. SCITAIL is the first entailment set that is created solely from natural sentences that already exist independently “in the wild” rather than sentences authored specifically for the entailment task. Different from existing entailment datasets, we create hypotheses from science questions and the corresponding answer candidates, and premises from relevant web sentences retrieved from a large corpus. These sentences are often linguistically challenging. This, combined with the high lexical similarity of premise and hypothesis for both entailed and non-entailed pairs, makes this new entailment task particularly difficult. The resulting challenge is evidenced by state-of-the-art textual entailment systems achieving mediocre performance on SCITAIL, especially in comparison to a simple majority class baseline. As a step forward, we demonstrate that one can improve accuracy on SCITAIL by 5% using a new neural model that exploits linguistic structure.",
"title": ""
},
{
"docid": "fa6f272026605bddf1b18c8f8234dba6",
"text": "tion can machines think? by replacing it with another, namely can a machine pass the imitation game (the Turing test). In the years since, this test has been criticized as being a poor replacement for the original enquiry (for example, Hayes and Ford [1995]), which raises the question: what would a better replacement be? In this article, we argue that standardized tests are an effective and practical assessment of many aspects of machine intelligence, and should be part of any comprehensive measure of AI progress. While a crisp definition of machine intelligence remains elusive, we can enumerate some general properties we might expect of an intelligent machine. The list is potentially long (for example, Legg and Hutter [2007]), but should at least include the ability to (1) answer a wide variety of questions, (2) answer complex questions, (3) demonstrate commonsense and world knowledge, and (4) acquire new knowledge scalably. In addition, a suitable test should be clearly measurable, graduated (have a variety of levels of difficulty), not gameable, ambitious but realistic, and motivating. There are many other requirements we might add (for example, capabilities in robotics, vision, dialog), and thus any comprehensive measure of AI is likely to require a battery of different tests. However, standardized tests meet a surprising number of requirements, including the four listed, and thus should be a key component of a future battery of tests. As we will show, the tests require answering a wide variety of questions, including those requiring commonsense and world knowledge. In addition, they meet all the practical requirements, a huge advantage for any component of a future test of AI. Articles",
"title": ""
},
{
"docid": "6d9393c95ca9c6534c98c0d0a4451fbc",
"text": "The recent work of Clark et al. (2018) introduces the AI2 Reasoning Challenge (ARC) and the associated ARC dataset that partitions open domain, complex science questions into an Easy Set and a Challenge Set. That paper includes an analysis of 100 questions with respect to the types of knowledge and reasoning required to answer them; however, it does not include clear definitions of these types, nor does it offer information about the quality of the labels. We propose a comprehensive set of definitions of knowledge and reasoning types necessary for answering the questions in the ARC dataset. Using ten annotators and a sophisticated annotation interface, we analyze the distribution of labels across the Challenge Set and statistics related to them. Additionally, we demonstrate that although naive information retrieval methods return sentences that are irrelevant to answering the query, sufficient supporting text is often present in the (ARC) corpus. Evaluating with human-selected relevant sentences improves the performance of a neural machine comprehension model by 42 points.",
"title": ""
}
] | [
{
"docid": "a4e1a0f5e56685a294a2c9088809a4fb",
"text": "As multicore systems continue to gain ground in the High Performance Computing world, linear algebra algorithms have to be reformulated or new algorithms have to be developed in order to take advantage of the architectural features on these new processors. Fine grain parallelism becomes a major requirement and introduces the necessity of loose synchronization in the parallel execution of an operation. This paper presents an algorithm for the Cholesky, LU and QR factorization where the operations can be represented as a sequence of small tasks that operate on square blocks of data. These tasks can be dynamically scheduled for execution based on the dependencies among them and on the availability of computational resources. This may result in an out of order execution of the tasks which will completely hide the presence of intrinsically sequential tasks in the factorization. Performance comparisons are presented with the LAPACK algorithms where parallelism can only be exploited at the level of the BLAS operations and vendor implementations.",
"title": ""
},
{
"docid": "38a74fff83d3784c892230255943ee23",
"text": "Several researchers, present authors included, envision personal mobile robot agents that can assist humans in their daily tasks. Despite many advances in robotics, such mobile robot agents still face many limitations in their perception, cognition, and action capabilities. In this work, we propose a symbiotic interaction between robot agents and humans to overcome the robot limitations while allowing robots to also help humans. We introduce a visitor’s companion robot agent, as a natural task for such symbiotic interaction. The visitor lacks knowledge of the environment but can easily open a door or read a door label, while the mobile robot with no arms cannot open a door and may be confused about its exact location, but can plan paths well through the building and can provide useful relevant information to the visitor. We present this visitor companion task in detail with an enumeration and formalization of the actions of the robot agent in its interaction with the human. We briefly describe the wifi-based robot localization algorithm and show results of the different levels of human help to the robot during its navigation. We then test the value of robot help to the visitor during the task to understand the relationship tradeoffs. Our work has been fully implemented in a mobile robot agent, CoBot, which has successfully navigated for several hours and continues to navigate in our indoor environment.",
"title": ""
},
{
"docid": "d1444f26cee6036f1c2df67a23d753be",
"text": "Text mining has becoming an emerging research area in now-a-days that helps to extract useful information from large amount of natural language text documents. The need of grouping similar documents together for different applications has gaining the attention of researchers in this area. Document clustering organizes the documents into different groups called as clusters. The documents in one cluster have higher degree of similarity than the documents in other cluster. The paper provides an overview of the document clustering reviewed from different papers and the challenges in document clustering. KeywordsText Mining, Document Clustering, Similarity Measures, Challenges in Document Clustering",
"title": ""
},
{
"docid": "26f957036ead7173f93ec16a57097a50",
"text": "The purpose of this paper is to present a direct digital manufacturing (DDM) process that is an order of magnitude faster than other DDM processes currently available. The developed process is based on a mask-image-projection-based Stereolithography process (MIP-SL), during which a Digital Micromirror Device (DMD) controlled projection light cures and cross-links liquid photopolymer resin. In order to achieve high-speed fabrication, we investigated the bottom-up projection system in the MIP-SL process. A set of techniques including film coating and the combination of two-way linear motions have been developed for the quick spreading of liquid resin into uniform thin layers. The process parameters and related settings to achieve the fabrication speed of a few seconds per layer are presented. Additionally, the hardware, software, and material setups developed for fabricating given three-dimensional (3D) digital models are presented. Experimental studies using the developed testbed have been performed to verify the effectiveness and efficiency of the presented fast MIP-SL process. The test results illustrate that the newly developed process can build a moderately sized part within minutes instead of hours that are typically required.",
"title": ""
},
{
"docid": "3b2c18828ef155233ede7f51d80f656a",
"text": "It is crucial for cancer diagnosis and treatment to accurately identify the site of origin of a tumor. With the emergence and rapid advancement of DNA microarray technologies, constructing gene expression profiles for different cancer types has already become a promising means for cancer classification. In addition to research on binary classification such as normal versus tumor samples, which attracts numerous efforts from a variety of disciplines, the discrimination of multiple tumor types is also important. Meanwhile, the selection of genes which are relevant to a certain cancer type not only improves the performance of the classifiers, but also provides molecular insights for treatment and drug development. Here, we use semisupervised ellipsoid ARTMAP (ssEAM) for multiclass cancer discrimination and particle swarm optimization for informative gene selection. ssEAM is a neural network architecture rooted in adaptive resonance theory and suitable for classification tasks. ssEAM features fast, stable, and finite learning and creates hyperellipsoidal clusters, inducing complex nonlinear decision boundaries. PSO is an evolutionary algorithm-based technique for global optimization. A discrete binary version of PSO is employed to indicate whether genes are chosen or not. The effectiveness of ssEAM/PSO for multiclass cancer diagnosis is demonstrated by testing it on three publicly available multiple-class cancer data sets. ssEAM/PSO achieves competitive performance on all these data sets, with results comparable to or better than those obtained by other classifiers",
"title": ""
},
{
"docid": "b52bad9f04c8a922b7012603be56c819",
"text": "In this paper, we investigate the possibility that a Near Field Communication (NFC) enabled mobile phone, with an embedded secure element (SE), could be used as a mobile token cloning and skimming platform. We show how an attacker could use an NFC mobile phone as such an attack platform by exploiting the existing security controls of the embedded SE and the available contactless APIs. To illustrate the feasibility of these actions, we also show how to practically skim and emulate certain tokens typically used in payment and access control applications with a NFC mobile phone. We also discuss how to capture and analyse legitimate transaction information from contactless systems. Although such attacks can also be implemented on other contactless platforms, such as custom-built card emulators and modified readers, the NFC enabled mobile phone has a legitimate form factor, which would be accepted by merchants and arouse less suspicion in public. Finally, we propose several security countermeasures for NFC phones that could prevent such misuse.",
"title": ""
},
{
"docid": "d98b97dae367d57baae6b0211c781d66",
"text": "In this paper we describe a technology for protecting privacy in video systems. The paper presents a review of privacy in video surveillance and describes how a computer vision approach to understanding the video can be used to represent “just enough” of the information contained in a video stream to allow video-based tasks (including both surveillance and other “person aware” applications) to be accomplished, while hiding superfluous details, particularly identity, that can contain privacyintrusive information. The technology has been implemented in the form of a privacy console that manages operator access to different versions of the video-derived data according to access control lists. We have also built PrivacyCam—a smart camera that produces a video stream with the privacy-intrusive information already removed.",
"title": ""
},
{
"docid": "6e8d30f3eaaf6c88dddb203c7b703a92",
"text": "searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggesstions for reducing this burden, to Washington Headquarters Services, Directorate for Information Operations and Reports, 1215 Jefferson Davis Highway, Suite 1204, Arlington VA, 22202-4302. Respondents should be aware that notwithstanding any other provision of law, no person shall be subject to any oenalty for failing to comply with a collection of information if it does not display a currently valid OMB control number. PLEASE DO NOT RETURN YOUR FORM TO THE ABOVE ADDRESS.",
"title": ""
},
{
"docid": "11707c7f7c5b028392b25d1dffa9daeb",
"text": "High reliability and large rangeability are required of pumps in existing and new plants which must be capable of reliable on-off cycling operations and especially low-load duties. The reliability and rangeability target is a new task for the pump designer/researcher and is made very challenging by cavitation and/or suction recirculation effects, above all pump damage. Present knowledge about a) critical design parameters and their optimization, and b) diagnosis and troubleshooting of field problems has advanced considerably in recent years. The objective of the pump manufacturer is to develop design solutions and troubleshooting approaches which improve impeller life as related to cavitation erosion and enlarge the reliable operating range by minimizing the effects of suction recirculation. This paper gives a short description of several field cases characterized by different damage patterns and other symptoms related to cavitation and/or suction recirculation. The troubleshooting methodology is described in detail, also focusing on the role of both the pump designer and the pump user.",
"title": ""
},
{
"docid": "9852e00f24fd8f626a018df99bea5f1f",
"text": "Business Process Reengineering is a discipline in which extensive research has been carried out and numerous methodologies churned out. But what seems to be lacking is a structured approach. In this paper we provide a review of BPR and present 'best of breed' methodologies from contemporary literature and introduce a consolidated, systematic approach to the redesign of a business enterprise. The methodology includes five activities: Prepare for reengineering, Map and Analyze As-Is process, Design To-be process, Implement reengineered process and Improve continuously.",
"title": ""
},
{
"docid": "d2d134363fc993d68194e770c338b301",
"text": "The demand for coal has been on the rise in modern society. With the number of opencast coal mines decreasing, it has become increasingly difficult to find coal. Low efficiencies and high casualty rates have always been problems in the process of coal exploration due to complicated geological structures in coal mining areas. Therefore, we propose a new exploration technology for coal that uses satellite images to explore and monitor opencast coal mining areas. First, we collected bituminous coal and lignite from the Shenhua opencast coal mine in China in addition to non-coal objects, including sandstones, soils, shales, marls, vegetation, coal gangues, water, and buildings. Second, we measured the spectral data of these objects through a spectrometer. Third, we proposed a multilayer extreme learning machine algorithm and constructed a coal classification model based on that algorithm and the spectral data. The model can assist in the classification of bituminous coal, lignite, and non-coal objects. Fourth, we collected Landsat 8 satellite images for the coal mining areas. We divided the image of the coal mine using the constructed model and correctly described the distributions of bituminous coal and lignite. Compared with the traditional coal exploration method, our method manifested an unparalleled advantage and application value in terms of its economy, speed, and accuracy.",
"title": ""
},
{
"docid": "6ee2d94f0ccebbb05df2ea4b79b30976",
"text": "Received: 25 June 2013 Revised: 11 October 2013 Accepted: 25 November 2013 Abstract This paper distinguishes and contrasts two design science research strategies in information systems. In the first strategy, a researcher constructs or builds an IT meta-artefact as a general solution concept to address a class of problem. In the second strategy, a researcher attempts to solve a client’s specific problem by building a concrete IT artefact in that specific context and distils from that experience prescriptive knowledge to be packaged into a general solution concept to address a class of problem. The two strategies are contrasted along 16 dimensions representing the context, outcomes, process and resource requirements. European Journal of Information Systems (2015) 24(1), 107–115. doi:10.1057/ejis.2013.35; published online 7 January 2014",
"title": ""
},
{
"docid": "819693b9acce3dfbb74694733ab4d10f",
"text": "The present research examined how mode of play in an educational mathematics video game impacts learning, performance, and motivation. The game was designed for the practice and automation of arithmetic skills to increase fluency and was adapted to allow for individual, competitive, or collaborative game play. Participants (N = 58) from urban middle schools were randomly assigned to each experimental condition. Results suggested that, in comparison to individual play, competition increased in-game learning, whereas collaboration decreased performance during the experimental play session. Although out-of-game math fluency improved overall, it did not vary by condition. Furthermore, competition and collaboration elicited greater situational interest and enjoyment and invoked a stronger mastery goal orientation. Additionally, collaboration resulted in stronger intentions to play the game again and to recommend it to others. Results are discussed in terms of the potential for mathematics learning games and technology to increase student learning and motivation and to demonstrate how different modes of engagement can inform the instructional design of such games.",
"title": ""
},
{
"docid": "f5e4bf1536d2ef7065b77be4e0c37ddc",
"text": "This research addresses management control in the front end of innovation projects. We conceptualize and analyze PMOs more broadly than just as a specialized project-focused organizational unit. Building on theories of management control, organization design, and innovation front end literature, we assess the role of PMO as an integrative arrangement. The empirical material is derived from four companies. The results show a variety of management control mechanisms that can be considered as integrative organizational arrangements. Such organizational arrangements can be considered as an alternative to a non-existent PMO, or to complement a (non-existent) PMO's tasks. The paper also contrasts prior literature by emphasizing the desirability of a highly organic or embedded matrix structure in the organization. Finally, we propose that the development path of the management approach proceeds by first emphasizing diagnostic and boundary systems (with mechanistic management approaches) followed by intensive use of interactive and belief systems (with value-based management approaches). The major contribution of this paper is in the organizational and managerial mechanisms of a firm that is managing multiple innovation projects. This research also expands upon the existing PMO research to include a broader management control approach for managing projects in companies. © 2011 Elsevier Ltd. and IPMA. All rights reserved.",
"title": ""
},
{
"docid": "eccd1b3b8acbf8426d7ccb7933e0bd0e",
"text": "We consider an architecture for a serverless distributed file system that does not assume mutual trust among the client computers. The system provides security, availability, and reliability by distributing multiple encrypted replicas of each file among the client machines. To assess the feasibility of deploying this system on an existing desktop infrastructure, we measure and analyze a large set of client machines in a commercial environment. In particular, we measure and report results on disk usage and content; file activity; and machine uptimes, lifetimes, and loads. We conclude that the measured desktop infrastructure would passably support our proposed system, providing availability on the order of one unfilled file request per user per thousand days.",
"title": ""
},
{
"docid": "ecb2cb8de437648c7895fc3f93809bfb",
"text": "Context: Static analysis approaches have been proposed to assess the security of Android apps, by searching for known vulnerabilities or actual malicious code. The literature thus has proposed a large body of works, each of which attempts to tackle one or more of the several challenges that program analyzers face when dealing with Android apps. Objective: We aim to provide a clear view of the state-of-the-art works that statically analyze Android apps, from which we highlight the trends of static analysis approaches, pinpoint where the focus has been put and enumerate the key aspects where future research is still needed. Method: We have performed a systematic literature review which involves studying around 90 research papers published in software engineering, programming languages and security venues. This review is performed mainly in five dimensions: problems targeted by the approach, fundamental techniques used by authors, static analysis sensitivities considered, Android characteristics taken into account and the scale of evaluation performed. Results: Our in-depth examination has led to several key findings: 1) Static analysis is largely performed to uncover security and privacy issues; 2) The Soot framework and the Jimple intermediate representation are the most adopted basic support tool and format, respectively; 3) Taint analysis remains the most applied technique in research approaches; 4) Most approaches support several analysis sensitivities, but very few approaches consider path-sensitivity; 5) There is no single work that has been proposed to tackle all challenges of static analysis that are related to Android programming; and 6) Only a small portion of state-of-the-art works have made their artifacts publicly available. Conclusion: The research community is still facing a number of challenges in building approaches that are simultaneously aware of implicit flows, dynamic code loading features, reflective calls, native code and multi-threading, in order to implement sound and highly precise static analyzers.",
"title": ""
},
{
"docid": "e9d5ba66ddcc3a38020f532414ebeef7",
"text": "Current theories of aspect acknowledge the pervasiveness of verbs of variable telicity, and are designed to account both for why these verbs show such variability and for the complex conditions that give rise to telic and atelic interpretations. Previous work has identified several sets of such verbs, including incremental theme verbs, such as eat and destroy; degree achievements, such as cool and widen; and (a)telic directed motion verbs, such as ascend and descend (see e.g., Dowty 1979; Declerck 1979; Dowty 1991; Krifka 1989, 1992; Tenny 1994; Bertinetto and Squartini 1995; Levin and Rappaport Hovav 1995; Jackendoff 1996; Ramchand 1997; Filip 1999; Hay, Kennedy, and Levin 1999; Rothstein 2003; Borer 2005). As the diversity in descriptive labels suggests, most previous work has taken these classes to embody distinct phenomena and to have distinct lexical semantic analyses. We believe that it is possible to provide a unified analysis in which the behavior of all of these verbs stems from a single shared element of their meanings: a function that measures the degree to which an object changes relative to some scalar dimension over the course of an event. We claim that such ‘measures of change’ are based on the more general kinds of measure functions that are lexicalized in many languages by gradable adjectives, and that map an object to a scalar value that represents the degree to which it manifests some gradable property at a time (see Bartsch and Vennemann 1972,",
"title": ""
},
{
"docid": "1258939378850f7d89f6fa860be27c39",
"text": "Sparse methods and the use of Winograd convolutions are two orthogonal approaches, each of which significantly accelerates convolution computations in modern CNNs. Sparse Winograd merges these two and thus has the potential to offer a combined performance benefit. Nevertheless, training convolution layers so that the resulting Winograd kernels are sparse has not hitherto been very successful. By introducing a Winograd layer in place of a standard convolution layer, we can learn and prune Winograd coefficients “natively” and obtain sparsity level beyond 90% with only 0.1% accuracy loss with AlexNet on ImageNet dataset. Furthermore, we present a sparse Winograd convolution algorithm and implementation that exploits the sparsity, achieving up to 31.7 effective TFLOP/s in 32-bit precision on a latest Intel Xeon CPU, which corresponds to a 5.4× speedup over a state-of-the-art dense convolution implementation.",
"title": ""
},
{
"docid": "ffa25551d331651d80f8d91f59a441c0",
"text": "Since vulnerabilities in Linux kernel are on the increase, attackers have turned their interests into related exploitation techniques. However, compared with numerous researches on exploiting use-after-free vulnerabilities in the user applications, few efforts studied how to exploit use-after-free vulnerabilities in Linux kernel due to the difficulties that mainly come from the uncertainty of the kernel memory layout. Without specific information leakage, attackers could only conduct a blind memory overwriting strategy trying to corrupt the critical part of the kernel, for which the success rate is negligible.\n In this work, we present a novel memory collision strategy to exploit the use-after-free vulnerabilities in Linux kernel reliably. The insight of our exploit strategy is that a probabilistic memory collision can be constructed according to the widely deployed kernel memory reuse mechanisms, which significantly increases the success rate of the attack. Based on this insight, we present two practical memory collision attacks: An object-based attack that leverages the memory recycling mechanism of the kernel allocator to achieve freed vulnerable object covering, and a physmap-based attack that takes advantage of the overlap between the physmap and the SLAB caches to achieve a more flexible memory manipulation. Our proposed attacks are universal for various Linux kernels of different architectures and could successfully exploit systems with use-after-free vulnerabilities in kernel. Particularly, we achieve privilege escalation on various popular Android devices (kernel version>=4.3) including those with 64-bit processors by exploiting the CVE-2015-3636 use-after-free vulnerability in Linux kernel. To our knowledge, this is the first generic kernel exploit for the latest version of Android. Finally, to defend this kind of memory collision, we propose two corresponding mitigation schemes.",
"title": ""
},
{
"docid": "01984e20b6fa46888fc82dccc621ab73",
"text": "Organizations spend a significant amount of resources securing their servers and network perimeters. However, these mechanisms are not sufficient for protecting databases. In this paper, we present a new technique for identifying malicious database transactions. Compared to many existing approaches which profile SQL query structures and database user activities to detect intrusions, the novelty of this approach is the automatic discovery and use of essential data dependencies, namely, multi-dimensional and multi-level data dependencies, for identifying anomalous database transactions. Since essential data dependencies reflect semantic relationships among data items and are less likely to change than SQL query structures or database user behaviors, they are ideal for profiling data correlations for identifying malicious database activities.",
"title": ""
}
] | scidocsrr |
2379575cd8f94486a085e9a1bf85a0a4 | Multi- and Cross-Modal Semantics Beyond Vision: Grounding in Auditory Perception | [
{
"docid": "6d15f9766e35b2c78ce5402ed44cdf57",
"text": "Models that acquire semantic representations from both linguistic and perceptual input are of interest to researchers in NLP because of the obvious parallels with human language learning. Performance advantages of the multi-modal approach over language-only models have been clearly established when models are required to learn concrete noun concepts. However, such concepts are comparatively rare in everyday language. In this work, we present a new means of extending the scope of multi-modal models to more commonly-occurring abstract lexical concepts via an approach that learns multimodal embeddings. Our architecture outperforms previous approaches in combining input from distinct modalities, and propagates perceptual information on concrete concepts to abstract concepts more effectively than alternatives. We discuss the implications of our results both for optimizing the performance of multi-modal models and for theories of abstract conceptual representation.",
"title": ""
}
] | [
{
"docid": "b57377a695ce7c5114d61bbe4f29e7a1",
"text": "Referring to existing illustrations helps novice drawers to realize their ideas. To find such helpful references from a large image collection, we first build a semantic vector representation of illustrations by training convolutional neural networks. As the proposed vector space correctly reflects the semantic meanings of illustrations, users can efficiently search for references with similar attributes. Besides the search with a single query, a semantic morphing algorithm that searches the intermediate illustrations that gradually connect two queries is proposed. Several experiments were conducted to demonstrate the effectiveness of our methods.",
"title": ""
},
{
"docid": "bf2c7b1d93b6dee024336506fb5a2b32",
"text": "In this paper we present the first public, online demonstration of MaxTract: a tool that converts PDF files containing mathematics into multiple formats including LaTeX, HTML with embedded MathML, and plain text. Using a bespoke PDF parser and image analyser, we directly extract character and font information to use as input for a linear grammar which, in conjunction with specialised drivers, can accurately recognise and reproduce both the two dimensional relationships between symbols in mathematical formulae and the one dimensional relationships present in standard text. The main goals of MaxTract are to provide translation services into standard mathematical markup languages and to add accessibility to mathematical documents on multiple levels. This includes both accessibility in the narrow sense of providing access to content for print-impaired users, such as those with visual impairments, dyslexia or dyspraxia, as well as more generally to enable any user access to the mathematical content at more re-usable levels than merely visual. MaxTract produces output compatible with web browsers, screen readers, and tools such as copy and paste, which is achieved by enriching the regular text with mathematical markup. The output can also be used directly, within the limits of the presentation MathML produced, as machine readable mathematical input to software systems such as Mathematica or Maple.",
"title": ""
},
{
"docid": "783d7251658f9077e05a7b1b9bd60835",
"text": "A method is presented for the representation of (pictures of) faces. Within a specified framework the representation is ideal. This results in the characterization of a face, to within an error bound, by a relatively low-dimensional vector. The method is illustrated in detail by the use of an ensemble of pictures taken for this purpose.",
"title": ""
},
{
"docid": "16995051681cebf1e2dba1484a3f85bf",
"text": "A core problem in learning semantic parsers from denotations is picking out consistent logical forms—those that yield the correct denotation—from a combinatorially large space. To control the search space, previous work relied on restricted set of rules, which limits expressivity. In this paper, we consider a much more expressive class of logical forms, and show how to use dynamic programming to efficiently represent the complete set of consistent logical forms. Expressivity also introduces many more spurious logical forms which are consistent with the correct denotation but do not represent the meaning of the utterance. To address this, we generate fictitious worlds and use crowdsourced denotations on these worlds to filter out spurious logical forms. On the WIKITABLEQUESTIONS dataset, we increase the coverage of answerable questions from 53.5% to 76%, and the additional crowdsourced supervision lets us rule out 92.1% of spurious logical forms.",
"title": ""
},
{
"docid": "8201ba18da15b1acb1e399e99d1fc586",
"text": "Articles in the financial press suggest that institutional investors are overly focused on short-term profitability, leading managers to manipulate earnings fearing that a short-term profit disappointment will lead institutions to liquidate their holdings. This paper shows, however, that the absolute value of discretionary accruals declines with institutional ownership. The result is consistent with managers recognizing that institutional owners are better informed than individual investors, which reduces the perceived benefit of managing accruals. We also find that as institutional ownership increases, stock prices tend to reflect a greater proportion of the information in future earnings relative to current earnings. This result is consistent with institutional investors looking beyond current earnings compared to individual investors. Collectively, the results offer strong evidence that managers do not manipulate earnings due to pressure from institutional investors who are overly focused on short-term profitability.",
"title": ""
},
{
"docid": "2ebb00579fbfbadb07331bd297e658e9",
"text": "There is risk involved in any construction project. A contractor’s quality assurance system is essential in preventing problems and the reoccurrence of problems. This system ensures consistent quality for the contractor’s clients. An evaluation of the quality systems of 15 construction contractors in Saudi Arabia is discussed here. The evaluation was performed against the ISO 9000 standard. The contractors’ quality systems vary in complexity, ranging from an informal inspection and test system to a comprehensive system. The ISO 9000 clauses most often complied with are those dealing with (1) inspection and test status; (2) inspection and testing; (3) control of nonconformance product; and (4) handling, storage, and preservation. The clauses least complied with concern (1) design control; (2) internal auditing; (3) training; and (4) statistical techniques. Documentation of a quality system is scarce for the majority of the contractors.",
"title": ""
},
{
"docid": "2937b605179b3a0f7657f7ddf5dbcf1a",
"text": "This article presents a survey on crowd analysis using computer vision techniques, covering different aspects such as people tracking, crowd density estimation, event detection, validation, and simulation. It also reports how related the areas of computer vision and computer graphics should be to deal with current challenges in crowd analysis.",
"title": ""
},
{
"docid": "ef15ffc5609653488c68364d2ba77149",
"text": "BACKGROUND\nBeneficial effects of probiotics have never been analyzed in an animal shelter.\n\n\nHYPOTHESIS\nDogs and cats housed in an animal shelter and administered a probiotic are less likely to have diarrhea of ≥2 days duration than untreated controls.\n\n\nANIMALS\nTwo hundred and seventeen cats and 182 dogs.\n\n\nMETHODS\nDouble blinded and placebo controlled. Shelter dogs and cats were housed in 2 separate rooms for each species. For 4 weeks, animals in 1 room for each species was fed Enterococcus faecium SF68 while animals in the other room were fed a placebo. After a 1-week washout period, the treatments by room were switched and the study continued an additional 4 weeks. A standardized fecal score system was applied to feces from each animal every day by a blinded individual. Feces of animals with and without diarrhea were evaluated for enteric parasites. Data were analyzed by a generalized linear mixed model using a binomial distribution with treatment being a fixed effect and the room being a random effect.\n\n\nRESULTS\nThe percentage of cats with diarrhea ≥2 days was significantly lower (P = .0297) in the probiotic group (7.4%) when compared with the placebo group (20.7%). Statistical differences between groups of dogs were not detected but diarrhea was uncommon in both groups of dogs during the study.\n\n\nCONCLUSION AND CLINICAL IMPORTANCE\nCats fed SF68 had fewer episodes of diarrhea of ≥2 days when compared with controls suggests the probiotic may have beneficial effects on the gastrointestinal tract.",
"title": ""
},
{
"docid": "bb86cae865113f2907a4cecb5f89453f",
"text": "In this paper, we study the problem of learning from weakly labeled data, where labels of the training examples are incomplete. This includes, for example, (i) semi-supervised learning where labels are partially known; (ii) multi-instance learning where labels are implicitly known; and (iii) clustering where labels are completely unknown. Unlike supervised learning, learning with weak labels involves a difficult Mixed-Integer Programming (MIP) problem. Therefore, it can suffer from poor scalability and may also get stuck in a local minimum. In this paper, we focus on SVMs and propose the WellSVM via a novel label generation strategy. This leads to a convex relaxation of the original MIP, which is at least as tight as existing convex Semi-Definite Programming (SDP) relaxations. Moreover, the WellSVM can be solved via a sequence of SVM subproblems that are much more scalable than previous convex SDP relaxations. Experiments on three weakly labeled learning tasks, namely, (i) semi-supervised learning; (ii) multi-instance learning for locating regions of interest in content-based information retrieval; and (iii) clustering, clearly demonstrate improved performance, and WellSVM is also readily applicable on large data sets.",
"title": ""
},
{
"docid": "df997cfc15654a0c9886d52c4166f649",
"text": "Network embedding aims to represent each node in a network as a low-dimensional feature vector that summarizes the given node's (extended) network neighborhood. The nodes' feature vectors can then be used in various downstream machine learning tasks. Recently, many embedding methods that automatically learn the features of nodes have emerged, such as node2vec and struc2vec, which have been used in tasks such as node classification, link prediction, and node clustering, mainly in the social network domain. There are also other embedding methods that explicitly look at the connections between nodes, i.e., the nodes' network neighborhoods, such as graphlets. Graphlets have been used in many tasks such as network comparison, link prediction, and network clustering, mainly in the computational biology domain. Even though the two types of embedding methods (node2vec/struc2vec versus graphlets) have a similar goal – to represent nodes as feature vectors, no comparisons have been made between them, possibly because they have originated in the different domains. Therefore, in this study, we compare graphlets to node2vec and struc2vec, and we do so in the task of network alignment. In evaluations on synthetic and real-world biological networks, we find that graphlets are both more accurate and faster than node2vec and struc2vec.",
"title": ""
},
{
"docid": "e69dd688041be302ce973e22457622f9",
"text": "In the last decade, supervised deep learning approaches have been extensively employed in visual odometry (VO) applications, which is not feasible in environments where labelled data is not abundant. On the other hand, unsupervised deep learning approaches for localization and mapping in unknown environments from unlabelled data have received comparatively less attention in VO research. In this study, we propose a generative unsupervised learning framework that predicts 6-DoF camera motion and a monocular depth map of the scene from unlabelled RGB image sequences, using deep convolutional Generative Adversarial Networks (GANs). We create a supervisory signal by warping view sequences and assigning the re-projection minimization to the objective loss function that is adopted in multi-view pose estimation and single-view depth generation network. Detailed quantitative and qualitative evaluations of the proposed framework on the KITTI [1] and Cityscapes [2] datasets show that the proposed method outperforms both existing traditional and unsupervised deep VO methods providing better results for both pose estimation and depth recovery.",
"title": ""
},
{
"docid": "0a43496b7fbfeb54a6283fcac438d5dc",
"text": "Enterprise Resource Planning (ERP) has come to mean many things over the last several decades. Divergent applications by practitioners and academics, as well as by researchers in alternative fields of study, have allowed not only for considerable proliferation of information on the topic but also for a considerable amount of confusion regarding the meaning of the term. In reviewing ERP research, two distinct research streams emerge. The first focuses on the fundamental corporate capabilities driving ERP as a strategic concept. A second stream focuses on the details associated with implementing information systems and their relative success and cost. This paper briefly discusses these research streams and suggests some ideas for related future research. Published in the European Journal of Operational Research 146(2), 2003",
"title": ""
},
{
"docid": "893e1e17570e5daa83827d91b1503185",
"text": "We introduce a similarity-based machine learning approach for detecting non-market, adversarial, malicious Android apps. By adversarial, we mean apps designed to avoid detection. Our approach relies on identifying Android applications that are similar to known adversarial Android malware. Similarity is detected statically by computing a similarity score between two apps based on the similarity of their methods. The similarity between methods is computed using the normalized compression distance (NCD) with either the zlib or the bz2 compressor. The NCD calculates the semantic similarity between pairs of methods in the two compared apps. The first app is one of the sample apps in the input dataset, while the second is one of the malicious apps stored in a malware database. All the computed similarity scores are then used as features for training a supervised learning classifier to detect suspicious apps with high similarity scores to the malicious ones in the database.",
"title": ""
},
{
"docid": "c51cb80a1a5afe25b16a5772ccee0e6b",
"text": "Face perception relies on computations carried out in face-selective cortical areas. These areas have been intensively investigated for two decades, and this work has been guided by an influential neural model suggested by Haxby and colleagues in 2000. Here, we review new findings about face-selective areas that suggest the need for modifications and additions to the Haxby model. We suggest a revised framework based on (a) evidence for multiple routes from early visual areas into the face-processing system, (b) information about the temporal characteristics of these areas, (c) indications that the fusiform face area contributes to the perception of changeable aspects of faces, (d) the greatly elevated responses to dynamic compared with static faces in dorsal face-selective brain areas, and (e) the identification of three new anterior face-selective areas. Together, these findings lead us to suggest that face perception depends on two separate pathways: a ventral stream that represents form information and a dorsal stream driven by motion and form information.",
"title": ""
},
{
"docid": "7d4707e90adb42c75b4f84b10fce65c3",
"text": "Sleep is a complex phenomenon that could be understood and assessed at many levels. Sleep could be described at the behavioral level (relative lack of movements and awareness and responsiveness) and at the brain level (based on EEG activity). Sleep could be characterized by its duration, by its distribution during the 24-hr day period, and by its quality (e.g., consolidated versus fragmented). Different methods have been developed to assess various aspects of sleep. This chapter covers the most established and common methods used to assess sleep in infants and children. These methods include polysomnography, videosomnography, actigraphy, direct observations, sleep diaries, and questionnaires. The advantages and disadvantages of each method are highlighted.",
"title": ""
},
{
"docid": "b8377cba1fe8bca54e12b3c707d3cbaf",
"text": "The structure of foot-and-mouth disease virus has been determined at close to atomic resolution by X-ray diffraction without experimental phase information. The virus shows similarities with other picornaviruses but also several unique features. The canyon or pit found in other picornaviruses is absent; this has important implications for cell attachment. The most immunogenic portion of the capsid, which acts as a potent peptide vaccine, forms a disordered protrusion on the virus surface.",
"title": ""
},
{
"docid": "af0a1a8af70423ec09e0bb1e47f2e3f6",
"text": "Infants are experts at playing, with an amazing ability to generate novel structured behaviors in unstructured environments that lack clear extrinsic reward signals. We seek to replicate some of these abilities with a neural network that implements curiosity-driven intrinsic motivation. Using a simple but ecologically naturalistic simulated environment in which the agent can move and interact with objects it sees, the agent learns a world model predicting the dynamic consequences of its actions. Simultaneously, the agent learns to take actions that adversarially challenge the developing world model, pushing the agent to explore novel and informative interactions with its environment. We demonstrate that this policy leads to the self-supervised emergence of a spectrum of complex behaviors, including ego motion prediction, object attention, and object gathering. Moreover, the world model that the agent learns supports improved performance on object dynamics prediction and localization tasks. Our results are a proof-of-principle that computational models of intrinsic motivation might account for key features of developmental visuomotor learning in infants.",
"title": ""
},
{
"docid": "f81430ff3be528c891262ddb8a730699",
"text": "Clustering validation has long been recognized as one of the vital issues essential to the success of clustering applications. In general, clustering validation can be categorized into two classes, external clustering validation and internal clustering validation. In this paper, we focus on internal clustering validation and present a study of 11 widely used internal clustering validation measures for crisp clustering. The results of this study indicate that these existing measures have certain limitations in different application scenarios. As an alternative choice, we propose a new internal clustering validation measure, named clustering validation index based on nearest neighbors (CVNN), which is based on the notion of nearest neighbors. This measure can dynamically select multiple objects as representatives for different clusters in different situations. Experimental results show that CVNN outperforms the existing measures on both synthetic data and real-world data in different application scenarios.",
"title": ""
},
{
"docid": "88c1ab7e817118ee01fb28bf32ed2e23",
"text": "Field experiment was conducted on fodder maize to explore the potential of integrated use of chemical, organic and biofertilizers for improving maize growth, beneficial microflora in the rhizosphere and the economic returns. The treatments were designed to make comparison of NPK fertilizer with different combinations of half dose of NP with organic and biofertilizers viz. biological potassium fertilizer (BPF), Biopower, effective microorganisms (EM) and green force compost (GFC). Data reflected maximum crop growth in terms of plant height, leaf area and fresh biomass with the treatment of full NPK; it was followed by BPF+full NP. The highest uptake of NPK nutrients by the crop was recorded as: N under half NP+Biopower; P in BPF+full NP; and K from full NPK. The rhizosphere microflora enumeration revealed that Biopower+EM applied along with half dose of GFC soil conditioner (SC) or NP fertilizer gave the highest count of N-fixing bacteria (Azotobacter, Azospirillum, Azoarcus and Zoogloea). Regarding the P-solubilizing bacteria, Bacillus had the maximum population with Biopower+BPF+half NP, and Pseudomonas under the Biopower+EM+half NP treatment. It was concluded that integration of half dose of NP fertilizer with Biopower+BPF/EM can give a crop yield similar to that with the full rate of NP fertilizer; and through reduced use of fertilizers the production cost is minimized and the net return maximized. However, the integration of half dose of NP fertilizer with biofertilizers and compost did not give maize fodder growth and yield comparable to that from the full dose of NPK fertilizers.",
"title": ""
}
] | scidocsrr |
bf196c07caa42433785f19ffcfa75c80 | Artificial Neural Networks' Applications in Management | [
{
"docid": "267f3d176f849bf24dfab7e78d93b153",
"text": "The long-running debate between the ‘rational design’ and ‘emergent process’ schools of strategy formation has involved caricatures of firms’ strategic planning processes, but little empirical evidence of whether and how companies plan. Despite the presumption that environmental turbulence renders conventional strategic planning all but impossible, the evidence from the corporate sector suggests that reports of the demise of strategic planning are greatly exaggerated. The goal of this paper is to fill this empirical gap by describing the characteristics of the strategic planning systems of multinational, multibusiness companies faced with volatile, unpredictable business environments. In-depth case studies of the planning systems of eight of the world’s largest oil companies identified fundamental changes in the nature and role of strategic planning since the end of the 1970s. The findings point to a possible reconciliation of ‘design’ and ‘process’ approaches to strategy formulation. The study pointed to a process of planned emergence in which strategic planning systems provided a mechanism for coordinating decentralized strategy formulation within a structure of demanding performance targets and clear corporate guidelines. The study shows that these planning systems fostered adaptation and responsiveness, but showed limited innovation and analytical sophistication. Copyright 2003 John Wiley & Sons, Ltd.",
"title": ""
}
] | [
{
"docid": "7456842efeebb480c21974f78aea2a9f",
"text": "Connectionist networks that have learned one task can be reused on related tasks in a process that is called \"transfer\". This paper surveys recent work on transfer. A number of distinctions between kinds of transfer are identified, and future directions for research are explored. The study of transfer has a long history in cognitive science. Discoveries about transfer in human cognition can inform applied efforts. Advances in applications can also inform cognitive studies.",
"title": ""
},
{
"docid": "b1202b110ae83980a71b14d9d6fd65cb",
"text": "In modern daily life people need to move, whether in business or leisure, sightseeing or addressing a meeting. Often this is done in familiar environments, but in some cases we need to find our way in unfamiliar scenarios. Visual impairment is a factor that greatly reduces mobility. Currently, the most widespread and used means by the visually impaired people are the white stick and the guide dog; however both present some limitations. With the recent advances in inclusive technology it is possible to extend the support given to people with visual impairment during their mobility. In this context we propose a system, named SmartVision, whose global objective is to give blind users the ability to move around in unfamiliar environments, whether indoor or outdoor, through a user friendly interface that is fed by a geographic information system (GIS). In this paper we propose the development of an electronic white cane that helps moving around, in both indoor and outdoor environments, providing contextualized geographical information using RFID technology.",
"title": ""
},
{
"docid": "c020a3ba9a2615cb5ed9a7e9d5aa3ce0",
"text": "Neural network approaches to Named-Entity Recognition reduce the need for carefully handcrafted features. While some features do remain in state-of-the-art systems, lexical features have been mostly discarded, with the exception of gazetteers. In this work, we show that this is unfair: lexical features are actually quite useful. We propose to embed words and entity types into a lowdimensional vector space we train from annotated data produced by distant supervision thanks to Wikipedia. From this, we compute — offline — a feature vector representing each word. When used with a vanilla recurrent neural network model, this representation yields substantial improvements. We establish a new state-of-the-art F1 score of 87.95 on ONTONOTES 5.0, while matching state-of-the-art performance with a F1 score of 91.73 on the over-studied CONLL-2003 dataset.",
"title": ""
},
{
"docid": "5d154a62b22415cbedd165002853315b",
"text": "Unaccompanied immigrant children are a highly vulnerable population, but research into their mental health and psychosocial context remains limited. This study elicited lawyers’ perceptions of the mental health needs of unaccompanied children in U.S. deportation proceedings and their mental health referral practices with this population. A convenience sample of 26 lawyers who work with unaccompanied children completed a semi-structured, online survey. Lawyers surveyed frequently had mental health concerns about their unaccompanied child clients, used clinical and lay terminology to describe symptoms, referred for both expert testimony and treatment purposes, frequently encountered barriers to accessing appropriate services, and expressed interest in mental health training. The results of this study suggest a complex intersection between the legal and mental health needs of unaccompanied children, and the need for further research and improved service provision in support of their wellbeing.",
"title": ""
},
{
"docid": "5bb63d07c8d7c743c505e6fd7df3dc4f",
"text": "XML similarity evaluation has become a central issue in the database and information communities, its applications ranging over document clustering, version control, data integration and ranked retrieval. Various algorithms for comparing hierarchically structured data, XML documents in particular, have been proposed in the literature. Most of them make use of techniques for finding the edit distance between tree structures, XML documents being commonly modeled as Ordered Labeled Trees. Yet, a thorough investigation of current approaches led us to identify several similarity aspects, i.e., sub-tree related structural and semantic similarities, which are not sufficiently addressed while comparing XML documents. In this paper, we provide an integrated and fine-grained comparison framework to deal with both structural and semantic similarities in XML documents (detecting the occurrences and repetitions of structurally and semantically similar sub-trees), and to allow the end-user to adjust the comparison process according to her requirements. Our framework consists of four main modules for i) discovering the structural commonalities between sub-trees, ii) identifying sub-tree semantic resemblances, iii) computing tree-based edit operations costs, and iv) computing tree edit distance. Experimental results demonstrate higher comparison accuracy with respect to alternative methods, while timing experiments reflect the impact of semantic similarity on overall system performance. © 2002 Elsevier Science. All rights reserved.",
"title": ""
},
{
"docid": "5eea47089f84c915005c40547712c617",
"text": "Current views on the neurobiological underpinnings of language are discussed that deviate in a number of ways from the classical Wernicke-Lichtheim-Geschwind model. More areas than Broca's and Wernicke's region are involved in language. Moreover, a division along the axis of language production and language comprehension does not seem to be warranted. Instead, for central aspects of language processing neural infrastructure is shared between production and comprehension. Three different accounts of the role of Broca's area in language are discussed. Arguments are presented in favor of a dynamic network view, in which the functionality of a region is co-determined by the network of regions in which it is embedded at particular moments in time. Finally, core regions of language processing need to interact with other networks (e.g. the attentional networks and the ToM network) to establish full functionality of language and communication.",
"title": ""
},
{
"docid": "d2d16580335dcff2f0d05ca8a43438ef",
"text": "Evolutionary adaptation can be rapid and potentially help species counter stressful conditions or realize ecological opportunities arising from climate change. The challenges are to understand when evolution will occur and to identify potential evolutionary winners as well as losers, such as species lacking adaptive capacity living near physiological limits. Evolutionary processes also need to be incorporated into management programmes designed to minimize biodiversity loss under rapid climate change. These challenges can be met through realistic models of evolutionary change linked to experimental data across a range of taxa.",
"title": ""
},
{
"docid": "7304805b7f5f8d22ef9f3ce02f8954e6",
"text": "A novel inductor switching technique is used to design and implement a wideband LC voltage controlled oscillator (VCO) in 0.13µm CMOS. The VCO has a tuning range of 87.2% between 3.3 and 8.4 GHz with phase noise ranging from −122 to −117.2 dBc/Hz at 1MHz offset. The power varies between 6.5 and 15.4 mW over the tuning range. This results in a Power-Frequency-Tuning Normalized figure of merit (PFTN) between 6.6 and 10.2 dB which is one of the best reported to date.",
"title": ""
},
{
"docid": "c1ee5f717481652d91431f647401d6d2",
"text": "Cluster ensembles have recently emerged as a powerful alternative to standard cluster analysis, aggregating several input data clusterings to generate a single output clustering, with improved robustness and stability. From the early work, these techniques held great promise; however, most of them generate the final solution based on incomplete information of a cluster ensemble. The underlying ensemble-information matrix reflects only cluster-data point relations, while those among clusters are generally overlooked. This paper presents a new link-based approach to improve the conventional matrix. It achieves this using the similarity between clusters that are estimated from a link network model of the ensemble. In particular, three new link-based algorithms are proposed for the underlying similarity assessment. The final clustering result is generated from the refined matrix using two different consensus functions of feature-based and graph-based partitioning. This approach is the first to address and explicitly employ the relationship between input partitions, which has not been emphasized by recent studies of matrix refinement. The effectiveness of the link-based approach is empirically demonstrated over 10 data sets (synthetic and real) and three benchmark evaluation measures. The results suggest the new approach is able to efficiently extract information embedded in the input clusterings, and regularly illustrate higher clustering quality in comparison to several state-of-the-art techniques.",
"title": ""
},
{
"docid": "c435c4106b1b5c90fe3ff607bc0d5f00",
"text": "In recent years, we have witnessed a significant growth of “social computing” services, or online communities where users contribute content in various forms, including images, text or video. Content contribution from members is critical to the viability of these online communities. It is therefore important to understand what drives users to share content with others in such settings. We extend previous literature on user contribution by studying the factors that are associated with users’ photo sharing in an online community, drawing on motivation theories as well as on analysis of basic structural properties. Our results indicate that photo sharing declines in respect to the users’ tenure in the community. We also show that users with higher commitment to the community and greater “structural embeddedness” tend to share more content. We demonstrate that the motivation of self-development is negatively related to photo sharing, and that tenure in the community moderates the effect of self-development on photo sharing. Directions for future research, as well as implications for theory and practice are discussed.",
"title": ""
},
{
"docid": "7e97f234801829afff4d11686428f59f",
"text": "Prior research has linked mindfulness to improvements in attention, and suggested that the effects of mindfulness are particularly pronounced when individuals are cognitively depleted or stressed. Yet, no studies have tested whether mindfulness improves declarative awareness of unexpected stimuli in goal-directed tasks. Participants (N=794) were either depleted (or not) and subsequently underwent a brief mindfulness induction (or not). They then completed an inattentional blindness task during which an unexpected distractor appeared on the computer monitor. This task was used to assess declarative conscious awareness of the unexpected distractor's presence and the extent to which its perceptual properties were encoded. Mindfulness increased awareness of the unexpected distractor (i.e., reduced rates of inattentional blindness). Contrary to predictions, no mindfulness×depletion interaction emerged. Depletion however, increased perceptual encoding of the distractor. These results suggest that mindfulness may foster awareness of unexpected stimuli (i.e., reduce inattentional blindness).",
"title": ""
},
{
"docid": "c721f79d7c20210b4ee388ecb75f241f",
"text": "The noble aim behind this project is to study and capture natural eye movement detection and to apply it as an assistive application for paralyzed patients who cannot speak or use their hands, owing to diseases such as amyotrophic lateral sclerosis (ALS), Guillain-Barre syndrome, quadriplegia and hemiparesis, using the electrophysiological signals generated by the voluntary contractions of the muscles around the eye. The proposed system is based on the design and application of an efficient electrooculogram (EOG) based human–computer interface (HCI). Establishing an alternative channel that requires neither speech nor hand movements is important in increasing the quality of life of the handicapped. EOG-based systems are more efficient than electroencephalogram (EEG)-based systems owing to easier acquisition, higher amplitude, and easier classification. By using a virtual-keyboard-like graphical user interface, it is possible to notify the needs of the patient in writing in a relatively short time. Considering the pitfalls of biopotential measurement, the novel EOG-based HCI system allows people to successfully communicate with their environment by using only eye movements. [1] Classification of horizontal and vertical EOG channel signals in an efficient interface is realized in this study; the nearest-neighbour algorithm is used to classify the signals. The novel EOG-based HCI system allows people to successfully and economically communicate with their environment by using only eye movements. [2] Electrooculography is a method of tracking ocular movement, based on the voltage changes that occur due to changes in the spatial orientation of the eye dipole. The resulting signal has a myriad of possible applications. [2] In phase one of this dissertation, the goal was to study eye movements and the respective signal generation, EOG signal acquisition, and a man–machine interface that makes use of this signal. As per this goal, we studied eye movements and designed a simple EOG acquisition circuit, obtaining an efficient signal output on an oscilloscope. The results up to the present stage lead us towards the design of a novel assistive device for paralyzed patients. Thus, we set out to create an interface to be used by mobility-impaired patients, allowing them to use their eyes to call a nurse or attendant and to make other requests. Keywords— electrooculogram, natural eye movement detection, EOG acquisition & signal conditioning, eye-based computer interface GUI, assistive device for the paralysed, eye movement recognition",
"title": ""
},
{
"docid": "67c8047fbb9e027f92910c4a4f93347a",
"text": "Mastocytosis is a rare, heterogeneous disease of complex etiology, characterized by a marked increase in mast cell density in the skin, bone marrow, liver, spleen, gastrointestinal mucosa and lymph nodes. The most frequent site of organ involvement is the skin. Cutaneous lesions include urticaria pigmentosa, mastocytoma, diffuse and erythematous cutaneous mastocytosis, and telangiectasia macularis eruptiva perstans. Human mast cells originate from CD34 progenitors, under the influence of stem cell factor (SCF); a substantial number of patients exhibit activating mutations in c-kit, the receptor for SCF. Mast cells can synthesize a variety of cytokines that could affect the skeletal system, increasing perforating bone resorption and leading to osteoporosis. The coexistence of hematologic disorders, such as myeloproliferative or myelodysplastic syndromes, or of lymphoreticular malignancies, is common. Compared with radiographs, Tc-99m methylenediphosphonate (MDP) scintigraphy is better able to show the widespread skeletal involvement in patients with diffuse disease. T1-weighted MR imaging is a sensitive technique for detecting marrow abnormalities in patients with systemic mastocytosis, showing several different patterns of marrow involvement. We report the imaging findings of a 36-year-old male with well-documented urticaria pigmentosa. In order to evaluate mastocytic bone marrow involvement, 99mTc-MDP scintigraphy and T1-weighted spin echo and short tau inversion recovery MRI at 1.0 T were performed. Both scan findings were consistent with marrow hyperactivity. Thus, the combined use of bone scan and MRI may be useful in order to recognize marrow involvement in suspected systemic mastocytosis, perhaps avoiding bone biopsy.",
"title": ""
},
{
"docid": "6a3cc8319b7a195ce7ec05a70ad48c7a",
"text": "Image caption generation is the problem of generating a descriptive sentence for an image. Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. This paper presents a brief survey of some technical aspects of, and methods for, description generation for images. There has been great interest in the research community in automatic ways to retrieve images based on content. A number of techniques have been used to solve this problem, and the purpose of this paper is to give an overview of many of these approaches and of the databases used for description generation. Finally, we discuss open challenges and future directions for upcoming researchers.",
"title": ""
},
{
"docid": "85cf0bddbedc5836f41033a16274c1e2",
"text": "Intuitively, for a training sample xi with its associated label yi, a deep model is getting closer to the correct answer in the higher layers. It starts with the difficult job of classifying xi, which becomes easier as the higher layers distill xi into a representation that is easier to classify. One might be tempted to say that this means that the higher layers have more information about the ground truth, but this would be incorrect.",
"title": ""
},
{
"docid": "6f0faf1a90d9f9b19fb2e122a26a0f77",
"text": "Social media shatters the barrier to communicate anytime anywhere for people of all walks of life. The publicly available, virtually free information in social media poses a new challenge to consumers who have to discern whether a piece of information published in social media is reliable. For example, it can be difficult to understand the motivations behind a statement passed from one user to another, without knowing the person who originated the message. Additionally, false information can be propagated through social media, resulting in embarrassment or irreversible damages. Provenance data associated with a social media statement can help dispel rumors, clarify opinions, and confirm facts. However, provenance data about social media statements is not readily available to users today. Currently, providing this data to users requires changing the social media infrastructure or offering subscription services. Taking advantage of social media features, research in this nascent field spearheads the search for a way to provide provenance data to social media users, thus leveraging social media itself by mining it for the provenance data. Searching for provenance data reveals an interesting problem space requiring the development and application of new metrics in order to provide meaningful provenance data to social media users. This lecture reviews the current research on information provenance, explores exciting research opportunities to address pressing needs, and shows how data mining can enable a social media user to make informed judgements about statements published in social media.",
"title": ""
},
{
"docid": "3a18976245cfc4b50e97aadf304ef913",
"text": "Key-Value Stores (KVS) are becoming increasingly popular because they scale up and down elastically, sustain high throughputs for get/put workloads and have low latencies. KVS owe these advantages to their simplicity. This simplicity, however, comes at a cost: It is expensive to process complex, analytical queries on top of a KVS because today’s generation of KVS does not support an efficient way to scan the data. The problem is that there are conflicting goals when designing a KVS for analytical queries and for simple get/put workloads: Analytical queries require high locality and a compact representation of data whereas elastic get/put workloads require sparse indexes. This paper shows that it is possible to have it all, with reasonable compromises. We studied the KVS design space and built TellStore, a distributed KVS, that performs almost as well as state-of-the-art KVS for get/put workloads and orders of magnitude better for analytical and mixed workloads. This paper presents the results of comprehensive experiments with an extended version of the YCSB benchmark and a workload from the telecommunication industry.",
"title": ""
},
{
"docid": "5d35e34a5db727917e5105f857c174be",
"text": "Human face feature extraction from digital images is a vital element of several applications, such as identification and facial recognition, medical applications, video games, cosmetology, etc. The skin pores are a very important element of the structure of the skin. A novel method is proposed that decomposes a photograph of a human face from a digital (RGB) image into two layers, melanin and hemoglobin. From the melanin layer, the main pores of the face can be obtained, as well as the centroids of each of them. It has been found that the pore configuration of the skin is invariant and unique for each individual. Therefore, from the localization of the pores of a human face, it is possible to use them for diverse applications in the fields of pattern",
"title": ""
},
{
"docid": "9779a5ac2ada20f0ccd5751b0784e9cc",
"text": "Early-stage romantic love can induce euphoria, is a cross-cultural phenomenon, and is possibly a developed form of a mammalian drive to pursue preferred mates. It has an important influence on social behaviors that have reproductive and genetic consequences. To determine which reward and motivation systems may be involved, we used functional magnetic resonance imaging and studied 10 women and 7 men who were intensely \"in love\" from 1 to 17 mo. Participants alternately viewed a photograph of their beloved and a photograph of a familiar individual, interspersed with a distraction-attention task. Group activation specific to the beloved under the two control conditions occurred in dopamine-rich areas associated with mammalian reward and motivation, namely the right ventral tegmental area and the right postero-dorsal body and medial caudate nucleus. Activation in the left ventral tegmental area was correlated with facial attractiveness scores. Activation in the right anteromedial caudate was correlated with questionnaire scores that quantified intensity of romantic passion. In the left insula-putamen-globus pallidus, activation correlated with trait affect intensity. The results suggest that romantic love uses subcortical reward and motivation systems to focus on a specific individual, that limbic cortical regions process individual emotion factors, and that there is localization heterogeneity for reward functions in the human brain.",
"title": ""
},
{
"docid": "503756888df43d745e4fb5051f8855fb",
"text": "The widespread use of email has raised serious privacy concerns. A critical issue is how to prevent email information leaks, i.e., when a message is accidentally addressed to non-desired recipients. This is an increasingly common problem that can severely harm individuals and corporations — for instance, a single email leak can potentially cause expensive law suits, brand reputation damage, negotiation setbacks and severe financial losses. In this paper we present the first attempt to solve this problem. We begin by redefining it as an outlier detection task, where the unintended recipients are the outliers. Then we combine real email examples (from the Enron Corpus) with carefully simulated leak-recipients to learn textual and network patterns associated with email leaks. This method was able to detect email leaks in almost 82% of the test cases, significantly outperforming all other baselines. More importantly, in a separate set of experiments we applied the proposed method to the task of finding real cases of email leaks. The result was encouraging: a variation of the proposed technique was consistently successful in finding two real cases of email leaks. Not only does this paper introduce the important problem of email leak detection, but also presents an effective solution that can be easily implemented in any email client — with no changes in the email server side.",
"title": ""
}
] | scidocsrr |